By Philip Chen | 2022-06-10 19:09:49

The rapid evolution of artificial-intelligence-based technologies and their adoption by businesses and governments have outpaced efforts to hold them to human rights standards, Michelle Bachelet, the United Nations High Commissioner for Human Rights, warned Wednesday.

She called for a moratorium on artificial intelligence systems that could put human rights at risk — at least until stronger safeguards are in place internationally.

“We cannot afford to continue playing catch-up regarding AI — allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact,” she said in a statement.

The remarks came alongside the publication of a report by the U.N. human rights office analyzing the human rights risks posed by a range of AI-powered technologies — including profiling, automated decision-making and machine learning. The consequences of the unfettered proliferation of such technologies could be “catastrophic,” Bachelet said.

The report also pointed out that the data sets used to train AI systems can embed historical racial and ethnic biases, which can perpetuate or even amplify discrimination.

Many AI tools seek to predict outcomes, assess risk and provide insights into patterns of behavior on an individual or societal scale. The report warned of a “digital welfare dystopia” in which data-matching could automate decisions about entitlement to welfare benefits, access to loans or home visits by child-care services — with serious human rights implications.

Technologies used by law enforcement agencies, including national security and border management agencies, are particularly fraught. AI systems can mine criminal arrest records, crime statistics, social media posts and travel records to profile people and identify sites of increased criminal or even terrorist activity, triggering criminal justice interventions, “even though AI assessments by themselves should not be seen as a basis for reasonable suspicion,” the report argued.

Bachelet did not call for an outright ban on facial recognition technology — the scanning of human features including faces, fingerprints, irises and voices to identify individuals — but urged that a moratorium be imposed on the use of real-time remote biometric recognition until rights provisions can be agreed upon.

The report did not single out any countries by name, but AI technologies in some places around the world have caused alarm over human rights in recent years, according to experts.

China has been sharply criticized for conducting mass surveillance that uses AI technology with few checks — particularly in the Xinjiang region, where the Chinese Communist Party has for decades systematically suppressed and sought to assimilate the mainly Muslim Uyghur ethnic minority group.

The Chinese tech giant Huawei tested AI systems, using facial recognition technology, that would send automated “Uyghur alarms” to police once a camera detected a member of the minority group, The Washington Post reported last year. Huawei responded that the language used to describe the capability had been “completely unacceptable,” yet the company had advertised ethnicity-tracking efforts.

Technology can enable authorities to systematically identify and track individuals in public spaces, affecting the rights to freedom of expression, peaceful assembly and movement, Bachelet said.

Fear of such surveillance affected protesters in Myanmar this year, Reuters reported. In March, Human Rights Watch criticized the Myanmar military junta’s use of a public camera system, provided by Huawei, that used facial and license plate recognition to alert the government to individuals on a “wanted list.”

Human Rights Watch last year denounced a system in Buenos Aires that published personal data, including photos, of child suspects with open arrest warrants. The system, which used facial recognition software, operated in some city subway stations, the organization said.

Bachelet’s statement echoed growing global concerns. The city of Portland, Ore., last September passed a broad ban on facial recognition technology, including uses by local police. The European Commission in April proposed a ban on the use of AI for tracking individuals and ranking their actions. Amnesty International launched the “Ban the Scan” initiative to prohibit the use of facial recognition by New York City government agencies.

“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility,” Bachelet said, calling for greater transparency, systematic assessment and monitoring of the effects of the use of AI. “Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”