iPal smart AI robots for children's education are displayed at the AvatarMind booth at the CES 2019 consumer electronics show at the Las Vegas Convention Center in Las Vegas, Nevada, on Jan. 8, 2019. © Robyn Beck/AFP via Getty Images



Comment: Some common sense is actually on display from a UN representative. We're floored.


The United Nations has warned that artificial intelligence (AI) systems may pose a "negative, even catastrophic" threat to human rights, and has called for a ban on AI applications that cannot be used in compliance with human rights.

U.N. human rights chief Michelle Bachelet on Sept. 15 urged member states to put a temporary ban on the sale and use of AI until the potential risks it poses have been addressed and adequate safeguards have been put in place to ensure the technology will not be abused.

"We cannot afford to continue playing catch-up regarding AI, allowing its use with limited or no boundaries or oversight and dealing with the almost inevitable human rights consequences after the fact," Bachelet said in a statement.

"The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us," the human rights chief added.

Her remarks come shortly after her office published a report that analyzes how AI affects people's right to privacy, as well as a string of other rights regarding health, education, freedom of movement, and freedom of expression, among others.

The document includes an assessment of profiling, automated decision-making, and other machine-learning technologies.

While the report notes that AI can be a force for good and can help "societies overcome some of the great challenges of our times," its use as a forecasting and profiling tool can drastically impact "rights to privacy, to a fair trial, to freedom from arbitrary arrest and detention and the right to life."

According to the report, states and businesses alike often fail to carry out due diligence while rushing to incorporate AI applications. In some cases, this has resulted in dangerous blunders, with some people reportedly being mistreated and even arrested because of flawed facial recognition software.

Meanwhile, facial recognition has the potential to allow for unlimited tracking of individuals, which may well lead to an array of issues surrounding discrimination and data protection.

An AI robot by CloudMinds is seen during the Mobile World Conference in Shanghai on June 27, 2018. © AFP/Getty Images

Visitors look at an AI smart city system by iFLY at the 2018 International Intelligent Transportation Industry Expo in Hangzhou, in China's eastern Zhejiang province, in December 2018. © STR/AFP/Getty Images
Because many AI systems rely on large data sets, further questions arise about how that data is stored over the long term, and there is potential for it to be exploited in the future, which could pose significant national security risks.

"The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society," the report states.

Tim Engelhardt, a human rights officer in the Rule of Law and Democracy Section, warned that the situation is "dire" and that it has only become worse over the years as some countries and businesses adopt AI applications while failing to research the multiple potential risks associated with the technology.

While he welcomed the EU's agreement to "strengthen the rules on control," he noted that a solution to the myriad issues surrounding AI won't come in the next year, and that the first steps to resolve these issues need to be taken now or "many people in the world will pay a high price."

"The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be," Bachelet added.

The report and Bachelet's comments come following July's revelations that spyware, known as Pegasus, was used to hack the smartphones of thousands of people around the world, including journalists, government officials, and human rights activists.

The phone of France's finance minister Bruno Le Maire was just one of many under investigation in connection with the hack, which was carried out via spyware developed by the Israeli company NSO Group.

NSO Group issued a statement to multiple outlets that did not address the allegations, but said that the company will "continue to provide intelligence and law enforcement agencies around the world with life-saving technologies to fight terror and crime."

Speaking at the Council of Europe hearing on the implications stemming from the Pegasus spyware controversy, Bachelet said the revelations came as no surprise, given the "unprecedented level of surveillance across the globe by state and private actors."