UN human rights chief calls for moratorium on AI technologies


The United Nations (UN) High Commissioner for Human Rights has urgently called for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights.

Michelle Bachelet – a former president of Chile who has served as UN high commissioner for human rights since September 2018 – said a moratorium should remain in place at least until adequate safeguards are implemented, and also called for an outright ban on AI applications that cannot be used in compliance with international human rights law.

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our time,” Bachelet said in a statement. “But AI technologies can have negative and even catastrophic effects if used without sufficient consideration of how they affect people’s human rights.

“Artificial intelligence now reaches almost every corner of our physical and mental life and even our emotional states. AI systems are used to determine who gets public services, decide who has a chance of being hired for a job and, of course, they affect the information people see and can share online.

“With the rapid and continued growth of AI, closing the huge accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face.”

Bachelet’s comments coincide with the release of a report (designated A/HRC/48/31) by the UN Human Rights Office, which analyzes how AI affects people’s rights to privacy, health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

The report found that states and businesses have often rushed to deploy AI systems, generally failing to conduct due diligence on how those systems affect human rights.

“The goal of human rights due diligence processes is to identify, assess, prevent and mitigate adverse human rights impacts that an entity may cause or to which it may contribute or be directly linked,” the report says, adding that due diligence should be carried out throughout the lifecycle of an AI system.

“Where due diligence processes reveal that a use of AI is incompatible with human rights, owing to a lack of meaningful ways of mitigating harms, that form of use should not be pursued,” it said.

The report also noted that the data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant – presenting particularly acute risks for already marginalized groups – and is often shared, merged and analyzed in opaque ways by states and businesses.

As such, it said special attention is needed in situations where there is a “close connection” between a state and a tech company, both of which need to be more transparent about how they develop and deploy AI.

“The state is a significant economic actor that can shape how AI is developed and used, beyond the state’s role in legal and policy measures,” the UN report said. “When states work with AI developers and service providers from the private sector, states should take additional steps to ensure that AI is not used for ends inconsistent with human rights law.

“When states act as economic actors, they remain the primary duty-bearer under international human rights law and must proactively meet their obligations. At the same time, businesses remain responsible for respecting human rights when collaborating with states, and should seek ways to honor human rights when faced with state demands that conflict with human rights law.”

It added that when states rely on businesses to deliver public goods or services, they need to provide oversight of the development and deployment process, which can be done by requiring and assessing information about the accuracy and risks of an AI application.

In the UK, for example, the Metropolitan Police Service (MPS) and South Wales Police (SWP) use a facial recognition system called NeoFace Live, which was developed by the Japanese company NEC Corporation.

However, in August 2020, the Court of Appeal ruled that the SWP’s use of the technology was unlawful – a ruling based in part on the force’s failure to fulfil its public sector equality duty to consider how its policies and practices could be discriminatory.

The court ruling said: “For reasons of commercial confidentiality, the manufacturer is not prepared to disclose details so that it can be tested. This may be understandable but, in our opinion, it does not enable a public authority to discharge its own, non-delegable, duty.”

The UN report added that “intentional secrecy from government and private actors” undermines public efforts to understand the effects of AI systems on human rights.

Commenting on the report’s findings, Bachelet said: “We cannot afford to continue playing catch-up with AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact.

“The power of AI to serve people is undeniable, but so too is AI’s ability to fuel human rights violations on a massive scale with virtually no visibility. We must act now to put human rights safeguards on the use of AI, for the good of all.”

The European Commission has already begun to address the regulation of AI, releasing its draft Artificial Intelligence Act (AIA) in April 2021.

However, experts and digital civil rights organizations told Computer Weekly that although the regulation is a step in the right direction, it fails to address the fundamental power imbalances between those who develop and deploy the technology and those who are subject to it.

They claimed that, ultimately, the proposal will do little to mitigate the worst abuses of AI technology and will essentially act as a green light for a number of high-risk use cases, owing to its emphasis on technical standards and mitigating human rights risks.

In August 2021 – following Forbidden Stories and Amnesty International’s exposé of how NSO Group’s Pegasus spyware was used to conduct widespread surveillance of hundreds of mobile devices – a number of UN special rapporteurs called on all states to impose a global moratorium on the sale and transfer of “potentially lethal” surveillance technology.

They warned that it was “highly dangerous and irresponsible” to allow the surveillance technology sector to become a “human rights-free zone”, adding: “Such practices violate the rights to freedom of expression, privacy and liberty, potentially endanger the lives of hundreds of individuals, imperil media freedom, and undermine democracy, peace, security and international cooperation.”

