
Governments Aren’t Yet Serious About AI’s Risk to Human Rights

In the rush to develop national strategies on artificial intelligence, a new report finds, most governments pay lip service to civil liberties.

Image: World flags flying in the wind (Getty Images/iStockphoto)

A new analysis shows governments aren't doing enough to protect human rights in their AI strategies.

More than 25 governments around the world, including the United States and countries across the European Union, have adopted elaborate national strategies on artificial intelligence — how to spur research; how to target strategic sectors; how to make AI systems reliable and accountable.

Yet a new analysis finds that almost none of these declarations provide more than a polite nod to human rights, even though artificial intelligence has potentially big impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.

That’s a mistake, says Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, which produced the report in conjunction with a leading international digital rights organization called Global Partners Digital.

“Many people are unaware that there are authoritarian-leaning governments, with China leading the way, that would love to see the international human rights framework go into the dustbin of history,” Donahoe says.

For all the good that AI can accomplish, she cautions, it can also be a tool to undermine rights as basic as freedom of speech and assembly. The report calls on governments to make explicit commitments: first, to analyze the human rights risks of AI across all agencies and the private sector, and at every stage of development; second, to set up ways of reducing those risks; and third, to establish consequences and vehicles for remediation when rights are jeopardized.

Human Rights Risks

“My focal point is on civil political rights and, in that regard, the foundational right that’s at risk from AI is privacy,” Donahoe says. “AI systems are built on and fed with data. If everything you say and do is tracked and monitored, that will have a chilling effect on what you feel free to say, where you feel free to go, and with whom you feel free to meet. If you’re a dissident, it will affect your ability to criticize the government. And that’s the whole point of mass surveillance for an authoritarian government — that people will self-regulate and self-censor. Loss of privacy leads directly to risks to the freedoms of assembly, association, and expression.”

The risks aren’t hypothetical, she says. Local and national governments already use facial recognition technology and ubiquitous camera surveillance to identify suspected criminals, and sometimes even political protesters. In China, some cities have installed cameras outside houses and apartments, apparently to monitor people in quarantine because of the coronavirus. The U.S. Department of Homeland Security has begun installing facial recognition systems at airport check-ins and gates, potentially adding millions of people a day to a federal database of faces and identities.

Artificial intelligence and machine learning systems also pose risks of arbitrary discrimination. Hiring systems that sort through job applications and resumes, for example, have been found to incorporate previous discriminatory practices against minorities and women. The same problems have arisen with AI systems that some courts use for decisions about bail, sentencing and parole.

Developing a Real Response

The new study, which analyzed more than two dozen national AI strategies, found that most of the governments did acknowledge ethical concerns and human rights risks. The U.S. strategy has a chapter on ethical issues, which begins with a call for protecting “civil liberties, privacy and American values.”

But the researchers found that very few governments made explicit commitments to conduct systematic human rights-based analysis of the potential risks, much less to reduce them or impose consequences when rights are violated. The report notes that Norway, Germany, Denmark, and the Netherlands took pains to emphasize human rights in their strategies, but it suggests that none of the governments has yet translated those abstract commitments into concrete and systematic plans.

The report argues that governments and companies should explicitly commit themselves to performing a human rights impact analysis in every sector and for every new application of AI. To avoid endless debates about the definition of “human rights,” the authors argue, governments should base their assessments on well-established international frameworks, such as the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights.

The next step, says Donahoe, will be for governments to actually ensure that those assessments are carried out, and then to set up institutions or mechanisms for mitigating the risks and remedying violations when they occur.

“In all but a very small number of cases, there was a lack of depth and specificity on how human rights should be protected,” the report notes. Without clear and specific commitments, the report warns, “even the strongest language will only be words.”

Donahoe cautions that no national strategy can anticipate every risk, adding that many situations require trade-offs. Many people, for example, would support digital contact tracing systems if they were used only to slow the coronavirus pandemic. Many would oppose those tools, however, if their data were also shared with law enforcement or immigration authorities.

“The key principles here are necessity, proportionality, and legality,” Donahoe says. Is the tracking technology necessary for addressing a genuine public danger? Is the intrusion on privacy proportionate to the benefits, and are there less intrusive alternatives? And is the government’s goal truly legitimate?

Donahoe acknowledges that it won’t be easy to get the details right. The state of Washington recently enacted new restrictions on how state and local agencies use facial recognition. The new law requires agencies to set up mechanisms aimed at ensuring transparency and accountability, and it requires that systems be tested for possible racial or ethnic biases. Civil rights and privacy advocates, however, said the measure fell short of what was necessary.

“We are in the early days when it comes to public understanding of the value of human rights impact assessments for AI,” Donahoe says. “The international human rights framework provides a globally recognized, universally applicable set of norms that nations around the world can incorporate into their national strategies. Much more work needs to be done, however, to articulate how to apply this framework to the AI applications and systems being deployed throughout society. The goal of this report was to remind governments that their existing human rights commitments already provide the normative foundation for assessing the impact of AI.”   
