Artificial Intelligence (AI) offers the means to dramatically expand the capabilities of government-run surveillance systems. But as individuals become more intensively surveyed, the extent of privacy they enjoy shrinks. We maintain that governments and the makers of AI surveillance technologies are morally obliged to respect people’s privacy as a universal human right. This requires instituting significant checks on surveillance regimes.
What is Privacy?
Within legal and philosophical literature, privacy has been defined in many ways. This speaks in favour of understanding privacy as a “cluster concept” with various dimensions. Philosopher Anita L. Allen outlines four dimensions of privacy (Allen 2005: 485):
- Decisional privacy: freedom from outside interference with personal decisions;
- Physical privacy: seclusion, solitude, and bodily integrity;
- Informational privacy: confidentiality, anonymity, data protection, the secrecy of personal facts;
- Proprietary privacy: limits on the use of a person’s name, likeness, or other attributes of identity.
The Human Right to Privacy
A human right to privacy is affirmed in all international and regional human rights instruments, including the Universal Declaration of Human Rights (Article 12), the International Covenant on Civil and Political Rights (Article 17), the African Union Principles on Freedom of Expression (Article 4), the American Convention on Human Rights (Article 11), the Arab Charter on Human Rights (Articles 16 and 21), the European Convention on Human Rights (Article 8), and the ASEAN Human Rights Declaration (Article 21) (Privacy International).
In agreement with these instruments, we argue that a human right to privacy is a moral entitlement owed to all human beings. Nearly all human beings have an interest in privacy, and consequently, the flourishing of human societies depends on the presence of norms that enable individuals to secure a condition of privacy. A large body of ethnographic research suggests that although the socially expected scope of privacy may vary across cultures, people everywhere display an interest in maintaining privacy (Westin 1967; Altman 1977; Moore 2003). Even in societies where people appear to live with minimal privacy, there exist practices that empower individuals to limit contact with others and to control what others know about them. For instance: the Mehinacu of Brazil observed long periods of seclusion, had strong prohibitions against asking embarrassing questions of each other, and they regularly lied to avoid revealing sensitive information (Altman 1977: 73).
AI and Threats to Privacy
Given privacy’s importance, the threats to privacy emerging from AI-augmented surveillance technologies must be addressed. China’s surveillance of the Muslim-majority Uighur population in the Xinjiang region demonstrates how egregiously the right to privacy can be infringed upon with the help of AI. Chinese authorities use AI-based facial recognition systems to search surveillance camera footage and track Uighurs based on their ethnically distinctive facial appearance. The facial recognition technologies can raise red flags when Uighurs congregate, and can tag the faces of any Uighurs who leave Xinjiang. These surveillance technologies support a well-documented regime of repression to which the Uighurs have been subjected by the Chinese government. Over 1 million Uighurs have been interned in indoctrination camps, where detainees are separated from their families, forced to renounce their faith, and deprived of food and sleep.
Surveillance can potentially improve law enforcement and terrorism prevention efforts, making societies generally safer. From this premise, some argue that we should tolerate reductions in privacy for the sake of greater security (Himma 2016). Yet in many ways, privacy protections are essential to the security of individuals and communities. They operate as safeguards against bad actors who would use surveillance to conduct violence and repression (cf. Moore 2016). The current plight of Chinese Uighurs is a case in point: they endure a state of extreme insecurity because their government has so thoroughly stripped them of privacy.
Overcoming the Threats
Some companies at the forefront of AI research and development, like Google and Microsoft, have drafted lists of principles to guide the design, production, and use of AI. Their stated principles include a commitment to privacy (Microsoft: “AI systems should be secure and respect privacy,” and Google: “We believe that AI should … incorporate privacy design principles”), among other values like fairness, reliability, safety, and accountability. However, these statements do not explain what might constitute a violation of privacy, or how the companies intend to balance different values in situations where they compete. For example, they leave undetermined what should be done when maintaining privacy conflicts with safety concerns.
Ethical guidelines for AI technologies need to be more specific and concrete. Rather than a vague promise to “incorporate privacy design principles,” companies and institutions should specify what reasons constitute legitimate or illegitimate invasions of an individual’s privacy. For instance, keeping people safe is legitimate, whereas maintaining one-party rule, eliminating cultural or religious differences, and quashing dissent are not. Importantly, even if surveillance does facilitate greater security for most people, the costs of surveillance should not be borne disproportionately by innocent minorities.
Transparency is vital. Without the possibility of external review, particularly judicial review, governments and companies are free to decide for themselves where to draw the line between legitimate and illegitimate invasions of privacy. Leaving the decision in their hands leaves people unjustifiably vulnerable, and thus we recommend that regulation and oversight of the use of surveillance technologies be conducted in a manner transparent to the public. These measures will help to sustain an open, informed debate on how to balance privacy and security.
Standing for Privacy
AI surveillance technologies are tempting governments to keep an ever-closer watch over private individuals, whether it be in service of security or authoritarian control. Likewise, companies manufacturing these technologies will be tempted to supply the growing demand for AI surveillance, without asking crucial questions about how their clients will use it. These temptations should be checked. The Uighurs’ experience is a grim reminder of how surveillance can make people less secure when it’s used as an instrument of repression. From transparent public oversight of governments’ surveillance practices to the self-policing codes of ethically conscientious manufacturers, a panoply of mechanisms is needed to safeguard the human right to privacy worldwide.
Andres Carlos Luco and Kathryn Muyskens – NTU