Neither Indifference Nor Essentialism: The Challenges of Building Globally Inclusive AI

AI bias has featured prominently in the news in recent years. Whether it is image recognition technology from Google labelling black people as “gorillas”, a recruiting algorithm developed and then scrapped by Amazon for discriminating against women, or a chatbot by Microsoft rapidly learning to engage in racist and misogynistic hate speech on Twitter, there is no shortage of examples of AI imbibing and reproducing the prejudices of society at large.

In response, there has been a multitude of calls and promises by industry leaders and corporations to build inclusive AI. Last September, IBM released its AI Fairness 360 toolkit for bias mitigation in machine learning, while Facebook announced in May its three-part plan to build more inclusive AI. All of this represents a step in the right direction. But a phrase like “inclusive AI” can be interpreted in many ways. The discussion in English-speaking media has tended to focus on gender and race in a US-centred context. But what does inclusion look like in an international context, or in the highly multicultural societies of Singapore and Southeast Asia?
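To make “bias mitigation” concrete, here is a minimal sketch – in plain NumPy rather than the AI Fairness 360 API itself – of reweighing, one of the pre-processing techniques such toolkits implement: each training example is weighted so that the protected attribute becomes statistically independent of the label. The variable names and toy data are purely illustrative.

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Per-example weights that make `groups` statistically independent
    of `labels`: w[i] = P(g_i) * P(y_i) / P(g_i, y_i)."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    weights = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            if mask.any():
                weights[mask] = ((groups == g).mean() * (labels == y).mean()
                                 / mask.mean())
    return weights

# Toy data: group 0 rarely receives the favourable label 1.
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
labels = np.array([0, 0, 0, 1, 1, 1, 1, 0])
print(reweighing_weights(groups, labels))
# The under-represented combination (group 0, label 1) is upweighted (2.5);
# the over-represented combination (group 1, label 1) is downweighted (~0.83).
```

A classifier trained with these weights no longer simply reproduces the skew in the raw data – which is the kind of correction such toolkits automate, and which, as the rest of this essay argues, is only the beginning of inclusion.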

“Diversity” and “inclusion” are highly context sensitive

After all, “gender” and “race”, not to mention other social categories, can mean very different things around the world. In the US, for example, “Asian” evokes someone of East Asian descent, but the term refers to South Asians in the UK, and is not considered a “race” at all in Singapore, where all the major recognised “races” are “Asian”. As such, any AI system trained in one locale to use “race” as a feature – or more optimistically, to prevent discrimination on its basis – will not generalise to other locales.
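As a deliberately simplified illustration, consider a feature encoder hard-coded to Singapore’s CMIO census categories (the categories are real; the encoder itself is hypothetical). Labels drawn from a US- or UK-style schema simply have no place in it.

```python
# Hypothetical encoder hard-coded to Singapore's CMIO census scheme.
SG_RACE_CATEGORIES = ["Chinese", "Malay", "Indian", "Other"]

def encode_race_sg(label: str) -> list[int]:
    """One-hot encode a race label under the Singapore CMIO scheme."""
    if label not in SG_RACE_CATEGORIES:
        raise ValueError(f"'{label}' is not a category in this locale's schema")
    return [int(label == c) for c in SG_RACE_CATEGORIES]

print(encode_race_sg("Malay"))       # [0, 1, 0, 0]

try:
    encode_race_sg("Asian")          # a census category in the US and UK
except ValueError as err:
    print(err)                       # ...but not a recognised category here
```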

Similar observations can be made about gender. Not only do gender norms and the status of women differ greatly across the region, leading to different biases in the data, but Southeast Asia also has a long history of gender pluralism, with numerous cultures recognising three or more genders. Any AI system designed to recognise only two genders, as has historically been assumed in the West and its colonies, will thus be entirely incapable of representing such diversity.

In improving representation, beware essentialism

To address these difficulties, it is tempting to simply patch AI systems to accommodate the categories and practices prevalent in the locale where they are deployed. This is certainly better than cultural indifference – blithely imposing the assumptions taken for granted in one society upon another. But it also misses the deeper problem with any attempt to better represent the world through categorisation: it tends to essentialise those categories – to assume they have well-defined, immutable essences – to the detriment of anyone who does not fall neatly into them, and to the restriction of even those who do.

Consider the system of racial categories that Singapore has inherited from the British: Chinese, Malay, Indian, Other. When the British used it, it was rooted in pseudo-scientific beliefs about the superiority of some races over others, and often used to pursue policies of segregation. Singapore’s continued use of these categories, by contrast, is well-intended: for example, ensuring ethnic integration via racial quotas, and preserving linguistic heritage through the mother tongue policy. But this has not been without its problems. Critics point out that a race-based mother tongue policy has tended to exclude racial minorities from high-performing schools that are gazetted to preserve Chinese language and culture. And it was only in 2010 that biracial children were able to have their multiple ethnic heritages recognised in government systems. If designers of AI – or even of AI de-biasing systems – are not careful, they may unintentionally cause similar effects through rigid categorisation, reducing cultural and personal autonomy rather than enabling it.

Indeed, these sorts of problems have already come to light in AI systems that recognise gender. In 2018, the Uber app automatically suspended the accounts of transgender drivers because its facial ID security feature was unable to accurately identify the faces of drivers undergoing gender transition. A subsequent review of the literature on Automatic Gender Recognition (AGR) found that more than 95% of papers mistakenly assume that gender is a binary variable, an immutable variable, or both. These simplistic assumptions appear even in papers that critique gender bias in AGR technology.

Racial and gender essentialism are but two examples of how categorisation, however well-intended, can go wrong. On top of these, there is the risk of cultural essentialism, a limitation of any rigid, locale-based workaround to the problems highlighted above. If AI engineers localise systems simply by following the dominant practices of supposedly well-defined cultures, they ignore the fact that cultures inevitably overlap and evolve. Singapore is a good example – any AI speech recognition system designed for only English, Malay, or Mandarin would be incapable of parsing the potpourri of those languages (and more) used in a food centre here.

No inclusive AI without an inclusive society

The issues raised above are not just technical issues, but also social and political ones – “political” in the sense of who has the power to develop, direct, or deploy AI. After all, it is only due to the present geopolitical order that contemporary AI services are overwhelmingly US-centric, while also becoming increasingly Sinocentric. A globally inclusive AI ecosystem requires instead that countries coordinate and collaborate, enabling shared and equitable growth of AI capabilities across national borders. It also requires multinational tech corporations to make concerted efforts to diversify and localise their AI research and development teams. 

This process of diversification needs to go beyond ensuring that different nationalities are represented. It is crucial that under-represented populations within each country have a place at the table – and the workstation – as well. Otherwise, all that this “global diversity” will amount to is a roomful of professional men from different countries, each agreeing that everyone’s “national interest” must be accounted for, while scarcely knowing the interests of marginalised people in the countries they purport to represent. To build inclusive AI, we need to recognise that expertise in human concerns is highly distributed by default, and thus that diversity is essential at every level of AI development, from data annotation to local offices to international headquarters. As the AI Now Institute puts it [96], we must always ask: “Which humans are in the loop?”

Even demographic diversity is not enough. AI organisations also need to foster a culture of diversity – one that encourages critical contributions and insights from each person’s unique life experiences and expertise. If not, an organisation may have social diversity and yet lack intellectual diversity. Dominant cultural norms may inhibit minorities, such as women in AI, from raising their concerns, or else incentivise staff to focus on uses of AI that are more “globally applicable”, which likely implies Western use cases. This is especially likely in engineering and computer science, where the technical and the social are often seen as separate domains. Organisations need to deconstruct this artificial divide, building discussions about social implications into everyday work and never letting “I’m just an engineer” be an excuse. 

Towards versatile, non-essentialist AI

With a social environment that fosters inclusion, many technical avenues for inclusion become viable as well. Today’s AI systems are often limited to the categories and labels they are initially provided with. There is enormous potential to build AI systems that effectively learn new categories from the data, while also reorganising their existing categories and representations to better suit the tasks at hand – AI that is not essentialist, but ontologically versatile. And if we develop AI to be increasingly personalised, but also privacy-preserving, then the reliance on broad and reductive categories will diminish, enabling AI to treat and respect us as individuals, who may indeed be situated in larger social groups and relations but should never be reduced to them.
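As one minimal sketch of what “ontologically versatile” could mean in practice – assuming, purely for illustration, that inputs arrive as numeric feature vectors – consider an online learner that assigns each example to its nearest learned category, but opens a new category whenever nothing fits well enough, instead of forcing every input into a fixed label set:

```python
import numpy as np

class OpenCategoryLearner:
    """Toy incremental clusterer: categories are learned, not fixed up front."""

    def __init__(self, novelty_threshold: float = 1.0):
        self.novelty_threshold = novelty_threshold
        self.prototypes = []   # one mean vector per discovered category
        self.counts = []       # how many examples each category has absorbed

    def observe(self, x: np.ndarray) -> int:
        """Assign x to the nearest category, or open a new one if none fits."""
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            k = int(np.argmin(dists))
            if dists[k] < self.novelty_threshold:
                # Update the matched prototype toward the new example.
                self.counts[k] += 1
                self.prototypes[k] += (x - self.prototypes[k]) / self.counts[k]
                return k
        # Nothing fits: the category scheme grows instead of misclassifying.
        self.prototypes.append(x.astype(float))
        self.counts.append(1)
        return len(self.prototypes) - 1

learner = OpenCategoryLearner(novelty_threshold=1.0)
for point in [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([5.0, 5.0])]:
    print(learner.observe(point))   # prints 0, 0, 1 – a new category emerges
```

Real systems would need far more care than this toy – about when categories should merge, split, or be retired, and about who gets to contest them – but the principle is the same: the category scheme is revisable, not fixed in advance.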

If all this comes to pass, then the future generation of AI systems need not end up like those portrayed in so many dystopian series and novels: laissez-faire systems designed with little thought for social implications, unintentionally optimising society towards a less diverse, more polarised world; or authoritarian, impersonal algorithms, reshaping society to conform to the rigid and “orderly” social vision of their creators. Rather, AI and its development will be embedded in a matrix of empowered actors, all of whom have a voice in shaping AI’s goals and assumptions, and in deciding how they want to be recognised, represented, and treated as individuals. 

In this future, AI will be truly inclusive: partaking in neither indifference, nor essentialism, but embracing the world in its ever-shifting diversity. 

Tan Zhi Xuan – AI Researcher, MIT / Board Member, Effective Altruism Singapore
