Reclaiming AI as a Tool for Equity

Artificial intelligence (AI) has become central to creative industries, media, and digital representation. But it seems that it didn’t advance inclusivity. AI often replicates societal biases embedded within the data it consumes. These biases are not incidental; they stem from the broader influence of racial capitalism, where commercial motives prioritise profit over authentic representation. I argue that AI must be fundamentally redesigned to prioritise equity and inclusivity over profit-driven motives.

DALL·E is an artificial intelligence tool developed by OpenAI that can generate images on request. First I entered the command “Directly generate the most handsome man in the world” into DALL·E. The AI displayed a very monotonous aesthetic. It almost always generate white men with short curly hair or slicked-back hair. The results align with a stereotypical Western standard of beauty. Then, I entered the command “Directly generate a group of rich people”. The results were photos of white men and women in suits or formal wear attending banquets. This is certainly lacking in ethnic diversity. It seems that in the AI stereotype, the white community is wealthy and has a beautiful appearance. This may be a continuation of the white supremacy theory. Artificial intelligence is based on algorithms. Safia Noble (2018) points out through an analysis of search algorithms that algorithms are not neutral; they reflect the biases of the societies that create them. Noble argues that search engines, often perceived as impartial tools, actually reinforce existing social inequalities by amplifying dominant cultural norms and ideologies (Noble, 2018, pp. 35-36). This phenomenon can also extend to image-generation tools like DALL·E.

“Directly generate the most handsome man in the world” Generate results
“Directly generate a group of rich people” Generate results

Ruha Benjamin’s concept of the “New Jim Code” extends this critique. It shows how technologies falsely marketed as objective actually embed and perpetuate social biases. Benjamin (2019) points out that if AI is left unchecked, will established a “a digital caste system.” It will be associating racialised names with negative stereotypes or limiting job opportunities for marginalised groups. Benjamin’s insights underscore that so-called “neutral” technology is a myth; ethical AI requires intentional, bias-aware design (Benjamin, 2019, pp. 6-7). From my point of view, for AI to break free from these biases, developers must create mechanisms that detect and actively correct inequities rather than blindly following historical data patterns.

The commodification of diversity further complicates the issue, as noted by Saha and van Lente (2022). The authors argue that media industries often treat diversity as a marketable asset without addressing deeper issues of representation. In AI-generated content, diverse characters are often reduced to stereotypical roles. It reflects a narrow portrayal that is easy to market. But fails to challenge systemic inequalities (Saha and van Lente, 2022, p. 218). A shift in AI development toward authenticity over marketability is essential to creating systems that advance fairer portrayals of marginalised communities.

A structural challenge to this reimagining of AI lies in its development within capitalist frameworks. Virdee (2019) argues that racism has historically been a tool for sustaining capitalism by creating hierarchical divisions within society (Virdee, 2019, p. 22). AI systems developed within these frameworks will inevitably replicate these hierarchies unless actively designed to do otherwise.

To position AI as a proactive force for equity. I argue that developers must adopt intentional design principles that go beyond mere compliance with market standards. Ethical AI development should involve a community-centred approach. It needs to bring voices from marginalised groups into the design process and establish oversight structures that scrutinise AI outputs for potential biases. Through collaborative data curation and continuous refinement. The developers can design algorithms that elevate marginalised narratives. Make it prioritising social justice over commercial imperatives. Developers should not treat bias as an inherent flaw but rather as a challenge. It should be mitigated with the right tools and frameworks. This approach could reimagine AI as an agent of social equity. It is challenging the historical and economic forces that have traditionally marginalised certain groups.

Extended link: Most Popular AI Ethics Principles

In conclusion, AI’s current replication of societal biases is not inevitable. We need to rethink AI’s purpose, data sources, and development practices. And create tools that promote a more equitable digital landscape. Such a shift is not only possible but necessary if AI is to become a force for positive, inclusive social change.

References

Benjamin, R. (2019) Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.


Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.


Saha, A. and van Lente, S. (2022) ‘Diversity, Media and Racial Capitalism: A Case Study on Publishing’, Ethnic and Racial Studies, 45(16), pp. 216–236.


Virdee, S. (2019) ‘Racialized Capitalism: An Account of its Contested Origins and Consolidation’, The Sociological Review, 67(1), pp. 3–27.