
Like many industries that attract billions of dollars in investment, AI is a very privileged, white, male space. And since so many AI systems are built by that narrow subset, the danger that AI output becomes homogenized is high.
The danger of homogenization
Amy Webb’s talk at SXSW in March 2024 highlights this issue. When she asked AI image generators to create an image of a CEO, they consistently produced a middle-aged white man. When she asked for an image of the CEO of a tampon company, she got a middle-aged white man… surrounded by tampons.
While most of us think that large language models like ChatGPT just spit out automated responses, the truth is that there are many humans behind them. Companies like Data Annotation hire gig workers to help the models write more like humans. It's good that the teams shaping AI content are diverse (though they face poor working conditions, something I’ll cover in a future piece), but the internal checks and balances at these companies are unclear. What are they asking their employees to do? It’s still too early to tell.
University of Texas professor S. Craig Watkins, who studies inequality in tech, highlighted real-world biases in his 2021 TED talk. He cited the wrongful arrest of Robert Williams, a Black man misidentified by facial recognition technology, and an Amazon AI recruiting tool that, a few years earlier, was found to discriminate against women.
The problem in both of these cases isn’t only that the AI had a bias issue; it’s the biases of its creators and users. Amazon's AI recruiting tool mirrored existing hiring biases. In Williams’ case, the police arrested him because AI told them to, even though he looked nothing like the suspect they were after.
When OpenAI introduced new ChatGPT voice features in May 2024, it wasn't surprising that the AI's voice was female and somewhat flirty, echoing gender biases in tech.
As Watkins says, these biases are often unintentional, stemming from unconscious bias. Unlearning these biases is a challenging but necessary task.
The solution: diverse voices in AI development
As Caroline Criado Perez writes in her book, Invisible Women, "When your big data is corrupted by big silences, the truths you get are half-truths, at best... When we are designing a world that is meant to work for everyone, we need women in the room. If the people making decisions that affect us all are white, able-bodied men, that constitutes a data gap."
It's not just about hiring diverse programmers. We need ethicists, business strategists, behavioural scientists, psychologists, designers, artists, and writers involved in the decisions that shape how we develop AI and, with it, our future.
How to make AI more inclusive
1. Diverse data sets: Ensure AI models learn from diverse and inclusive data, reflecting different genders, races, cultures, and viewpoints. This helps AI produce more balanced and representative outputs.
2. Inclusive development teams: Diverse teams make better decisions than homogeneous ones 87% of the time, and Korn Ferry found that diverse-by-design teams are significantly more effective decision-makers.
3. Bias detection and correction: Regular bias audits can reduce algorithmic bias by up to 40%, making ongoing monitoring and corrective measures essential to keeping AI systems fair (see the sketch after this list for what a simple audit can look like).
4. Transparent algorithms: Making algorithms understandable and open to external review builds trust and allows potential biases to be identified and corrected.
5. Ethical guidelines: Establish and adhere to ethical guidelines, backed by industry standards or regulatory frameworks, to ensure representation and fairness throughout AI development.
6. User feedback: Google found that incorporating user feedback improved AI accuracy by 20%, highlighting the value of user insights in refining AI technologies to better meet diverse needs.
7. AI literacy: Educate the public and developers about AI and its potential biases to create a more informed, critical user base. Higher AI literacy will also encourage people from all backgrounds to enter the field.
8. Cultural sensitivity: Through careful training and testing with diverse user groups, ensure that the people programming and using AI models are sensitive to cultural differences and nuances, so the models don't stereotype or ignore minority perspectives.
Weekly disruptions
Hollywood stars’ estates agree to the use of their voices with AI (CNN) AI company ElevenLabs is set to release digitally recreated voices of deceased stars Judy Garland, James Dean, and Burt Reynolds for its new Reader app, enabling users to hear the celebrities narrate various texts. The initiative, which involves agreements with the estates of these actors, showcases AI's potential in Hollywood but also sparks debates about copyright and the ethical use of synthetic voices.
For older people who are lonely, is the solution a robot friend? (New York Times) New York is tackling loneliness among older adults with ElliQ, an AI-powered robotic companion designed to engage in meaningful conversations, share news, play games, and remind users about medication.
Mind-reading AI turns thoughts into pictures with unprecedented accuracy (Interesting Engineering) Researchers at Radboud University in the Netherlands developed an AI system that can reconstruct images from brain activity with unprecedented accuracy, producing near-perfect reconstructions of original images from both human and monkey brain signals. This breakthrough holds potential for new treatments for vision loss and revolutionary communication methods for people with disabilities.