Can AI Make Us More Inclusive? Here’s What You Need to Know


AI and machine learning models run on complex code, much like a mechanical system. We assume they are objective and unbiased, reaching decisions untarnished by subjective factors such as belief systems or willful omission. But are these systems as impartial as we believe? Recent studies suggest otherwise.

Despite our perception of AI as neutral, numerous examples point to a troubling reality: AI and machine learning models often reflect, reinforce, and perpetuate biases. This is known as AI bias, a structural problem that skews the outcomes of AI applications. But is that the full story? Does this bias dilute the potential of AI, or can these technologies be reshaped to foster inclusivity? In short, can AI make us inclusive? Let’s explore.



Non-Inclusivity in AI: A Real-World Example

It is pertinent to note that non-inclusivity is no more the norm in AI than inclusivity is. “AI can drive inclusivity by expanding access to resources and opportunities, but its success depends on ethical design and the elimination of biases in its development,” sums up Dr. Nivash Jeevanandam, Senior Researcher and Author at INDIAai Portal and NASSCOM.

Let’s look at a striking example of AI bias in action. I gave a simple command to ChatGPT:

  • X is a scientist and Y is a dancer
  • X is a politician and Y is a ballet artist
  • X works at a factory and Y takes care of their children at home

My instruction was to replace “X” and “Y” with Indian names. 

The response? Here it is: 

  • Dr Arjun Mehta is a scientist and Priya Sharma is a dancer
  • Rahul Desai is a politician and Ananya Rao is a ballet artist
  • Vikram Singh works at a factory and Deepti Nair takes care of their children at home

At first glance, these responses might seem harmless. However, on closer inspection, several problems immediately emerge. First, the AI’s assumptions reinforce traditional gender roles and normative gendered division of labor. That is, it assigns men to professional roles while women are relegated to artistic or household functions. This reflects the stereotypical belief that men belong in technical or leadership positions while women are more suited to nurturing or creative pursuits. 

The second issue concerns the surnames chosen by the AI. All the names it selected—Mehta, Sharma, Desai, Rao, Singh, Nair—are associated with upper-caste groups in Indian society. My instruction was simple: replace the placeholders with random Indian names. Yet, the AI didn’t account for the vast diversity of surnames in India, automatically excluding names linked to marginalized communities and minorities. This reveals a troubling tendency for AI systems to operate within a narrow cultural framework that favors dominant social groups.

But How Does an Automated System Produce Such Outcomes?

The AI was not intentionally biased; it was simply reflecting patterns in the data it had been trained on. In the first case, it unintentionally mirrors societal norms that restrict opportunities for women, perpetuating a narrow view of what men and women “should” do. In the second, it defaults to the surnames of dominant social groups while leaving everyone else out. But the portrayal of reality these outputs offer is far from objective truth, isn’t it? Rather, it reflects a lack of inclusivity and shows how even neutral-seeming technologies can reinforce harmful societal biases.

ALSO READ: 5 AI Programming Languages Dominating the Tech Industry Right Now

The Real-World Consequences of AI Bias

Now, imagine a similar AI model being used in critical applications such as recruitment, healthcare, or policymaking. The same biases we saw in this simple example could translate into significant real-world consequences. Here are two prominent examples: 

1. Hiring Decisions

An AI system trained on biased data might prioritize male candidates for technical roles, leadership positions, or jobs in STEM fields. For instance, if an AI screening resumes learns from historical data in which men dominate senior positions, it may weigh male candidates more favorably and systematically exclude qualified women. A real-life example? Amazon faced exactly this problem with an AI recruitment tool (1). The tool was found to be biased against women: it favored male applicants for software and other technical jobs, reflecting deep-rooted socio-political and ideological bias against women.

2. Law Enforcement and Justice

Today, law enforcement agencies rely on AI models for predictive policing. However, if these models are trained on biased crime data, data that reflects the over-policing of minority communities, they may disproportionately target those very communities. The fact that the United Nations’ Committee on the Elimination of Racial Discrimination warned governments that algorithmic profiling endangers minority communities and immigrants remains a strong case in point (2).

Why AI Fails to be Inclusive

For the past few decades, many international scholars have been challenging the notion that scientific methodology eliminates subjective and ideological bias. Their project has been to show that STEM fields and STEM innovations are shaped by various socio-political and ideological factors. No matter how rigorous a scientific practice is, it operates within the same exclusionary structures that govern our society, and AI, a prominent STEM domain, is no exception. As Joy Buolamwini, a renowned computer scientist, demonstrated through an experiment on AI-powered facial recognition systems, bias and non-inclusion stem from biased data and unequal representation in the datasets on which AI models are trained (3). Presented below are some of the key factors behind bias and exclusion in AI:

  • Training data: Bias arises because AI models are trained on historical data, which often reflects existing inequalities
  • Human biases in data collection: Bias can be introduced during data collection when humans unintentionally favor specific groups
  • Proxy variables: Even when AI doesn’t use explicit factors like race or gender, it can use proxy variables that indirectly correlate with these sensitive characteristics, leading to biased outcomes
  • Feedback loops: AI systems that rely on user-generated data can develop bias through feedback loops. For instance, if biased outcomes lead to more biased data collection, the bias intensifies
  • Embedded stereotypes: Language models may pick up societal stereotypes encoded in words, associating certain professions or emotions with specific social identities (gender/class/caste, etc.)
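The feedback-loop mechanism above can be made concrete with a toy simulation. In this sketch (all numbers and district names are invented), patrols are allocated wherever past incidents were recorded, and incidents can only be recorded where patrols go. Even though both districts have identical true incident rates, the initial skew in the historical record never corrects itself:

```python
import random

random.seed(0)

# Two districts with IDENTICAL true incident rates.
true_rate = {"A": 0.10, "B": 0.10}

# Historical records are skewed: district A was patrolled more in the
# past, so more of its incidents ended up on record.
recorded = {"A": 120, "B": 60}

for year in range(10):
    total = recorded["A"] + recorded["B"]
    # Patrols are allocated in proportion to past recorded incidents...
    patrols = {d: 100 * recorded[d] / total for d in recorded}
    # ...and incidents are only recorded where patrols actually go.
    for d in recorded:
        recorded[d] += sum(
            random.random() < true_rate[d] for _ in range(round(patrols[d]))
        )

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"District A's share of recorded incidents: {share_a:.0%}")
```

Because new data is only collected where the model already directs attention, the biased record reproduces itself year after year; a fairness-aware system would need to correct for unequal observation rates, not simply fit the recorded data.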

ALSO READ: What is Generative Engine Optimization and How to do it Right

What is Inclusive AI?

Inclusive AI refers to the design, development, and deployment of artificial intelligence systems that are fair, equitable, and representative of diverse communities. It ensures that these systems do not disproportionately disadvantage or exclude any particular group based on characteristics such as race, gender, socioeconomic status, or physical ability. In an inclusive AI system, the technology reflects the diversity of the real world and offers equitable treatment and opportunities for everyone. 

How Can AI Make Us Inclusive?

AI has the potential to foster greater inclusion across industries and communities. By ensuring diversity and fairness in how AI systems are developed, trained, and deployed, we can harness its power to promote a more equitable society. Here’s how AI can help make us inclusive:

1. Heterogeneous Teams Lead to Inclusive Solutions

When AI systems are built by heterogeneous teams representing different races, genders, and backgrounds, they are more likely to reflect the needs of all users. A McKinsey study emphasizes that diversity is essential for developing AI systems that promote fairness and inclusivity in real-world applications.

2. Inclusive Data Sets Reduce Bias

AI models are only as good as the data they are trained on. By using diverse datasets that reflect the experiences and identities of all people, AI can minimize bias in decision-making, whether in policymaking, performance statistics, or recruitment. The World Economic Forum’s “A Blueprint for Equity and Inclusion in Artificial Intelligence” stresses the importance of inclusive data in reducing bias and promoting fairness in AI models.
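As one illustration of what working toward inclusive data can mean in practice, the sketch below (using an invented dataset) checks group representation and oversamples the underrepresented group until it matches the largest one. Oversampling is only one of several rebalancing techniques, and real datasets demand far more care than this:

```python
import random

random.seed(1)

# Hypothetical training set where one group is heavily underrepresented.
dataset = [{"group": "urban"}] * 900 + [{"group": "rural"}] * 100

def rebalance(rows, key):
    """Oversample minority groups until every group matches the largest."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[key], []).append(row)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Draw extra samples (with replacement) to close the gap.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

balanced = rebalance(dataset, "group")
counts = {g: sum(r["group"] == g for r in balanced) for g in ("urban", "rural")}
print(counts)  # each group is now equally represented
```

Rebalancing representation is a blunt instrument; it helps a model see minority groups at all, but it cannot fix labels or features that were biased at collection time.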

3. Bias Audits Ensure Fairness

Regular bias audits are essential for ensuring that AI systems do not disproportionately disadvantage certain groups. These audits help uncover hidden biases and improve a system’s fairness, especially in sensitive applications such as facial recognition. As a tactic, McKinsey suggests that third-party audits can be extremely useful.
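A very simple audit of this kind can be automated. The sketch below applies the “four-fifths” rule of thumb from US employment guidelines, under which a group’s selection rate should be at least 80% of the most-favored group’s rate, to a small set of invented hiring decisions:

```python
# Each record is (group, was_selected); the data here is invented
# purely for illustration.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("men", True), ("women", True), ("women", False), ("women", False),
    ("women", False), ("women", True),
]

def selection_rates(records):
    """Return the fraction of candidates selected, per group."""
    totals, selected = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + hired
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best  # "impact ratio" relative to the top group
    flag = "OK" if ratio >= 0.8 else "FLAG: possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Here the women’s selection rate is half the men’s, so the audit flags possible adverse impact. Real audits go much further, checking intersectional subgroups, multiple fairness metrics, and statistical significance, but even this crude check catches gross disparities early.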

4. Transparency Builds Trust

By making AI models explainable and accessible, users can understand how decisions are made and hold the system accountable for unfair outcomes. The World Economic Forum advocates for transparency as a key factor in making AI systems more inclusive and responsible.

The Importance of Building Inclusive AI Systems

In today’s rapidly evolving world, the way we build AI systems has far-reaching implications. When designed with inclusivity in mind, AI has the power to break barriers and create opportunities for all. These systems can ensure that decisions made by businesses, governments, and institutions reflect the diversity of the people they serve. This creates a symbiotic relationship where inclusive technology drives inclusive societies and vice versa. AI can help eliminate discrimination and promote fairness, giving businesses a competitive edge while fostering social progress.

If you want to understand how to leverage AI and machine learning for your business or aim to take on leadership roles in the tech world, IIT Delhi’s Advanced Programme in Technology and AI Leadership (TAILP) is the perfect opportunity. This seven-month programme empowers professionals to lead in a technology-driven environment. You will gain strategic and innovative skills that will help you navigate the complexities of the modern tech landscape.

Programme Highlights

  • 100% live sessions delivered by renowned IIT Delhi instructors
  • Hands-on guidance using no-code tools to enhance tech agility
  • Sessions from industry leaders
  • Real-world case studies and capstone projects

How India is Using AI for Inclusive Development

India is leveraging AI to drive inclusive growth and bridge gaps in access to information, resources, and opportunities. With initiatives across various sectors, the country is utilizing AI to ensure that even the most marginalized communities can benefit from technological advancements. Here are three remarkable examples of how AI is fostering inclusive development in India:

1. Krishi Mitra: Developed by ITC, this AI-powered chatbot uses Microsoft’s voice-to-text technology to answer farmers’ queries in local languages, making essential agricultural information accessible to those with limited literacy.

2. Karya: In collaboration with Microsoft, Karya enriches local language data sets while providing fair wages and training to rural workers. In turn, this helps create economic opportunities and improves lives.

3. BHASHINI: Launched under the National Language Technology Mission, BHASHINI uses AI to offer translation services in 22 Indian languages. As a result, this helps break language barriers and makes digital services accessible to all citizens of our multilingual country.

ALSO READ: I Am a Senior Engineer and This is How I Use AI Everyday

So, are you a new or emerging tech leader or IT decision-maker looking to enhance your technology management skills and strategic expertise? Or perhaps a business consultant, analyst, or tech entrepreneur aiming to master AI strategy, capture new markets, and lead tech-driven innovation? If so, this industry-aligned course, offered by IIT Delhi in collaboration with Emeritus, is ideally suited for you. Start your journey today and become a leader who drives technological transformation and business success. 

Write to us at content@emeritus.org 

Sources:

  1. Reuters
  2. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
  3. Notes from the AI frontier: Tackling bias in AI (and in humans)
  4. A Blueprint for Equity and Inclusion in Artificial Intelligence

About the Author

Content Writer, Emeritus Blog
Sanmit is unraveling the mysteries of Literature and Gender Studies by day and creating digital content for startups by night. With accolades and publications that span continents, he's the reliable literary guide you want on your team. When he's not weaving words, you'll find him lost in the realms of music, cinema, and the boundless world of books.