
AI Bias & Ethics: A Syrian Girl’s Perspective

Writer: Lubna Albadawi

Introduction: The Invisible Walls of AI Bias

Growing up in Syria, I believed technology was fair. A machine doesn’t have feelings, right? It doesn’t judge like people do. But after moving to the USA, I learned that even AI—a thing made from code—can be unfair. AI bias is real, and it can hurt people who look, speak, or live differently. As someone navigating two cultures, I see the dangers of AI bias from both sides, and I believe we must talk about it before it becomes even worse.


What is AI Bias and Why Should We Care?

AI bias happens when artificial intelligence makes unfair decisions because of the data it was trained on. AI doesn’t think for itself—it learns from humans. If it learns from biased data, it will also make biased choices. This can lead to serious problems, like facial recognition systems that don’t recognize people with darker skin or hiring software that rejects resumes with “foreign” names. These errors are not just technical mistakes; they affect real lives and create invisible walls between people and opportunities.


AI Bias in Everyday Life

One day, I tried using a facial recognition system to unlock my phone, but it didn’t recognize me when I was wearing a hijab. I had to remove it for the AI to see me. It made me wonder: if AI doesn’t “see” me, does that mean I don’t belong in its world? Later, I read about how facial recognition systems have higher error rates for Black and Middle Eastern people. A 2018 study from the MIT Media Lab found that some commercial facial analysis systems had error rates of over 30% for darker-skinned women, compared to less than 1% for lighter-skinned men (Buolamwini & Gebru, 2018). This is not just bad technology; it is a reflection of whose data AI was trained on and whose faces matter in the digital world.
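The disparity described above can be made concrete with a short audit. The sketch below is purely illustrative and uses made-up numbers, not data from the study or any real system: it assumes we have, for each demographic group, a list of recognition attempts marked correct or incorrect, and it simply compares per-group error rates.

```python
# Minimal sketch of a per-group error-rate audit for a face matcher.
# All outcomes below are hypothetical, for illustration only.

def error_rate(results):
    """Fraction of attempts the system got wrong (False = wrong)."""
    return sum(1 for ok in results if not ok) / len(results)

# Hypothetical match outcomes per group (True = recognized correctly).
outcomes = {
    "lighter-skinned men": [True] * 99 + [False] * 1,
    "darker-skinned women": [True] * 65 + [False] * 35,
}

rates = {group: error_rate(r) for group, r in outcomes.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.1%} error rate")

# A large gap between groups is a red flag that the training data
# under-represented some faces.
gap = max(rates.values()) - min(rates.values())
print(f"error-rate gap: {gap:.1%}")
```

A real audit would run the deployed model on a balanced, labeled test set; the point of the sketch is that the check itself is simple arithmetic, so there is little excuse for shipping without it.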


Another example is AI in hiring. Some companies use AI to scan resumes, but if the system is trained mostly on past resumes from men, it might favor male applicants over women. In 2018, Amazon had to shut down its hiring AI because it was biased against women. The system learned that most past hires were men and assumed men were better candidates. If AI is replacing humans in decision-making, we must make sure it is fair.


The Ethical Responsibility of AI Developers

AI developers hold a huge responsibility. When they build AI, they shape the future. If they train AI only on Western, male-dominated, or English-speaking data, they create technology that only works for certain people. This is unfair. We need AI that represents everyone—not just those who fit the “default” model.

Diversity in AI teams is one way to fix this. If people from different backgrounds work on AI, they can notice biases that others might miss. AI also needs better data—data that includes people of all races, genders, and cultures. Companies like Google and Microsoft are now working on making their AI more inclusive, but there is still a long way to go.


Practical Solutions: How Can We Reduce AI Bias?

  • Diverse Data Sets – AI should be trained on data that represents all types of people, not just one group.

  • Fair Testing – Before releasing AI, companies must test it on different races, genders, and languages to check for bias.

  • Transparency – AI should not be a “black box.” We need to know how it makes decisions so we can hold companies accountable.

  • Human Oversight – AI should not replace humans completely. People should still be involved in important decisions to prevent unfair outcomes.

  • More Ethical Guidelines – Governments and organizations must create stronger laws to prevent AI from harming people unfairly.
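The “Fair Testing” step above can be sketched as a simple selection-rate audit, in the spirit of the “four-fifths rule” used in US employment law. Everything in this example is hypothetical: the groups, the numbers, and the screener itself; a real audit would use the system’s actual decisions on real applicants.

```python
# Hypothetical audit of an AI resume screener: compare how often each
# group is advanced to interview, and flag possible disparate impact.

def selection_rate(decisions):
    """Fraction of applicants the system advanced (True = advanced)."""
    return sum(decisions) / len(decisions)

# Illustrative screening outcomes per group (True = advanced).
screened = {
    "men": [True] * 60 + [False] * 40,
    "women": [True] * 30 + [False] * 70,
}

rates = {group: selection_rate(d) for group, d in screened.items()}
best = max(rates.values())
for group, rate in rates.items():
    # Four-fifths rule of thumb: a group selected at under 80% of the
    # top group's rate suggests disparate impact worth investigating.
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selected {rate:.0%} (ratio {ratio:.2f}) -> {flag}")
```

A flagged ratio does not prove discrimination on its own, but it tells a company exactly where to look before the system makes another hiring decision.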


A Future Where AI Sees Everyone

AI is powerful, but it should not decide who belongs and who doesn’t. As a Syrian woman in the USA, I see how technology can both help and harm. AI should not be another tool of discrimination—it should be a force for fairness. If we want AI to serve all of humanity, we must build it with responsibility, transparency, and inclusivity. AI should see me, you, and everyone, equally.



The Writer's Profile


Lubna Albadawi

Student of Journalism & Communication

University of Nebraska Omaha, USA



About: A Syrian master’s student living in the USA, passionate about ethical AI and technology’s impact on society. She believes that AI should represent all voices, not just the loudest ones.
