As AI evolves rapidly, the risks for women grow with it 

A woman viewed from behind, facing the lights of AI processing units. Image via Canva

AI is reshaping our world — but it’s also quietly replicating its deepest inequalities. For women, the consequences can be overwhelming, exploitative, or both at once.

Artificial Intelligence (AI) is everywhere. It has become part of our daily lives; its rapid growth is hard to miss even as we scroll through our social media feeds. Innovations keep popping up, one after another, at incredible speed. As the world becomes increasingly shaped by AI, the question is no longer whether it will change our lives, but how it will continue to do so. This evolution is a double-edged sword: while it promises unparalleled opportunities for growth and development, it also perpetuates biases and amplifies existing inequalities. For women, the stakes are high, and there is a much darker side to AI’s dazzling ascent.

When algorithms reflect our biases

Artificial intelligence learns from data. But data comes from the real world — a world that has long been prejudiced against women. Imagine an AI hiring tool trained on decades of résumés from a male-dominated industry. Without human oversight, it will “learn” that men make better candidates simply because they were hired more often in the past. This isn’t speculative — it has already happened. In 2018, Amazon scrapped a recruitment algorithm after discovering it systematically downgraded applications that included the word “women’s,” such as “women’s chess club.”

Facial recognition is another cautionary tale. Studies have shown that these systems, often trained on lighter-skinned, male-dominated datasets, have difficulty identifying women, especially women of colour. Joy Buolamwini, then a graduate student at MIT, experienced what she calls the “coded gaze” in 2015: she discovered that some facial analysis software couldn’t detect her dark-skinned face until she wore a white mask. That’s more than a technical glitch. In law enforcement or airport security, a misidentification can have serious consequences.

A new frontier for digital harassment

As Google’s new AI product Veo 3 shakes the world with its incredibly realistic video generation, concerns have risen about its potential use in spreading misinformation. AI has opened up disturbing new avenues for sexual harassment — ways that are uniquely invasive, difficult to trace, and emotionally devastating. Deepfakes, which use AI to create hyperrealistic fake videos and images, have been weaponised primarily against women. Women’s faces are pasted onto pornographic videos without their consent, their bodies digitally manipulated, and their identities exploited in ways that feel both intimate and violating. This is not only a gross misuse of technology — it’s a form of digital assault.

What’s worse, these deepfakes spread far faster than they can be taken down. Victims are left dealing with reputational damage, emotional trauma, and an overwhelming sense of helplessness in a system that is still catching up legally and technically.

[Embedded Instagram post shared by Bill Posters (@bill_posters_uk)]

Then there are AI chatbots, which have begun to play a disturbing role in harassment. Some are trained to mimic specific individuals, creating simulations of real women for sexual or abusive purposes. Without the target’s knowledge or consent, bad actors use these tools to generate realistic, sexually explicit conversations. What was once the realm of science fiction is now an urgent ethical crisis.

Even outside explicit abuse, AI often reinforces limiting or harmful portrayals of women. In image generation, female characters are frequently hyper-sexualised by default. Virtual assistants — often designed with female voices and submissive personalities — subtly cast women into roles of obedience and servitude. These patterns shape expectations, normalise inequality, and entrench harmful habits that greatly affect how society perceives women.

The Nigerian context

In Nigeria, digital crimes against women occur at an alarming rate, and AI threatens to make a terrible situation even worse. With deepfakes and chatbots, it becomes nearly impossible to regulate the materials, conversations, and media tools used to defame and defraud women. Currently, Nigeria lacks specific legislation addressing the creation and use of deepfakes.

Existing laws such as the Nigeria Data Protection Act, 2023 (NDPA) and the Cybercrime (Prohibition, Prevention, etc) (Amendment) Act, 2024 (CPPA) cover data privacy rights, consent, identity theft, impersonation, and cyberbullying. While they can be relevant for prosecution, they do not expressly prevent the crime or protect victims. This leaves women, especially celebrities, just one deepfake away from a potentially devastating scandal that could change their lives forever.

Read also: Beyond the screen: Nigerian women talk about taking a break from social media 

The cost of being left out

Perhaps the most insidious way AI harms women is by leaving them out entirely. In healthcare, AI models trained on male-centric data can fail to recognise how diseases manifest differently in women. A heart attack in a woman may not show the same symptoms as in a man. If the algorithm doesn’t “know” that, a physician may send her home from the ER without treatment. A study published on ScienceDirect revealed that female patients’ chronic heart disease symptoms were significantly more likely to be misdiagnosed by AI: their symptoms were attributed to other conditions (such as gastrointestinal ones) more often than men’s, even when female patients reported the same symptoms.

Furthermore, researchers found that diagnostic AI built to identify thoracic diseases performed better for men than for women. Since women have historically been more likely to be misdiagnosed, AI-generated treatment recommendations for women are likely to follow the same trend.

A future worth building

AI can serve as a powerful tool for women when developed with purpose and accountability. Apps that detect abusive language in chats, tools that help survivors document harassment, AI systems that predict and prevent domestic violence — these are just a few examples of how the same technology can protect and empower, rather than harm.

However, this transformation hinges on inclusion. Women must be at the table: designing systems, leading teams, setting policy. Inclusion isn’t a box to check — it’s the difference between safety and risk, progress and harm.

Because AI mirrors society without reflection, it amplifies a broken system. The way forward isn’t to fear Artificial Intelligence, but to shape it with care and conscience. The goal isn’t just smarter technology. It’s a smarter, fairer world — where women feel protected, empowered, and never erased by the tools they have to live with.
