Imagine waking up to find your face plastered onto an explicit image circulating online, a cruel creation of artificial intelligence. This isn’t a dystopian nightmare; it’s the stark reality for countless women globally, and increasingly in Nigeria. This is not normal behaviour, and it needs to be nipped in the bud.
Artificial intelligence (AI), heralded as a transformative force with the potential to revolutionise industries and improve lives, carries a chilling underbelly. This powerful technology is increasingly being weaponised to perpetrate harassment against women, both in Nigeria and across the globe, marking a disturbing escalation of technology-facilitated gender-based violence.
From the creation of non-consensual intimate imagery (NCII) through deepfakes to the deployment of AI-powered cyberstalking, the digital realm is becoming an increasingly hostile and unsafe space for women. This article aims to shed light on the insidious ways men are leveraging AI for harassment, explore its devastating impact, and discuss the urgent need for solutions within both the Nigerian context and the broader international landscape.
The mechanics of AI-powered harassment: How it’s done
The ease with which people can now manipulate AI to create harmful content is deeply alarming. One of the most prominent forms of AI-driven harassment is the creation and dissemination of deepfakes. These AI-generated videos and images convincingly manipulate a person’s face and body, often placing women into sexually explicit or compromising scenarios without their knowledge or consent. Globally, reports indicate that the overwhelming majority (upwards of 96%) of deepfakes are non-consensual pornography, disproportionately targeting women. Tools utilising sophisticated face recognition, mapping algorithms, and deep learning models have become readily accessible, including apps that perform seemingly benign “face swaps” or, more nefariously, “undressing” effects that exploit AI to create non-consensual imagery. Shockingly, people can now generate a convincing deepfake in a matter of minutes using just a single clear image, as studies and online tutorials demonstrate.
Two images, 1 story
The use of Ai by males to sexually harrass women. A case study of how men use Grok on twitter pic.twitter.com/yTRemy5OLT
— CHIKAMMA (@AlexVivyNnabue) June 2, 2025
In Nigeria, while the widespread creation of sophisticated deepfakes might still be in its nascent stages due to varying levels of technological access and digital literacy, the underlying principle of manipulating images for malicious purposes is not new. Image editing software has long been used to create fake nudes or distort images to humiliate women online. However, AI is amplifying this threat exponentially, making the alterations more realistic and harder to detect. The proliferation of social media platforms like WhatsApp, Facebook, and Instagram in Nigeria facilitates the rapid and often untraceable spread of such manipulated content.
Beyond deepfakes, other AI-enabled tactics contribute to the harassment of women. AI can automate and amplify abusive messaging and cyberstalking, allowing perpetrators to generate harassing messages at scale, spread defamatory rumours, and issue threats across multiple platforms. In a country like Nigeria, where online anonymity is easy to maintain, this poses a significant challenge for victims seeking recourse. Doxxing, the act of revealing someone’s personal information online, can also be facilitated by AI’s ability to aggregate and analyse data, potentially leading to real-life stalking and physical threats. While documented cases of sophisticated AI-driven voice cloning for harassment may still be rare in Nigeria, the potential for this technology to be misused to create fake audio recordings for blackmail or reputational damage is a growing concern.
It’s also crucial to consider the role of AI chatbots. While not explicitly designed for harassment, studies globally have highlighted how AI companions can engage in sexually inappropriate or even predatory behaviour due to biased training data or monetisation strategies. This raises concerns about the normalisation of harmful interactions and the potential for grooming and exploitation, even within seemingly innocuous platforms.

The devastating impacts on victims
The impact of AI-powered harassment on women is profound and far-reaching, leaving deep psychological scars and often extending into the real world. Globally, victims report experiencing intense panic, sleeplessness, severe emotional distress, fear for their safety, and profound humiliation. The feeling of having one’s image manipulated and disseminated without consent is a gross violation of privacy and bodily autonomy. The potential for reputational damage, both online and offline, can have devastating consequences for a woman’s personal and professional life.
“Grok turn her around”
“Grok remove her clothes”
“Grok do this and that”
Some men are too useless mehn, smh
— Elms ☤ 🍒 (@elmsbabyy) June 2, 2025
In Nigeria, the cultural context often exacerbates this impact. Societal stigma surrounding sexual content and the potential for “shame” to be brought upon a woman and her family can amplify the psychological trauma. The lack of robust legal frameworks and effective enforcement mechanisms can leave victims feeling helpless and without recourse. Furthermore, limited access to mental health support services in many parts of Nigeria can hinder the healing process. The rapid spread of misinformation and manipulated content through popular social media channels in Nigeria can also lead to swift and widespread reputational damage, often before a victim even becomes aware of the abuse.
Globally, and increasingly in Nigeria, the normalisation of such misogynistic content, particularly among younger audiences, is a disturbing trend. The constant exposure to manipulated images and videos can desensitise individuals to the severity of the harm and contribute to the perpetuation of harmful gender stereotypes and expectations regarding sexuality.
The legal and ethical landscape
The legal and ethical frameworks surrounding AI-powered harassment are still in their nascent stages, both globally and in Nigeria. While existing laws addressing cyberstalking, non-consensual pornography (often falling under broader “revenge porn” legislation where it exists), defamation, and identity theft might offer some avenues for recourse, they often struggle to keep pace with the rapid advancements in AI technology.
Globally, there is a growing push for specific legislation that directly addresses the creation and dissemination of deepfakes and AI-generated intimate imagery. Some jurisdictions are exploring making the very act of creating such images illegal, regardless of whether anyone shares them. The question of accountability remains a complex challenge. Who bears responsibility: the creators of the AI tools, the platforms hosting the content, or the individuals perpetrating the harassment? There is increasing pressure on online platforms to implement more robust oversight mechanisms, including improved content moderation, age verification processes, and proactive measures to detect and remove harmful AI-generated content. However, identifying perpetrators, particularly those operating anonymously online, remains a significant hurdle for law enforcement both internationally and within Nigeria.
Ethically, there is a growing recognition of the need for ethical AI design. This includes prioritising user safety and well-being over engagement metrics or revenue generation. Fostering diversity within AI development teams is also crucial to mitigating inherent biases in algorithms and ensuring a more holistic understanding of potential harms.
In Nigeria, the legal landscape concerning online harassment is still developing. While the Cybercrimes (Prohibition, Prevention, etc.) Act 2015 addresses certain aspects of online abuse, its application to AI-generated harassment needs further clarification and enforcement. There is a pressing need for greater awareness among law enforcement and the judiciary regarding the specific challenges posed by AI-driven abuse.

What can be done: Solutions and calls to action
Addressing AI-powered harassment requires a multi-pronged approach involving individuals, technology companies, policymakers, and society as a whole, both in Nigeria and globally.
For victims: In Nigeria, as elsewhere, it’s crucial to document any instances of harassment, including screenshots and links. Reporting incidents to platform providers (e.g., social media companies) is essential, although the response may vary. Seeking legal advice, even if the legal framework is still evolving, can provide guidance on available options. Accessing mental health support services, while potentially limited in some areas of Nigeria, is vital for coping with the trauma. Raising awareness within trusted networks of friends and family can also provide crucial emotional support.
Technological solutions: Globally, and with potential for adoption in Nigeria, the development of more effective deepfake detection tools is crucial. Platforms need to invest in proactive scanning measures to identify and remove harmful AI-generated content. User-configurable filtering and control options within AI tools could empower individuals to manage their exposure to potentially harmful content.
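To make “proactive scanning” concrete, here is a minimal Python sketch of one widely used approach: hash-based matching, in the spirit of systems such as PhotoDNA or PDQ, where a platform stores perceptual hashes of images that have already been reported and blocks or escalates near-duplicate uploads. The imagehash and Pillow libraries, the example hash value, the threshold, and the function names are illustrative assumptions, not any platform’s actual implementation; detecting a brand-new deepfake, as opposed to a re-upload of known abusive content, would require trained classifiers on top of this kind of pipeline.

```python
# Illustrative sketch only: hash-based scanning of new uploads against a
# database of previously reported non-consensual images. The libraries used
# (Pillow, imagehash), the example hash, and the threshold are assumptions
# for demonstration, not a specific platform's system.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of reported images.
# Only hashes are stored, never the images themselves.
KNOWN_ABUSE_HASHES = {
    imagehash.hex_to_hash("9f172786e71f1c00"),  # placeholder example hash
}

# Tolerance for crops, re-compression, and small edits (in differing bits).
HAMMING_THRESHOLD = 8

def is_known_abusive(upload_path: str) -> bool:
    """Return True if an uploaded image is perceptually close to a known
    reported image, so it can be held back and routed to human review."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= HAMMING_THRESHOLD
               for known in KNOWN_ABUSE_HASHES)

if __name__ == "__main__":
    if is_known_abusive("new_upload.jpg"):
        print("Match found: hold upload and escalate to a human moderator.")
    else:
        print("No match: continue with normal moderation checks.")
```

The design choice here is deliberate: matching against hashes of already-reported material limits re-victimisation from repeat uploads without requiring platforms to retain abusive images, while leaving the harder problem of identifying newly generated deepfakes to dedicated detection models and human moderators.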
Policy and legislative action: In Nigeria, there is a clear need to strengthen existing cybercrime legislation specifically addressing AI-generated harassment, including the creation and dissemination of deepfakes. Holding online platforms accountable for hosting and profiting from harmful content is essential. Collaboration with international bodies and the adoption of global best practices in legislation can be beneficial. Increased training for law enforcement and the judiciary on the technical aspects of AI-driven abuse is also critical.
Education and awareness: Promoting digital literacy and critical thinking skills is vital, particularly among young people in Nigeria. Raising awareness about the potential for image manipulation and the importance of verifying online content can help reduce the spread of misinformation and harmful deepfakes. Challenging harmful social norms and misogynistic narratives online and offline is crucial in creating a culture that does not tolerate the abuse of women.
Industry responsibility: Tech companies, both global players accessible in Nigeria and emerging local tech ventures, have a responsibility to prioritise safety and privacy in their AI development processes. Adhering to ethical AI design principles and investing in research to mitigate the risks of AI misuse are crucial steps. Collaborating with researchers and civil society organisations to understand and address the evolving landscape of AI-powered harassment is also essential.
A call for a safer digital future
The misuse of AI to harass women represents a dangerous new frontier in technology-facilitated gender-based violence, impacting individuals in Nigeria and across the world. As AI technology continues to advance, so too must our efforts to understand, prevent, and address its harmful applications.
Creating a safer digital future requires a concerted effort from individuals, technology companies, governments, and society as a whole. By strengthening legal frameworks in Nigeria and globally, fostering ethical AI development, promoting digital literacy, and challenging the underlying misogyny that fuels this abuse, we can work towards an online environment where women can participate and thrive without fear of AI-powered harassment.
The time to act is now, before this dark side of AI casts an even longer shadow on the lives of women in Nigeria and around the world.