AI chatbot Grok is under investigation over technology that allows users to generate sexually explicit images
British regulators have opened an investigation into Elon Musk’s AI chatbot, Grok, following reports that the technology was used to generate sexually explicit, non-consensual images. Reacting to the allegations, UK Prime Minister Keir Starmer described the misuse of artificial intelligence as “disgusting” and “unlawful,” warning that such tools must be brought under control if platforms are unable to police themselves.
At the centre of the probe is Grok, an AI chatbot developed by Musk’s company, xAI, and integrated directly into X (formerly Twitter). The tool enables users to generate text and images within the platform using real-time data drawn from posts on the site — capabilities that are now under scrutiny for how easily people can misuse them.
While the tool may appear innovative, it raises urgent concerns, particularly for women and minors, who are already disproportionately targeted online. AI-generated deepfakes can erase consent, dignity, and personal safety, transforming everyday photos into tools for harassment and exploitation, with reputational damage that can follow victims long after the content goes viral.
What the UK’s AI deepfake investigation means for global online safety

The United Kingdom’s communications regulator, Ofcom, formally launched the probe on 12 January 2026, amid mounting evidence that users were prompting Grok to digitally remove clothing from real people and share the resulting deepfake images online, conduct that potentially violates the country’s Online Safety Act and existing laws against intimate image abuse and child sexual abuse material. Starmer’s government has signalled it is prepared to use the full force of British law to hold platforms accountable and to protect victims, particularly women and children, from the distressing effects of these harmful AI-generated images.
For many women, the surge in deepfake imagery has been deeply personal, underscoring long-standing concerns about consent, privacy, and digital safety. Advocates and lawmakers have framed the controversy not simply as a technological issue but as a broader cultural challenge, and they continue working to stop anyone from using powerful new AI tools to degrade, harass, or exploit individuals, especially the women and girls who are disproportionately targeted online. Regulators in several countries have also scrutinised the platform’s safeguards, and discussions are underway about updating legal frameworks to explicitly criminalise the creation and distribution of non-consensual intimate AI imagery. The UK’s investigation, coming at a point of intense public and political pressure, may set a precedent for how democratic societies regulate emerging generative technologies and protect vulnerable communities.
Read also: AI’s new frontier of abuse: Deepfakes and the digital harassment of women
As AI deepfakes rise globally, Nigeria’s fight to protect women online intensifies

In Nigeria, the rapid spread of deepfake technology has exposed a troubling regulatory gap that leaves women particularly vulnerable to abuse. While AI tools increasingly serve creative, commercial, and political purposes, some individuals have weaponised them to produce non-consensual intimate images, impersonation scams, and harassment targeting women. Sadly, most of these perpetrators face little or no legal consequence for their actions.
Nigeria currently lacks AI-specific legislation, and although laws like the Cybercrimes Act 2015 address cybercrime and sexual abuse, enforcement is weak. As a result, victims are often left relying on outdated laws that fail to fully capture the scale or severity of AI-generated sexual content and deepfake violations.
Against this backdrop, women are being urged to take precautionary steps to protect themselves: tightening privacy settings on social media, limiting the public sharing of images and personal data, using watermarking or image-tracking tools where possible, documenting and reporting abuse promptly, and seeking support from digital rights organisations and legal aid groups. Advocates stress, however, that individual vigilance cannot replace systemic reform. Many citizens continue to urge Nigerian lawmakers to modernise digital safety laws, strengthen protections against image-based abuse, and govern emerging AI technologies in ways that uphold consent, dignity, and gender equity.
Ultimately, the Grok investigation highlights a global reckoning over how artificial intelligence is governed and who receives protection when it is misused. As regulators move to hold powerful platforms accountable, the contrast with Nigeria underscores the urgent need for stronger, gender-sensitive digital safety laws worldwide. Without clear regulation and enforcement, AI risks becoming another tool of harm rather than innovation, particularly for women and children, who already face disproportionate threats online.
Read more: As AI evolves rapidly, the risks to women grow with it