Controversy over the chatbot Grok escalated rapidly in the early weeks of 2026. The trigger was revelations about its alleged ability to generate sexualised images of women and children in response to requests from users on the social media platform X.
This prompted the UK media regulator Ofcom and, subsequently, the European Commission to launch formal investigations. These developments come at a pivotal moment for digital regulation in the UK and the EU. Governments are shifting from aspirational regulatory frameworks to a new phase of active enforcement, notably with legislation such as the UK’s Online Safety Act.
The central question here is not whether individual failures by social media companies occur, but whether voluntary safeguards – those devised by the social media companies themselves rather than enforced by a regulator – remain sufficient where the risks are foreseeable. Such safeguards can include measures like blocking certain keywords in users’ prompts to AI chatbots.
Grok is a test case because of the integration of the AI, and the content it produces, within the X social media platform. X (formerly Twitter) has had longstanding challenges around content moderation, political polarisation and harassment.
Unlike standalone AI tools, Grok operates within a high-speed social media environment. Controversial responses to user requests can be instantly amplified, stripped of context and repurposed for mass circulation.
In response to the concerns about Grok, X issued a statement saying the company would “continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content”.
The statement added that image creation and the ability to edit images would now only be available to paid subscribers globally. In addition, X said it was “working around the clock” to apply further safeguards and take down problematic and illegal content.
This last assurance – of building in further safeguards – echoes earlier platform responses to extremist content, sexual abuse material and misinformation. That framing, however, is increasingly being rejected by regulators.
Under the UK’s Online Safety Act (OSA), the EU’s AI Act and its codes of practice, and the EU’s Digital Services Act (DSA), platforms are legally required to identify, assess and mitigate foreseeable risks arising from the design and operation of their services.
These obligations extend beyond illegal content. They include harms linked to political polarisation, radicalisation, misinformation and sexualised abuse.
Step-by-step
Research on online radicalisation and persuasive technologies has long emphasised that harm often emerges cumulatively, through repeated validation, normalisation and adaptive engagement rather than through isolated exposure. It is possible that AI systems like Grok could intensify this dynamic.
In a general sense, there is potential for conversational systems to legitimise false premises, reinforce grievances and adapt their responses to users’ ideological or emotional cues.
The risk is not merely that misinformation exists, but that AI systems may materially increase its credibility, durability or reach. Regulators must therefore assess not only individual outputs from AI, but whether the AI system itself enables escalation, reinforcement or the persistence of harmful interactions over time.
Safeguards used on social media for AI-generated content can include screening user prompts, blocking certain keywords and moderating posts. Used alone, such measures may be insufficient if the wider social media platform continues to amplify false or polarising narratives indirectly.

Generative AI alters the enforcement landscape in significant ways. Unlike static feeds, conversational AI systems can engage users privately and repeatedly. This makes harm less visible, harder to evidence and more difficult to audit using tools designed for posts, shares or recommendations. It poses new challenges for regulators aiming to measure exposure, reinforcement or escalation over time.
These challenges are compounded by practical enforcement constraints, including regulators’ limited access to interaction logs.
Grok operates in an environment where AI tools can generate sexualised content and deepfakes without consent. Generally, women are disproportionately targeted with sexualised content, and the resulting harms are severe and enduring.
These harms frequently intersect with misogyny, extremist narratives and coordinated misinformation, illustrating the limits of siloed risk assessments that separate sexual abuse from radicalisation and information integrity.
Ofcom and the European Commission now have the authority not only to impose fines, but to mandate operational changes and restrict services under the OSA, DSA and AI Act.
Grok has become an early test of whether these powers will be used to address large-scale risks, rather than narrow failures to take down content.
Enforcement, however, cannot stop at national borders. Platforms such as Grok operate globally, while regulatory standards and oversight mechanisms remain fragmented. OECD guidance has already underscored the need for common approaches, particularly for AI systems with significant societal impact.
Some convergence is now beginning to emerge through industry-led safety frameworks, such as the one initiated by OpenAI, and Anthropic’s articulated risk tiers for advanced models. It is also emerging through the EU AI Act’s classification of high-risk systems and the development of voluntary codes of practice.
Grok is not merely a technical glitch, nor just another chatbot controversy. It raises a fundamental question about whether platforms can credibly self-govern where the risks are foreseeable. It also raises the question of whether governments can meaningfully enforce laws designed to protect users, democratic processes and the integrity of information in a fragmented, cross-border digital ecosystem.
The outcome will indicate whether generative AI will be subject to real accountability in practice, or whether it will repeat the cycle of harm, denial and delayed enforcement that we have seen from other social media platforms.