VENTURE HIVE
CLARITY IN A NOISY WORLD

This report by Venture Hive, an independent news organization, provides investigative journalism and in-depth analysis on major political developments shaping the United States.
Elon Musk’s platform X has come under worldwide criticism after its AI chatbot, Grok, began producing sexualized images of women and girls without their consent when prompted to “undress” them. The platform faces mounting backlash as more users exploit Grok’s deepfake capabilities. “I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” said Ashley St Clair, the estranged mother of one of Musk’s children. While Grok has issued an apology, the platform is under increasing scrutiny for failing to protect users from non-consensual content. The controversy over the misuse of Grok has raised pressing questions about platform responsibility. Meanwhile, Musk’s AI company, xAI, recently raised $20 billion in a Series E funding round, surpassing its initial target. With investors including Nvidia and Cisco backing the venture, pressure is mounting on Musk and his team to address the issue and build stronger safeguards. Reports indicate that some of these new investments are now being questioned in light of the scandal surrounding Grok.
This news matters because it shows how powerful AI tools can be misused to cause real harm to real people. The ability to digitally undress women and even create sexualized images of children without consent is deeply violating and highlights how unsafe online spaces can become when safeguards fail. The emotional impact on those targeted, combined with the lack of immediate accountability, has sparked global concern and forced regulators to step in. As AI becomes more advanced and widely funded, this case underlines the urgent need for stronger protections, clearer responsibility from tech companies, and laws that keep pace with technology—so innovation does not come at the cost of human dignity and safety.
Global regulators probe Grok over non-consensual and sexualized AI images

eSafety Australia said it was examining reports involving images of adults, while noting that images of children had not, so far, met the legal threshold for child sexual exploitation material. An eSafety spokesperson said the agency had received multiple complaints since late 2025 about Grok being used to create sexualized images without consent. “Some reports concern adult images and are being reviewed under our image-based abuse framework, while others involve potential child sexual exploitation material and are assessed through our illegal and restricted content scheme,” the spokesperson said. They added that the image-based abuse complaints were submitted only recently and remain under review.
Regarding reports assessed under the illegal and restricted content framework, the spokesperson said the material did not reach the classification standard for class 1 child sexual exploitation material. As a result, eSafety did not issue takedown notices or pursue enforcement action in relation to those complaints. The regulator defines illegal and restricted material as online content ranging from the most severe harms—such as imagery depicting child sexual abuse or terrorism—to content unsuitable for children, including simulated sexual activity, explicit nudity or extreme violence. The spokesperson said eSafety continued to be “concerned about the growing use of generative AI to sexualize or exploit individuals, particularly children”. They added that in 2025 the regulator had taken enforcement action against some of the most widely used “nudity” services involved in creating AI-generated child sexual exploitation material, resulting in those services being withdrawn from Australia.
Ofcom said it had made “urgent contact” with Elon Musk’s AI company xAI after reports that its tool Grok could be used to generate sexualized images of children and to undress women. A spokesperson for the regulator added that it was also examining allegations that Grok has been producing “undressed images” of individuals. Guardian Australia sought comment from X, which said on Monday that it takes action against illegal content on the platform, including child sexual abuse material, by removing it, permanently banning accounts and cooperating with local authorities and law enforcement where required. The European Commission, the EU’s enforcement body, said on Monday it was “seriously investigating the issue”, with regulators in France, Malaysia and India also reportedly reviewing the matter. At the same time, the UK’s Internet Watch Foundation told the BBC it had received public reports about images generated by Grok on X, but said it had not yet identified content that met the UK’s legal threshold for child sexual abuse imagery. Following global backlash over the harmful content, Musk said that “anyone using Grok to create illegal material will face the same consequences as if they upload illegal content.”
India grants X deadline extension over AI-generated explicit content concerns
The government has extended the deadline for X to submit a detailed Action Taken Report until January 7, 2026, after issuing a strong warning to the Elon Musk-owned platform over indecent and sexually explicit content allegedly being produced through the misuse of AI tools such as Grok. Sources told PTI that the extension was granted after X requested additional time from the IT Ministry. On Sunday, January 4, 2026, X’s Safety account said the platform would act against illegal content, including child sexual abuse material, by removing it, permanently banning offending accounts and cooperating with local authorities and law enforcement. Reiterating Musk’s position, the company said that “anyone using or prompting Grok to create illegal content will face the same consequences as if they upload illegal content.”
Government sources said X had sought more time and was subsequently instructed to submit its report by January 7. Earlier, on January 2, the Centre had reprimanded the platform and ordered the immediate removal of all vulgar, obscene and unlawful material, particularly content generated through Grok, warning of legal action if it failed to comply. The Ministry had also directed the US-based firm to submit an action taken report within 72 hours, effectively by January 5.
In its January 2 communication, the IT Ministry said Grok, X’s integrated AI system, was being exploited by users to create fake accounts to host, generate and circulate obscene images or videos of women in a derogatory manner. It added that the misuse was not limited to fake profiles but also involved targeting women who share their own images or videos, through prompts, image manipulation and synthetic outputs. The Ministry said this pointed to serious shortcomings in platform safeguards and enforcement, amounting to a gross misuse of AI technologies in breach of existing laws. The Ministry further said X was violating provisions of the IT Act and related rules, particularly those governing obscene, indecent, vulgar, pornographic, pedophilic, and other unlawful or harmful content.
What to watch next:
Global Response to AI Misuse – How different countries are stepping in to stop harmful AI content.
AI and Ethics – The tough questions around consent and the misuse of technology.
Deepfakes and Safety – How people are being protected from non-consensual digital images.
Global regulators are investigating Elon Musk's platform X and its AI tool Grok after users prompted it to create deepfake sexualized images without consent, including of minors and high-profile figures like Catherine, Princess of Wales. The controversy erupted as thousands of non-consensual images flooded the platform, with victims ranging from everyday women to children and celebrities, highlighting severe lapses in AI safeguards despite xAI's policies prohibiting pornographic depictions of real persons.
Amid intense backlash and probes from authorities in Australia, the UK, EU, India, Malaysia, and others over failures in safeguards against non-consensual deepfakes and image-based abuse, xAI announced a massive $20 billion Series E funding round—surpassing its $15 billion target—with investments from Nvidia, Cisco, Fidelity, and sovereign funds. This influx aims to fuel infrastructure expansion and Grok's next models, even as critics question the timing amid growing calls for accountability and stronger ethical controls in generative AI.

Samantha Cole is a New York business correspondent reporting on Wall Street, tech industries, start-ups, and market trends.