A recent report by The Guardian reveals that Grok, X's AI-powered assistant, is still being used to generate harmful and otherwise questionable content that is then published on the social media platform. The report confirms that little has changed in terms of content moderation policies at xAI and X since the introduction of Aurora, xAI's image generation model. Grok's lack of recognizable guardrails first captured the public's attention in August last year, shortly after the announcement that Black Forest Labs' FLUX.1 models would power Grok's image generation capabilities.

According to The Guardian, users have been asking Grok to generate racist imagery targeting soccer players and managers. The issue is so pervasive that the Premier League confirmed it has a dedicated team that tracks and reports online abuse, with reports that can ultimately lead to legal action. The league is believed to have made at least 1,500 such reports and has introduced filters to help players block abuse on their social media accounts.

Statements from Signify, an organization that tracks and reports digital hate, and from the Center for Countering Digital Hate (CCDH) reveal just how concerning the matter is. Signify's position is that we are only beginning to see the problem, and that it will worsen over the course of the year. Callum Hood, head of research at CCDH, points out that through its revenue sharing program, X has managed to motivate and reward the spread of hate on the platform, and with Grok's image generation capabilities it has even handed those same users the tools to make the task easier.