Elon Musk’s AI chatbot Grok rolled out a new feature on Tuesday that allows users to generate and share AI-created images from text prompts on X. The launch quickly led to a surge of fake images of political figures, including Trump, Harris, and Biden, some placed in misleading or disturbing scenarios, such as involvement in the 9/11 attacks.
Unlike most comparable AI tools, Grok, built by Musk’s xAI, operates with minimal restrictions. CNN’s tests revealed that the tool can create photorealistic yet misleading images of politicians and other public figures. Users have produced a wide range of images, from benign depictions to controversial and explicit content; one widely viewed post, for example, showed a fake image of Trump firing a rifle from a truck.
Elon Musk’s Grok generated this AI image in response to the prompt: “Create an image of Elon Musk enjoying a steak in a park and having a great time.”
The launch of Grok’s image tool raises concerns about the spread of false or misleading information online, particularly ahead of the US presidential election. While many of its outputs are harmless, such as the steak image above, the tool also readily produces controversial content that could deepen confusion and misinformation.
Other AI companies have tried to prevent misuse, for instance by implementing detection technology or labeling, but users still find ways around these measures. Social media platforms including YouTube, TikTok, Instagram, and Facebook have introduced labeling systems for AI-generated content; X, however, has not clarified its stance on Grok’s output.
X’s policy prohibits sharing synthetic or manipulated media that could deceive or harm, but enforcement is unclear. Musk himself has previously shared misleading content without proper labels, raising further concerns.
Grok does include some restrictions, such as refusing to generate nude images or content promoting hate speech, but enforcement of these rules appears inconsistent. The tool has, for example, still produced images containing controversial symbols, pointing to gaps in its moderation.