Elon Musk recently declared Grok “the most fun AI in the world!” The chatbot comes from Musk’s own startup, xAI, and is offered to premium subscribers of X, the social media platform he runs. Musk’s claim about Grok’s entertainment value likely rests on the absence of significant restrictions in Grok-2, the latest iteration of the product.
This updated version of Grok boasts an improved large language model for text interactions and, through a partnership with Black Forest Labs and its Flux model, introduces image generation for the first time. Users have been actively sharing amusing images created by Grok on X, with many expressing delight at the chatbot’s more permissive nature.
Grok can be switched to a “fun” mode, which appears to encourage creativity and boundary-pushing, leading users to generate provocative images, such as depictions of political figures in outlandish scenarios. While some celebrate this as a win for free expression, others regard it as an irresponsible tool, particularly ahead of the upcoming U.S. elections.
Critics like Alejandra Caraballo, a civil rights attorney at Harvard Law School’s Cyberlaw Clinic, have labeled the image generator “reckless,” citing its potential for political disinformation and deepfakes. The concern is heightened because Musk, an influential figure actively supporting Donald Trump, is well positioned to spread misinformation widely during a critical election period.
Users of Grok-2 can produce images via the Flux model. Despite a few boundaries, such as prohibitions on certain explicit content and politically sensitive images, the tool has still generated controversial depictions. The contrast between Grok’s permissive approach and Google’s more restrictive Gemini image generation tool has ignited discussions around ethical AI usage and tech companies’ responsibilities regarding misinformation.
Meanwhile, public company board members are increasingly worried about workplace AI risks. Reports indicate that they are wary of employees feeding proprietary information into tools like ChatGPT, and of generative AI producing misleading information. Separately, Google is updating how it displays citations within its AI-generated summaries following earlier inaccuracies.
In the realm of entertainment, SAG-AFTRA has reached an agreement with AI startup Narrativ covering AI voice replicas, enabling performers to negotiate compensation for the use of their voices in digital advertisements.
In AI research, studies from Oxford University have examined whether large language models can improve task execution by planning over graph structures, highlighting the models’ current struggles with complex, multi-step tasks.
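The Oxford work itself isn’t reproduced here, but the general idea of graph-based planning can be sketched: decompose a task into a dependency graph of subtasks and have the model solve them in topological order, feeding each subtask the results it depends on. In this minimal illustration, `call_llm` and the example plan are hypothetical placeholders, not the papers’ method:

```python
# Sketch of "planning in graphs": subtasks form a dependency graph,
# executed in topological order so each step sees its prerequisites' results.
from graphlib import TopologicalSorter

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real language-model call.
    return f"<answer to: {prompt}>"

# Each key is a subtask; its value is the set of subtasks it depends on.
plan = {
    "outline itinerary": set(),
    "book flights": {"outline itinerary"},
    "book hotel": {"outline itinerary"},
    "write summary": {"book flights", "book hotel"},
}

results = {}
for subtask in TopologicalSorter(plan).static_order():
    # Gather the outputs of this subtask's prerequisites as context.
    context = {dep: results[dep] for dep in plan[subtask]}
    results[subtask] = call_llm(f"{subtask} given {context}")

print(results["write summary"])
```

The appeal of structuring plans this way is that independent subtasks (here, the two bookings) are made explicit and each model call stays small; the difficulty the research highlights is getting models to produce and follow such graphs reliably for genuinely complex tasks.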
Finally, a new film titled AFRAID, directed by Chris Weitz, is set to further explore the “scary AI” genre, depicting the unsettling consequences of an advanced AI assistant surveilling a family.