This week, Elon Musk declared Grok “the most fun AI in the world.” Musk’s enthusiasm is not entirely impartial: Grok is the chatbot built by his xAI startup and is available to premium subscribers of X, the social media platform he owns. The perceived fun seems to stem from the minimal restrictions in Grok-2, the latest version.
Grok’s new version pairs an improved large language model (LLM) for text interaction with new image generation capabilities, powered by Black Forest Labs’ Flux model through a partnership. Many users on X appear to enjoy this hands-off approach, sharing provocative images that include a satirical portrayal of Kamala Harris as a dominatrix and Donald Trump as Rambo. Supporters have hailed Grok-2 as the most “uncensored” model of its type, praising Musk for promoting freedom of expression for both people and machines.
That exuberance contrasts sharply with the criticism directed at Google’s Gemini image generation tool, which faced backlash earlier this year for inaccuracies produced by its diversity-oriented guardrails. For many supporters of Musk and Grok, Grok-2’s permissiveness amounts to a rebuke of perceived political correctness.
Not everyone is amused, however. Critics, particularly those wary of political misinformation, worry that Grok could fuel election-related trolling and disinformation. Alejandra Caraballo, a civil rights attorney, called the new image generation feature “reckless” and “irresponsible,” especially with U.S. elections approaching. Deepfake threats loom large on a platform like X, which is known for its political content and is led by Musk, who has openly campaigned for Donald Trump.
While Grok has some limitations, such as refusing to generate explicit content, it allows contentious images to be created with ease. Across a range of prompts, some themes appeared to be off-limits while others were freely generated.
Critics counter that similar tools are already accessible across the internet, and that, amplified by Musk’s influence and platform, such an unrestrained AI could prove detrimental rather than entertaining.
In other AI news, corporate board members are increasingly concerned about the risks AI poses in the workplace and are educating themselves about its impact on productivity and its attendant risks.
Additionally, Google is revamping its AI search overviews to make citations more visible, following criticism over inaccuracies in earlier versions.
SAG-AFTRA, the largest union representing performers and broadcasters, has struck a deal with AI startup Narrativ that allows its members to create audio voice replicas for commercial use, a step toward establishing ethical standards for AI in the industry.
As discussions about AI continue, research at the University of Oxford indicates that large language models could improve in task planning through graphical representations.
In entertainment news, a new film titled “AFRAID,” directed by Chris Weitz, explores the darker side of AI, contrasting with the benign AI portrayals seen in other films. The movie depicts a family’s experience with a sophisticated AI, raising questions about privacy and safety.