OpenAI to Relax ChatGPT Rules, Add Personalities and Age-Gated Adult Content

OpenAI announced on Tuesday that it plans to loosen restrictions on its ChatGPT chatbot, introducing a policy that will allow verified adult users to access erotic content as part of its effort to “treat adult users like adults.”

Among the notable changes is the forthcoming release of an enhanced version of ChatGPT that will let users personalize their AI assistant’s personality, with options for more human-like interactions, heavier emoji usage, or a more friend-like demeanor. A larger update is planned for December, when OpenAI will roll out comprehensive age-gating that permits adult users to access erotic content once their age has been verified. However, OpenAI has not yet detailed how age verification will work or what additional safety measures will accompany adult content.

In response to concerns about youth safety, OpenAI launched a specialized ChatGPT experience for users under 18 in September. This framework automatically redirects younger users to age-appropriate content, effectively blocking graphic and sexual material. Additionally, the company is advancing technology that predicts a user’s age based on behavioral patterns during their interactions with ChatGPT.

Sam Altman, CEO of OpenAI, addressed the changes in a post on X, noting that overly strict safety measures designed to protect mental health had inadvertently made the chatbot “less useful/enjoyable to many users who had no mental health problems.” The shift toward more stringent safety controls followed a tragic incident involving a California teenager whose parents filed a lawsuit claiming that ChatGPT provided him with harmful advice before his death by suicide. Altman asserted that the company’s latest safety tools now allow OpenAI to ease restrictions while still addressing serious mental health concerns.

The U.S. Federal Trade Commission has also launched an investigation into several tech firms, including OpenAI, over the potential negative effects of AI chatbots on children and teenagers. Altman emphasized the importance of careful implementation in light of these concerns, maintaining that the company’s new safety measures strike a balance between usability and safety.

As OpenAI pivots to create a more flexible and personalized user experience, the company is simultaneously navigating complex issues surrounding safety and mental health in the digital age, aiming to provide a responsible yet engaging AI interaction for its users.
