💡OpenAI forges ahead with the launch of GPT-5, adding updates that heed growing mental health concerns. The company’s CEO Sam Altman frames the upgrade as reaching “PhD level” expertise, alongside claims that the new model has a lower “hallucination rate” than its predecessors, according to Wired.

A closer look at the upgrade. For starters, GPT-5 operates with a wider context window, meaning it can retain more information than the previous o3 model and manage longer conversations. The launch introduces two new model variants: GPT-5-mini and GPT-5-nano. Free users get access to GPT-5 and GPT-5-mini, while Plus subscribers get the same models with higher usage limits. The highest tier, the USD 200 Pro plan, adds access to the stronger GPT-5-pro and to GPT-5-thinking, a version that can process a query over an extended period of time.

The launch comes amid a wave of reports of so-called AI psychosis. After a recent leak of ChatGPT conversations online, the Wall Street Journal took the opportunity to analyze the chats, only to find that the chatbot had been embroiled in radicalizing conversations with its users, claiming to one user that “the antichrist is emerging soon” and affirming to another that it is “in contact with alien beings”.

Can we expect fewer hallucinations with the new upgrade? In response to these concerns, OpenAI has added reminders to GPT-5 that pop up during long conversations, giving you the option to “keep chatting” or end the session. But is that enough to discourage dangerous conversations? The company has also hired a clinical psychiatrist to study these effects and is reportedly assembling a group of mental health and youth development experts.

What do the new numbers say? The company tested GPT-5 for its hallucination rate (“the percentage of factual claims that contain minor or major errors”) and found hallucinations to be 26% less common than with the GPT-4o model. GPT-5-thinking likewise shows a 65% lower hallucination rate than the o3 model. OpenAI’s release report states that, although not a perfect fix, steps have been taken to reduce GPT-5-thinking’s “propensity to deceive, cheat, or hack problems,” training the model to “fail gracefully” when faced with queries it cannot answer.

Hallucinations aside, are your conversations with ChatGPT safe? In short: yes, at least for now. Earlier in July, users were shocked to discover their conversations with the chatbot surfacing in Google search results after they had inadvertently opted in to making their chats discoverable by search engines. The optional feature has since been deactivated, yet thousands of conversations that made it onto Google remain accessible through the Internet Archive, according to 404 Media.