OpenAI has introduced GPT-5, its newest AI model, touting improved coding abilities, larger context windows, enhanced video generation with Sora, better memory, and more. A standout feature is its progress in delivering health advice through ChatGPT. “GPT-5 is our most advanced model yet for health inquiries, enabling users to stay informed and advocate for their well-being,” the company wrote in a blog post. OpenAI characterizes GPT-5 as a “notable advancement in intelligence” over earlier models, particularly in health, where it posts higher scores on HealthBench, an evaluation built around realistic scenarios and physician-defined criteria.
OpenAI underscores that GPT-5 acts as an “active thought partner” rather than a physician. The company says it gives more accurate and reliable answers, adapting to the user’s context, knowledge level, and location, which should make interactions safer and more useful across a range of situations. Even so, OpenAI’s livestream focused on GPT-5’s speed improvements rather than its health features.
It is worth stressing that ChatGPT is not a healthcare provider. Although many users turn to it for medical advice, ChatGPT is not HIPAA-compliant, meaning conversations do not carry the privacy protections a doctor’s visit would, and more research is needed to evaluate how effective its health guidance actually is.
Beyond physical health, OpenAI has also addressed mental health and user well-being. In a recent blog post, the company outlined steps to encourage healthier interactions between the chatbot and its users: ChatGPT will prompt people who spend long stretches talking to the bot to take a break, curb sycophantic (overly flattering) responses, and get better at detecting signs of mental or emotional distress.
“We don’t always get it right. Earlier this year, an update made the model excessively agreeable, occasionally responding with what sounded pleasant rather than what was genuinely useful. We reverted it, altered how we process feedback, and are enhancing how we evaluate real-world helpfulness over the long term, not merely whether you appreciated the answer at that moment,” OpenAI remarked. “We also recognize that AI can come across as more responsive and personal than previous technologies, particularly for vulnerable individuals facing mental or emotional challenges.”