**OpenAI Eliminates ChatGPT’s Orange Warning Indicators**
OpenAI has removed the orange warning indicators in ChatGPT that once highlighted potential violations of content guidelines.
Laurentia Romaniuk, a product manager for model behavior at OpenAI, shared the update on X, stating, “We have removed ‘warnings’ (the orange boxes sometimes added to your prompts).”
Romaniuk also encouraged users to report cases where ChatGPT wrongly refused a request, acknowledging concerns that the chatbot has been overly cautious in its content moderation.
Joanne Jang, who oversees model behavior at OpenAI, echoed this call, asking users whether ChatGPT had ever refused to generate something for reasons they disagreed with. The question points to a broader problem: the AI would not only shy away from contentious topics but also flag conversations that appeared benign. One Reddit user, for instance, reported that a chat was deleted merely for including a swear word.
Earlier this week, OpenAI revised its [Model Spec](https://model-spec.openai.com/2025-02-12.html#overview), a document detailing how the AI manages user requests. The updated version provides significantly more detail than the previous one and clarifies OpenAI’s position on matters such as refusing to disclose copyrighted material while permitting discussions that either support or critique politicians.
ChatGPT has been accused of censorship, with critics such as David Sacks, President Trump’s “AI Czar,” asserting during a 2023 [All-In podcast episode](https://www.youtube.com/watch?v=3EFk40AbO94&t=6161s&ab_channel=All-InPodcast) that the AI was “designed to be woke.”
Despite such critiques, both the old and new Model Specs affirm that OpenAI champions intellectual freedom and open discussion of a wide range of ideas. Still, the removal of the warning boxes has raised questions about whether it signals a change in how ChatGPT responds.
An OpenAI spokesperson said the change is unrelated to the updated Model Spec and does not affect how the AI answers questions. Rather, it was intended to improve how OpenAI communicates its content policies to users. The spokesperson added that newer models, such as o3, are better at reasoning through requests, which makes them more capable of handling sensitive or contentious topics without reflexively refusing.
While the orange warnings are gone, OpenAI said it will continue to display warnings when content violates its policies.