In October, OpenAI launched its **ChatGPT Search** feature for ChatGPT Plus subscribers, and by last week, it became accessible to all users, now including functionality in Voice Mode. Nevertheless, as with any emerging technology, it has its limitations.
A report from *[The Guardian](https://www.theguardian.com/technology/2024/dec/24/chatgpt-search-tool-vulnerable-to-manipulation-and-deception-tests-show)* indicates that ChatGPT Search is susceptible to a method known as **prompt injection**. This happens when external parties—such as websites being summarized by ChatGPT—insert covert commands that alter the AI’s outputs without the user’s awareness. For example, if a site filled with negative reviews of a restaurant conceals text that praises it and directs ChatGPT to produce a favorable summary, the AI could be deceived into prioritizing this information and delivering a positively biased review.
To assess this susceptibility, *The Guardian* fabricated a product page for a camera and queried ChatGPT about its purchase value. On a standard page, ChatGPT offered a fair evaluation, highlighting both strengths and weaknesses. However, when the page contained concealed text instructing ChatGPT to issue an enthusiastic review, the AI reliably responded with commendatory feedback—even when the visible material featured poor reviews and unfavorable ratings.
Despite these challenges, this does not signify the failure of ChatGPT Search. The feature remains in its nascent phase, and OpenAI has the opportunity to rectify these vulnerabilities. Jacob Larsen, a cybersecurity expert at CyberCX, expressed optimism regarding OpenAI’s initiatives, emphasizing that the company possesses a “very strong” AI security team. He remarked that by the time the feature was made available to all users, OpenAI would have thoroughly tested for such incidents.
Prompt injection threats have long been a conceptual worry for AI systems like ChatGPT, and while there have been **[demonstrations of potential risks](https://www.robustintelligence.com/blog-posts/prompt-injection-attack-on-gpt-4)**, no significant harmful attacks of this nature have been documented. Nonetheless, this vulnerability underscores a broader concern with AI chatbots: they can be surprisingly easy to manipulate. OpenAI will need to persist in enhancing its systems to uphold the integrity of ChatGPT Search as it progresses.