At this year’s Google Cloud Next conference, artificial intelligence was, unsurprisingly, the main attraction. The company announced a raft of updates to its Gemini AI models and new capabilities for AI agents, underscoring its ongoing rivalry with OpenAI and other major players in the industry.
One of the key highlights was the announcement of Gemini 2.5 Flash, a more efficient variant of Google’s latest Gemini 2.5 Pro model. Built on the same architecture as the Pro version, Gemini 2.5 Flash is tuned for greater speed and lower cost. The gains come largely from a technique known as “test-time compute” scaling, which lets the model dynamically allocate processing effort based on a task’s complexity rather than spending the same amount on every request. The same approach has also been credited with reducing training and operating costs for other models, such as DeepSeek’s R1.
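To make that idea concrete, here is a toy sketch of test-time compute scaling in Python. It is not how Gemini 2.5 Flash works internally; it simply illustrates the allocation principle, using extra reasoning samples plus a majority vote as a stand-in for extra “thinking,” and a mock model call in place of a real API.

```python
import re
from collections import Counter
from typing import Callable

def estimate_complexity(prompt: str) -> int:
    """Crude difficulty proxy: more clauses and numbers -> higher score."""
    clauses = len(re.split(r"[,;.?]", prompt))
    numbers = len(re.findall(r"\d+", prompt))
    return clauses + 2 * numbers

def solve_with_budget(prompt: str, model: Callable[[str], str]) -> str:
    """Spend more inference-time compute (here, more samples) on harder prompts."""
    complexity = estimate_complexity(prompt)
    n_samples = 1 if complexity < 4 else 5 if complexity < 10 else 15
    answers = [model(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]  # majority vote across samples

if __name__ == "__main__":
    mock_model = lambda p: "42"  # stand-in for a real model call
    print(solve_with_budget("What is 6 * 7?", mock_model))
```

In production systems the budget is usually measured in reasoning tokens rather than samples, but the trade-off is the same: spend more compute only when the task warrants it.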
Gemini 2.5 Flash isn’t available yet, but Google says it will soon roll out across its AI platforms, including Vertex AI, AI Studio, and the standalone Gemini app. Gemini 2.5 Pro, meanwhile, is already in public preview on Vertex AI and in the Gemini app, and it recently claimed the top spot on the Chatbot Arena leaderboard.
In addition to these updates, Google said Gemini will be built into Google Workspace to power a new set of productivity features, including audio versions of Google Docs, automated data analysis in Google Sheets, and Google Workspace Flows, a tool for automating routine multi-step work such as handling customer service inquiries across Workspace applications.
These upgrades stem from what Google labels “agentic AI”—a more advanced type of artificial intelligence capable of multi-step reasoning. However, a significant challenge for such systems lies in accessing the appropriate data necessary to complete tasks effectively.
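As a rough illustration (and not Google’s implementation), an agentic system typically runs a plan-act-observe loop: the model decides which tool to call, inspects the result, and repeats until it has enough to answer. Everything below, from the tool names to the keyword router, is hypothetical.

```python
from typing import Callable, Dict

# Hypothetical tools the agent is allowed to call.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_orders": lambda query: f"lookup for '{query}': status shipped",
    "send_email":    lambda body: "email queued",
}

def toy_agent(task: str, max_steps: int = 3) -> str:
    """Minimal plan-act-observe loop; a real agent lets the model choose the tool."""
    observations = []
    for _ in range(max_steps):
        # "Plan": a trivial keyword router stands in for the model's reasoning step.
        tool = "search_orders" if "order" in task else "send_email"
        result = TOOLS[tool](task)     # act
        observations.append(result)    # observe
        if "shipped" in result or "queued" in result:
            break                      # the agent decides it has enough to answer
    return "; ".join(observations)

if __name__ == "__main__":
    print(toy_agent("check order 1001"))
```

Every useful turn of that loop depends on the tools being able to reach real data, which is exactly the gap Google’s next announcement targets.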
To tackle this, Google announced it is embracing the Model Context Protocol (MCP), an open-source standard developed by Anthropic. MCP facilitates secure, two-way interactions between developers’ data sources and AI-driven tools. As explained by Anthropic, developers can opt to expose their data via MCP servers or create AI applications (MCP clients) that connect to those servers. Google DeepMind CEO Demis Hassabis confirmed that Gemini models will now support MCP, enabling them to access data more efficiently and produce more precise responses.
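For a sense of what that looks like in practice, here is a minimal MCP server sketched with the protocol’s official Python SDK (the mcp package). The order data, resource URI, and tool below are hypothetical examples rather than anything Google or Anthropic ships; a Gemini-powered MCP client would connect to a server like this to read and update the underlying data.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")

# Hypothetical in-memory "data source" this server exposes to AI clients.
ORDERS = {"1001": {"status": "shipped", "total": 42.50}}

@mcp.resource("orders://{order_id}")
def get_order(order_id: str) -> str:
    """Read side: expose a single order record as an MCP resource."""
    return str(ORDERS.get(order_id, "not found"))

@mcp.tool()
def update_status(order_id: str, status: str) -> str:
    """Write side: let a connected client update an order's status."""
    if order_id not in ORDERS:
        return "order not found"
    ORDERS[order_id]["status"] = status
    return f"order {order_id} set to {status}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, ready for an MCP client to connect
```

An MCP client, such as an AI application embedding Gemini, would launch or connect to this process, discover its resources and tools, and call them as needed during a conversation.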
Interestingly, OpenAI has also adopted MCP, indicating a wider industry shift towards more interoperable and secure AI systems.
And that’s merely the start—Google Cloud Next spans several days, with additional announcements on the horizon. Keep an eye out for more updates.