Google Introduces Gemma 4: An Innovative Open-Source Model and Ways to Test It

On Thursday, Google introduced the newest iteration of its open AI model, Gemma 4. Remarkably, Gemma 4 is a fully open-source model licensed under Apache 2.0, which is rare for cutting-edge models.

Open models can function locally on users’ devices, and Google claims that Gemma 4 can operate on “billions of Android devices” along with certain laptop GPUs.

“This open-source license creates a basis for complete flexibility for developers and digital sovereignty, granting you total control over your data, infrastructure, and models,” a post on Google’s blog states. “It enables you to build freely and deploy securely in any environment, whether on-premises or in the cloud.”

Many people know Google’s Gemini AI from the chatbot built into various Google products.

Gemma is also a large language model (LLM), developed using the same technology and research that Google DeepMind applied to Gemini 3.

Google portrays Gemma 4 as its “most powerful” open AI model to date.

Gemma vs. Gemini?

In what ways does Gemma differ from Gemini?

Gemini is Google’s proprietary, subscription-based AI offering: a family of multimodal AI models embedded in almost all of Google’s primary products, including Google Search, Gmail, Google Docs, and Google Cloud.

In contrast, Gemma 4 is an open AI model, meaning its weights and code are freely available to users. Gemma models can run on a user’s local hardware, even without internet access, and anyone can download Gemma 4 and run it on their own device at no cost. Running an open model locally offers a more secure and private experience, since conversations, uploaded files, and responses are never shared with external parties.

Developers can use open AI models like Gemma 4 to embed AI into their applications without recurring subscription costs.

What is Gemma 4?

Gemma 4 brings enhanced capabilities to Google’s open AI model line.

According to Google’s announcement, Gemma 4 features improved reasoning abilities, including multi-step planning and deep logic, and Google reports notable gains on math and instruction-following benchmarks.

Gemma 4 is also equipped for agentic workflows and can power local AI coding assistance. Furthermore, it can process audio and video for speech recognition and interpret visuals, such as charts.

Gemma 4 is available in four sizes, based on the number of parameters powering the model: 2 billion, 4 billion, 26 billion, and 31 billion.

Hugging Face indicates that these open-weight models are provided in pre-trained and instruction-tuned versions, granting developers greater flexibility.

According to Google, the model is trained on data covering more than 140 languages and supports a context window of up to 256,000 tokens. (The smaller E2B and E4B variants have a 128,000-token context window.)

Gemma 4 is now open and open source

Open does not necessarily equate to open source for AI models.

Earlier versions of Gemma were open-weight (meaning the trained model weights were publicly downloadable) but still subject to Google’s terms. Users could modify the local LLM but had to follow Google’s rules on usage and redistribution.

With Gemma 4, Google has launched the model as open and open source.

Google is releasing Gemma 4 under the Apache 2.0 open source software license.

This license permits anyone to download, modify, and use Gemma 4 for any purpose, personal or commercial, and to redistribute it without royalty obligations. The only requirements under Apache 2.0 are attribution and including a copy of the license with the AI model.

Searching for Gemma 4? How to access it.

Gemma 4 is available in Google AI Studio and can be downloaded from third-party platforms such as Hugging Face, Kaggle, and Ollama.
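For developers who want to try the model locally, a minimal sketch using the Ollama CLI might look like the following. The model tag `gemma4` is an assumption for illustration; check Ollama’s model library for the actual name before running:

```shell
# Pull the model weights to the local machine.
# (The "gemma4" tag is assumed; verify the real tag in Ollama's library.)
ollama pull gemma4

# Start an interactive chat session with the local model.
ollama run gemma4

# Or send a single prompt non-interactively.
ollama run gemma4 "Explain the Apache 2.0 license in one sentence."
```

Because the model runs entirely on local hardware, prompts and responses never leave the machine, which is the privacy benefit described above.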