A new foundation for AI on Android

A foundation model learns from a broad variety of data sources, producing an AI system that can adapt to a wide range of tasks rather than being trained for a single narrow use case. Today we announced Gemini, our most capable model yet. Gemini was designed for flexibility, so it can run on everything from data centers to mobile devices, and it is optimized in three sizes: Ultra, Pro, and Nano.

Gemini Nano, optimized for mobile

Gemini Nano, our most efficient model for on-device workloads, runs directly on mobile silicon and enables a range of important use cases. Running on-device supports features where data must never leave the phone, such as suggesting replies in end-to-end encrypted messaging apps. It also delivers a consistent experience with deterministic latency, so features are always available, even when you have no network.

Gemini Nano is distilled from the larger Gemini models and specifically optimized to run on mobile silicon accelerators. It enables powerful features such as high-quality text summarization, contextual smart replies, and advanced proofreading and grammar correction. For example, Gemini Nano's enhanced language understanding lets Pixel 8 Pro succinctly summarize content in the Recorder app, even when the phone has no network connection.



Pixel 8 Pro using Gemini Nano in the Recorder app to summarize meeting audio without a network connection.
Gemini Nano also powers Smart Reply in Gboard on Pixel 8 Pro, available now as a developer preview that can be enabled in Settings. Support will roll out to WhatsApp, Line, and KakaoTalk on Android over the next few weeks, with more messaging apps coming in the new year. The on-device AI model saves you time by suggesting high-quality responses with conversational awareness.




Smart Reply from Gboard within WhatsApp using Gemini Nano on Pixel 8 Pro.

Android AICore, a new system service for on-device models

Android AICore is a new system service in Android 14 that provides easy access to Gemini Nano. AICore simplifies integrating AI into your apps by handling model management, runtimes, safety features, and more.

AICore is private by design. Following the example of Android's Private Compute Core, it is isolated from the network, and its open-source APIs provide transparency and auditability. As part of our commitment to building and deploying AI responsibly, we have also built in dedicated safety features to make AI safer and more inclusive for everyone.



AICore manages models, runtime, and safety functions.
AICore supports Low-Rank Adaptation (LoRA) fine-tuning with Gemini Nano. This powerful technique lets app developers train small LoRA adapters on their own data. AICore loads the adapter at runtime, producing a high-performance large language model fine-tuned for the app's own use cases.
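To give a sense of why LoRA adapters stay small, the idea is to leave a base weight matrix W frozen and learn a low-rank update, so the effective weight becomes W + B·A, where B and A together have far fewer parameters than W. The following is a minimal NumPy sketch of that math only; it is not the AICore API, and all names in it are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one (hypothetical) layer in the base model.
d_out, d_in = 64, 128
W = rng.standard_normal((d_out, d_in))

# LoRA adapter: two small trainable matrices of rank r << min(d_out, d_in).
r = 4
A = rng.standard_normal((r, d_in)) * 0.01   # trainable "down" projection
B = np.zeros((d_out, r))                    # trainable "up" projection, zero-init

def adapted_forward(x, scale=1.0):
    """Forward pass with the LoRA update applied: y = (W + scale * B @ A) @ x."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B zero-initialized, the adapter starts as an exact no-op on the base model.
assert np.allclose(adapted_forward(x), W @ x)

# The adapter stores far fewer parameters than the full weight matrix.
full_params = W.size               # 64 * 128 = 8192
lora_params = A.size + B.size      # 4 * 128 + 64 * 4 = 768
print(lora_params, full_params)
```

Only A and B would be trained on the app's data, which is what keeps per-app adapters compact enough to ship and load alongside a shared base model.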

AICore takes advantage of new ML hardware, such as the latest Google Tensor TPU and the NPUs in silicon from Qualcomm Technologies, Samsung S.LSI, and MediaTek. AICore and Gemini Nano are coming to Pixel 8 Pro, with more devices and silicon partners to be announced in the coming months.

Building with Gemini

We're excited to combine cutting-edge AI research with easy-to-use tools and APIs that help Android developers build with Gemini on-device. If you're interested in building apps using Gemini Nano and AICore, sign up for our early access program.

