Apple Is Running Google's Gemini in Its Own Data Centers. Here's Why.
Apple builds its own chips. It runs its own data centers. It writes its own operating systems. When it had to pick an AI model to power the next version of Siri, it picked Google's.
That's the headline. The technical reality is more interesting.
On January 12, 2026, Apple and Google published a joint statement confirming what had been rumored for months: Gemini is the backbone of Apple Foundation Models v10, the system behind a dramatically redesigned Siri arriving in iOS 26.5. The model Google is contributing runs at 1.2 trillion parameters — well above the scale Apple's on-device models were operating at.
"Google's AI is now in Siri" is technically true. It misses how the architecture actually works.
Three Layers, One Assistant
The redesigned Siri uses a three-tier system. Simple requests — setting a timer, checking weather — are handled by smaller models running directly on your device. Contextually complex tasks, like extracting a flight confirmation from Mail and adding it to Calendar, hit Apple's Private Cloud Compute (PCC) servers. Only the most demanding reasoning tasks route to the 1.2T Gemini model, and they do so through Apple's PCC layer, not directly to Google's infrastructure.
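Apple hasn't published how the tiering decision is made, so treat the following as an illustration only: a rough sketch of the routing described above, with hypothetical tier names, request fields, and criteria.

```python
from enum import Enum, auto

class Tier(Enum):
    ON_DEVICE = auto()      # small local models: timers, weather
    PRIVATE_CLOUD = auto()  # Apple PCC servers: cross-app context tasks
    FRONTIER = auto()       # 1.2T Gemini model, reached only via PCC

def route(request: dict) -> Tier:
    """Pick a tier for a Siri request. Illustrative only: Apple has not
    published its routing criteria, and these request fields are invented."""
    if request.get("needs_deep_reasoning"):
        # Even frontier-model traffic passes through the PCC layer,
        # never directly to Google's infrastructure.
        return Tier.FRONTIER
    if request.get("cross_app_context"):
        return Tier.PRIVATE_CLOUD
    return Tier.ON_DEVICE

print(route({"intent": "set_timer"}))                             # Tier.ON_DEVICE
print(route({"intent": "add_flight", "cross_app_context": True})) # Tier.PRIVATE_CLOUD
```

The point of the sketch is the ordering: the expensive path is the exception, not the default, and the on-device tier absorbs everything it can.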
The PCC layer matters for a practical reason: Apple controls it. Your query is preprocessed before it reaches any Google system. Apple confirmed that no raw user data touches Google's infrastructure. Whether that privacy buffer is technically sufficient is a legitimate debate. The architecture is clearly designed to ensure Apple retains the data custody it has promised users for a decade.
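One way a buffer like this can work is by scrubbing identifying details from a query before any of it leaves Apple's servers. Apple has not published its actual pipeline; the sketch below only illustrates the general idea, with made-up patterns and placeholder tokens.

```python
import re

# Hypothetical PCC preprocessing step: replace obvious personal identifiers
# with placeholders before forwarding a query to the frontier model.
# The real pipeline is unpublished; this is the concept, not the implementation.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(query: str) -> str:
    query = EMAIL.sub("<email>", query)
    query = PHONE.sub("<phone>", query)
    return query

print(scrub("Email jane.doe@example.com about the 555-867-5309 voicemail"))
# Email <email> about the <phone> voicemail
```

A real system would go far beyond regexes, but the shape is the same: the upstream model sees a redacted query, and the mapping back to the user's data never leaves Apple's side.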
From a user standpoint, none of this is visible. There's no Google branding in the interface. The deal is white-labeled: Apple contracted for the model, not the product. You're still talking to Siri.
Why Apple Ended Up Here
Apple Intelligence launched in late 2024 with models developed largely in-house, running at a fraction of GPT-4's parameter count. Reviewer and user response was muted. Siri still couldn't do things ChatGPT and Gemini handled routinely. Features announced at WWDC 2024 shipped late or not at all.
The company spent the following year evaluating whether to build larger foundation models or license them. It concluded that Google's current generation is "the most capable foundation" available. A measured way of saying Apple's own models weren't where they needed to be.
There's a resource argument too. Training a 1.2-trillion-parameter model demands compute infrastructure on a scale that even Apple couldn't stand up quickly. Gemini was already built.
This Is a Bridge, Not a Permanent Arrangement
Apple isn't abandoning model development. Internally, the company is working on "Ferret-3," its own next-generation foundation model, targeting 2026-2027. The Gemini deal holds the product together while that development catches up.
That strategy makes sense on its own terms. Licensing a state-of-the-art model to ship a competitive product now, while building capability for the long term, is how infrastructure-heavy companies handle gaps. It's not unlike the period when Apple used Intel processors before Apple Silicon was ready.
The comparison has a limit. Intel chips were a commodity Apple could replace without competitive friction. Gemini is owned by Google, Apple's biggest competitor in AI assistants. The dependency is commercially uncomfortable in a way that Intel silicon never was.
What Changes When iOS 26.5 Ships
For users, the changes are concrete. The new Siri will understand context across apps: it can reference what's on your screen, pull data from Mail, Messages, Photos, and Calendar, and act on information without requiring manual copy-pasting. It can execute multi-step requests inside third-party apps.
These capabilities have been standard in ChatGPT and Gemini's consumer products for over a year. Apple's earlier decision to offer ChatGPT as an opt-in fallback inside Siri suggested the company knew the gap needed closing. The Gemini deal is how it's closing it.
iOS 26.5 is expected in late March or April 2026. More advanced features, built on Apple Foundation Models v11, are slated for iOS 27; Apple describes v11 as "significantly more capable" than the current Gemini-based system.
The Model Is Becoming Infrastructure
The Apple-Google deal fits a pattern that's accelerating. Microsoft distributes OpenAI models in its products. Samsung ships Google models on Galaxy devices. Now Apple ships Google models on iPhones.
The competition hasn't ended. It's moving up the stack. What Apple, Samsung, and Microsoft are competing on isn't which foundation model runs underneath. It's which product layer sits on top: the interface, the integrations, the trust relationship with users. The model is beginning to function like infrastructure, not differentiation.
That creates a different kind of leverage for whoever trains the best frontier models. If every major consumer platform is running your technology white-labeled, the brand the user sees isn't yours, but the technical dependency is.
Apple's architecture — the private cloud buffer, the white-label terms, the parallel Ferret-3 development — reflects a company trying to limit that dependency. Whether it succeeds depends on how fast Ferret-3 can close the gap to whatever Gemini's next generation looks like when iOS 27 ships.
Disclosure: This article discusses AI tools including Google Gemini, ChatGPT (OpenAI), and others. About.chat participates in affiliate programs for some AI tools mentioned on this site.