Apple Is Rebuilding Siri as a Real Chatbot — Here Is What WWDC 26 Will Show

Apple has spent two years watching ChatGPT, Gemini, and Claude reshape how people interact with software. At WWDC 26 on June 8, the company appears ready to respond with something more substantive than the incremental Apple Intelligence features it has shipped so far.

Reports from Bloomberg and other sources describe a redesigned Siri built around two connected capabilities: a standalone chatbot app with a persistent chat interface, and a system-wide AI agent that can operate across apps with full context of what is on screen. The project is reportedly codenamed "Project Campos." If the reports are accurate, WWDC 26 will be Apple's most significant AI announcement since the original Siri launch in 2011.

The Standalone App Problem

Apple's core challenge with Siri has been architectural. The existing Siri is a voice-first, session-less assistant: it responds to individual requests and retains no memory of prior exchanges. That design made sense in 2011, when most use cases involved setting timers and sending texts. It is poorly suited to the workflows that ChatGPT and Gemini have normalized: drafting, iterating, retrieving context, and following multi-step instructions.

The new Siri app described in the reports addresses this directly. Users would be able to revisit previous conversations, search their interaction history, and switch between voice and text; sending documents or photos for analysis is also reportedly supported. The interface is described as chat-like in the vein of Messages, which is a deliberate framing: Apple wants Siri to feel like a conversation, not a command-line input.

Whether users will adopt it as a primary chatbot depends on execution. The persistent interface is necessary but not sufficient. Response quality, speed, and the reliability of on-screen context handling will determine whether people reach for the new Siri or continue opening standalone apps like ChatGPT.

The System-Wide Agent

The more technically ambitious component is the AI agent layer. According to Bloomberg, the new Siri will function as a "system-wide AI agent" capable of understanding on-screen context and taking actions across apps. A user looking at a flight confirmation in Mail could ask Siri to add it to their calendar and update a related note without manually switching between applications.
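
How such an agent would invoke third-party apps is not spelled out in the reports, but Apple's existing App Intents framework already lets apps declare typed actions that the system and Siri can discover and run. The sketch below uses that framework purely as an illustration; the AddFlightToCalendarIntent type and its parameters are hypothetical, and nothing confirms the new Siri will drive apps this way.

```swift
import AppIntents
import EventKit

// Hypothetical App Intent an app could expose so a system-wide agent,
// having read a flight confirmation on screen, can add it to the calendar.
struct AddFlightToCalendarIntent: AppIntent {
    static var title: LocalizedStringResource = "Add Flight to Calendar"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    @Parameter(title: "Departure Date")
    var departureDate: Date

    func perform() async throws -> some IntentResult {
        let store = EKEventStore()
        // Assumes the user grants calendar access; real code would handle denial.
        _ = try await store.requestFullAccessToEvents()

        let event = EKEvent(eventStore: store)
        event.title = "Flight \(flightNumber)"
        event.startDate = departureDate
        event.endDate = departureDate.addingTimeInterval(3 * 60 * 60) // placeholder duration
        event.calendar = store.defaultCalendarForNewEvents
        try store.save(event, span: .thisEvent)

        return .result()
    }
}
```

The point of the example is the contract, not the calendar code: an agent that understands on-screen context still needs apps to expose well-defined actions it can call with structured parameters.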

This capability requires what Apple is calling on-screen awareness: the ability to read and reason about content displayed across the operating system, not just within specific app integrations. To support this, Apple is reportedly replacing its Core ML framework with a new system called CoreAI. The underlying model powering the experience is described as Apple Foundation Model v11, designed to run locally on Apple Silicon.
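
Neither CoreAI nor AFM v11 has a public interface to point to, but Apple's current FoundationModels framework, introduced at WWDC 25, gives a rough sense of what on-device inference looks like from the developer's side: a session object wrapping the system model, with availability that depends on hardware and settings. The snippet below follows that pattern and is illustrative only, not a preview of whatever CoreAI turns out to be.

```swift
import FoundationModels

// Rough sketch of on-device inference using Apple's current
// FoundationModels framework; a future CoreAI layer may look different.
func summarizeOnDevice(_ text: String) async throws -> String? {
    // The system model can be unavailable (unsupported hardware,
    // Apple Intelligence disabled, model still downloading).
    switch SystemLanguageModel.default.availability {
    case .available:
        let session = LanguageModelSession(
            instructions: "Summarize the user's text in two sentences."
        )
        let response = try await session.respond(to: text)
        return response.content
    case .unavailable:
        // A production path would fall back to a cloud model here.
        return nil
    }
}
```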

Running inference locally has been Apple's consistent position, and it carries real advantages for privacy-sensitive use cases. The tradeoff is that local models face hardware constraints that cloud-based systems do not. The scope of what AFM v11 can handle on-device, versus what it defers to cloud processing, will likely determine where the agent experience performs well and where it falls short.

Gemini as the External Brain

For queries that exceed what AFM v11 handles locally, Apple has reportedly structured a $1 billion deal with Google, positioning Gemini as the primary external knowledge engine for Siri. This effectively makes Gemini the default AI for broad knowledge retrieval on Apple devices — a substantial distribution win for Google.
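
How Apple would split traffic between the local model and Gemini is not described in any of the reporting, so the sketch below is purely hypothetical: QueryRouter and its heuristics are invented for illustration, and the real routing would presumably weigh privacy classification, required world knowledge, and latency in ways that are not visible from the outside.

```swift
import Foundation

// Purely hypothetical sketch of local-first routing with a cloud fallback.
// None of these types or functions are real Apple or Google APIs.
enum QueryHandler {
    case onDevice       // handled by the local foundation model
    case cloudKnowledge // deferred to an external engine such as Gemini
}

struct QueryRouter {
    /// Decide where a request should run. Real criteria would likely mix
    /// privacy classification, required world knowledge, and latency budget.
    func route(_ query: String, touchesPersonalData: Bool) -> QueryHandler {
        if touchesPersonalData {
            return .onDevice        // keep personal context local
        }
        if query.count > 500 || query.localizedCaseInsensitiveContains("latest") {
            return .cloudKnowledge  // broad or time-sensitive knowledge
        }
        return .onDevice
    }
}
```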

The arrangement also appears to reposition OpenAI's ChatGPT integration, announced with considerable fanfare in 2024, as a secondary or optional layer. If Gemini handles the bulk of complex queries, ChatGPT's role within the Apple ecosystem narrows to specific creative or coding use cases where users actively choose it.

The competitive implications extend further. A Gemini-integrated Siri available on over a billion active devices would create a meaningful new surface for Google's AI, even as Apple retains the primary user relationship. For developers building on Apple platforms, Gemini-backed Siri capabilities could reduce the need to build separate AI integrations into their apps.

What WWDC 26 Needs to Deliver

Apple's track record with AI features since the original Apple Intelligence announcement has been uneven. Several capabilities were announced and then delayed. The best AI chatbots are measured on what they ship, not what they announce, and Apple will be judged the same way.

WWDC 26 needs to show working software, not a roadmap presentation. A convincing demo of the on-screen agent handling multi-step tasks, combined with a clear ship date for the standalone Siri app, would establish credibility. A vague preview with fall availability caveats would reinforce the narrative that Apple is still catching up to ChatGPT and Gemini.

The underlying architecture (a local model with Gemini as an external layer, plus cross-app agent capabilities) is directionally correct for where the market is heading. The question is whether Apple can execute it at the reliability level users expect from its software. Reading and acting on arbitrary on-screen content across a broad installed base is a significantly harder engineering problem than responding to discrete voice queries.

Assuming the reports are accurate, iOS 27 will represent the most substantive change to Siri since its introduction. Whether it is competitive with what OpenAI, Google, and Anthropic will have shipped by fall 2026 is a question that June's announcement will begin to answer, but not fully resolve.

For a comparison of the major AI chatbots currently available, see About.chat's guide to the best AI chatbots of 2026. The chatbot.gallery directory tracks capabilities and updates for Siri and 100+ other AI assistants.