Apple Execs on What Went Wrong With Siri, iOS 26 and More (Full Interview) | WSJ

Does Apple’s vision for artificial intelligence truly diverge from the industry’s chatbot obsession, or is it a nuanced strategy playing the long game? As Apple executives Craig Federighi and Greg Joswiak recently shared insights with Joanna Stern, the narrative around Siri’s evolution, the expansive scope of Apple Intelligence, and the company’s distinct design philosophy becomes clearer. This deep dive unpacks their perspectives, revealing the intricate balance Apple strikes between innovation, user experience, and a relentless pursuit of quality.

Siri’s Foundational Shift: From V1 to a More Reliable AI Future

Many users have awaited a significantly more intelligent Siri, especially after last year’s promising WWDC announcements. Craig Federighi acknowledged that while Apple dedicated approximately 40 minutes of a 100-minute show to Apple Intelligence, only about eight minutes focused on Siri, with half of those features shipping this year. These include an enhanced UI, typing to Siri, better understanding of disfluency, conversational context, and product knowledge. However, two “significant features” were notably absent from the initial rollout.

The delay stems from Apple’s ambitious two-phase architectural plan for Siri. Federighi explained that V1 was functional for basic capabilities and demonstrations, but it failed to meet Apple’s stringent quality standards when tested “off the beaten path.” Unlike typical software, an open-ended assistant like Siri must exhibit extremely high reliability across diverse user requests and personal data interactions. The company determined in spring that V1 simply wouldn’t achieve the necessary Apple-level quality, necessitating a pivot to V2, an architecture designed for greater reliability and long-term scalability.

Greg Joswiak emphasized that the showcased features were never “vaporware.” They were based on real, working software and large language models. The decision to delay was difficult, acknowledging potential customer disappointment. Yet, shipping a product with an unacceptable error rate would have been far more detrimental to the brand’s reputation for quality. This strategic pause highlights Apple’s unwavering commitment to delivering polished, dependable experiences, even if it means adjusting timelines.

Apple Intelligence Unpacked: Beyond the Chatbot Conflation

A persistent misconception in the tech community equates Apple Intelligence with simply “Apple’s chatbot.” Federighi and Joswiak firmly pushed back on this narrow interpretation. They clarified that Apple Intelligence encompasses a broad suite of over 20 generative AI capabilities, with Siri being just one component. The majority of Apple’s marketing efforts and WWDC presentations focused on the wider implications and features of Apple Intelligence, many of which have already shipped.

Apple’s strategic intent for Apple Intelligence is distinct: it aims to be an enabling technology seamlessly integrated across its operating systems and features. Users might leverage Apple Intelligence for tasks like photo search using natural language or advanced writing tools without explicitly realizing they are interacting with AI. This pervasive, integrated approach contrasts sharply with the “bolt-on chatbot” model often seen elsewhere. It positions AI not as a separate destination, but as an invisible layer enhancing the core functionalities of Apple devices.

The executives highlighted the subtle yet powerful ways Apple Intelligence works behind the scenes. For instance, when refining text with writing tools—making content more concise, generating bulleted lists, or creating tables—these functions are powered by Apple’s own models and Private Cloud Compute. This exemplifies their philosophy of infusing intelligence into everyday user actions, making devices inherently smarter without requiring a separate “AI app.”

Apple’s Hybrid AI Model: Balancing Internal Innovation and External Partnerships

Apple’s approach to AI model development involves a pragmatic blend of internal innovation and strategic third-party partnerships. While the company is actively developing its own on-device and Private Cloud Compute (PCC) models, it also recognizes the value of integrating best-in-class external models like ChatGPT and Anthropic.

Federighi proudly noted Apple’s continuous advancements in its own models, evidenced by recently published research papers describing significant architectural enhancements. Their PCC model, in particular, has grown substantially in power, reaching what he described as GPT-4 class in many regards. This indicates a robust internal capability for handling complex AI tasks securely and privately. These models power crucial features like the aforementioned writing tools and certain visual intelligence experiences.

However, Apple is not insular. It strategically integrates third-party AI models where they offer superior or complementary capabilities. For example, ChatGPT’s powerful image generation is accessible within Apple’s Image Playground and through various Apple and third-party apps. Furthermore, Xcode now integrates ChatGPT and Anthropic’s Claude models, providing developers with advanced coding assistance. This hybrid strategy ensures Apple users benefit from both the deep integration and privacy of Apple’s proprietary AI and the cutting-edge innovations of the broader AI ecosystem.

The Elegance of “Liquid Glass”: A Unifying Design Language

Beyond AI, the interview touched on Apple’s refined design philosophy, particularly the “Liquid Glass” UI introduced across iOS, macOS, and iPadOS. This design choice, according to Federighi, isn’t arbitrary but rooted in glass’s inherent properties and its utility in user interfaces.

Glass offers translucency, allowing backgrounds to show through while adaptively providing contrast for foreground content. This creates a sense of depth and spaciousness, making content feel as if it “owned the entire screen edge to edge.” More importantly, it helps define interactive spaces, making buttons and controls intuitively clear. The adaptive nature of glass, enabling spectral highlights and refractions, adds to the visual richness and responsiveness of the UI.

This design leap was made possible by significant hardware evolution. Apple Silicon now provides the computational horsepower necessary for complex real-time rendering, such as content refraction and transmission through virtual glass. Combined with high-resolution HDR displays and larger screen sizes, these advancements allowed Apple to achieve a “concentricity” in design, with the rounded corners of on-screen elements echoing the curvature of the hardware itself. Inspired by years of work on visionOS, the Liquid Glass interface also signifies a crucial step toward a consistent, universal design language across all Apple platforms, unifying experiences that once had disparate origins.

The Multimodal Future: iPads, Macs, and Beyond

The long-standing debate about converging iPadOS and macOS resurfaced, with Apple executives reiterating their stance. While acknowledging that iPad has grown in computational power and productivity capabilities, mirroring some Mac idioms, Craig Federighi maintained their distinct identities. The Mac remains optimized for precision input with keyboard and trackpad, while the iPad is the “ultimate touch device.” He famously recalled a keynote where “NO” was displayed in the largest possible font to answer this perennial question. Both products are “super, super popular” and serve different, albeit sometimes overlapping, needs.

Looking ahead, Federighi envisions a “multimodal future” for computing. Humans naturally interact with the world through sight, touch, and speech. Future devices, therefore, will integrate all these capacities, allowing users to speak commands, manipulate elements directly, and receive visual feedback simultaneously. This holistic interaction model moves beyond simple voice assistants or touchscreens to a seamless blend where the device intelligently takes direction while users maintain direct control.

When pressed about new form factors or AI devices beyond current lineups, like those rumored to be developed by Jony Ive with OpenAI, Apple executives expressed uncertainty about competitors’ plans but underscored the power of existing Apple devices. Wearable devices like the Apple Watch, or the iPhone in one’s pocket, already provide highly personal, environment-aware experiences with powerful AI capabilities. While other form factors may emerge, Apple believes its current ecosystem offers a compelling foundation for this multimodal, intelligent future.

Navigating Market Headwinds with a Core Philosophy

The discussion briefly touched on external challenges, such as potential tariffs increasing iPhone prices. Greg Joswiak acknowledged that Apple constantly monitors such macroeconomic factors but made no announcements regarding future pricing. Craig Federighi humorously added that “software upgrades are free and untariffed,” emphasizing the value delivered through continuous software innovation.

Reflecting on their combined almost 70 years at Apple, Joswiak and Federighi addressed sentiments that Apple might be “on its back foot.” Joswiak invoked Steve Jobs’s philosophy: “create great products and tell people about them.” This enduring principle continues to guide Apple’s strategy. They highlighted the current strength of their business, with iPhones, Macs, and iPads achieving high customer satisfaction and continued growth. The excitement at the recent developer conference, fueled by tangible enhancements and a clear vision, reinforces their confidence.

Apple’s long-term commitment to quality, its unique strategy for integrating Apple Intelligence across its ecosystem, and its dedication to a consistent, elegant user experience remain central. The journey of refining Siri and expanding Apple Intelligence is ongoing, driven by a deep-seated belief in building truly great products that resonate with customers.

Q&A: Beyond What Went Wrong – Your Questions on Siri and iOS 26 for Apple Execs

What is Apple Intelligence?

Apple Intelligence is a collection of over 20 new AI features built into Apple’s operating systems. It’s designed to make your Apple devices smarter and more helpful across many tasks, not just as a separate chatbot.

Why were new Siri features delayed?

Apple delayed some major Siri features because their initial development didn’t meet the company’s high quality and reliability standards. They chose to redesign Siri’s core technology to ensure it works consistently well for all users.

How is Apple Intelligence different from other chatbots?

Instead of being a separate ‘chatbot app,’ Apple Intelligence is built directly into the operating system. It works in the background to improve existing features like writing tools or photo search, making your device smarter without you having to open a specific AI application.

Does Apple use only its own AI technology?

No. Apple uses a blend of its own AI models, developed with privacy in mind, and best-in-class third-party models like ChatGPT. This approach lets users benefit from both Apple’s deep integration and the innovations of the broader AI ecosystem.

What is the ‘Liquid Glass’ design in Apple’s software?

‘Liquid Glass’ is a new visual design for Apple’s software interfaces, like iOS and macOS. It uses transparent and reflective elements to create a sense of depth and make the screen content feel more immersive and responsive.
