Among the Magnificent 7 companies, Apple has been slower to embrace Generative AI than its peers. That changed this summer when Apple introduced Apple Intelligence at WWDC in June, signaling its long-awaited move to integrate Generative AI into its ecosystem. This week, at the Glowtime event, Apple introduced the newest version of its iPhone, showcasing how Apple Intelligence will work in practice and bringing Generative AI to a wider global audience.
The iPhone 16 is equipped with Generative AI capabilities through Apple Intelligence, enabling users to perform tasks like summarizing emails and editing images. The system is built around four core pillars: language, images (much of which we are already familiar with via ChatGPT or Copilot), action, and personal context. It’s the personal context pillar that could be a real game-changer when it is eventually made available to users, as it allows Apple Intelligence, which is said to be screen-aware, to tailor responses and interactions based on a deeper understanding of the iPhone’s owner. As Vox explained, “the example that Apple continues to offer for this future of Apple Intelligence is that you’ll be able to ask Siri to send photos of a specific group of people at a specific event to a specific person (e.g., 'Text Grandma the pictures of our family from last weekend’s barbecue'), and Siri will just do it.”
Of greater significance to Generative AI, in my view, is Apple’s commendable commitment to privacy, which stands in contrast to that of its competitors. The company has built an impressive privacy-first infrastructure with its Private Cloud Compute (PCC) system (read more here). Competitors like Google, OpenAI, and Microsoft rely heavily on vast user data to fuel their models, often at the expense of privacy and intellectual property.
But this focus on privacy could create some limitations for users over time. Apple Intelligence handles most tasks directly on the iPhone to protect privacy, but for more complex tasks, it uses the Private Cloud Compute (PCC) system for secure cloud processing. For the most advanced queries, Apple integrates OpenAI's ChatGPT. While it's still unclear how these systems will work together, this approach may affect performance, battery life, speed, and personalization compared to AI models that rely entirely on cloud processing power.
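To make the tiered arrangement concrete, here is a purely illustrative sketch of the routing logic described above. Apple has not published an API for this, so every name here (`Tier`, `route_query`, the complexity score, the opt-in flag) is hypothetical; the sketch only captures the three-step idea: try the device first, escalate to Private Cloud Compute, and hand the most advanced queries to ChatGPT with the user's consent.

```python
# Hypothetical sketch -- NOT Apple's actual API. It only illustrates the
# three processing tiers described in the text.
from enum import Enum


class Tier(Enum):
    ON_DEVICE = "on-device"                   # default: data never leaves the phone
    PRIVATE_CLOUD = "Private Cloud Compute"   # secure Apple-operated cloud
    THIRD_PARTY = "ChatGPT"                   # opt-in, for the most advanced queries


def route_query(complexity: int, user_opted_in: bool = False) -> Tier:
    """Pick a processing tier from a rough 1-10 complexity score (hypothetical)."""
    if complexity <= 3:
        return Tier.ON_DEVICE
    # Heavier tasks go to PCC; so does anything when the user
    # has not opted in to third-party processing.
    if complexity <= 7 or not user_opted_in:
        return Tier.PRIVATE_CLOUD
    return Tier.THIRD_PARTY


# Example: a simple summarization stays on-device,
# while a complex opted-in query escalates to ChatGPT.
print(route_query(2))                       # Tier.ON_DEVICE
print(route_query(9, user_opted_in=True))   # Tier.THIRD_PARTY
```

The notable design point is that the privacy-preserving tiers are the defaults: the third-party model is only reachable when the user explicitly opts in, which mirrors how Apple has described the ChatGPT integration.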
Apple’s strong commitment to privacy also presents challenges for its long-term competitiveness. Companies like OpenAI, Google, and Microsoft continuously analyze vast amounts of user data, pulling information from across the web to quickly evolve their models and release new features. This kind of real-time learning and frequent updating is harder to achieve with Apple’s privacy-first approach. As a result, Apple may face slower innovation in its generative AI models, making it difficult to keep pace with other companies.
This creates an intriguing race in the generative AI space. On one side, we have cloud-based models from companies like OpenAI, Google, and Microsoft, which rely heavily on user data. On another, we see a new class of privacy-focused generative AI models, led by Apple and companies like Safe Superintelligence Inc. (SSI), which is committed to developing AI systems that prioritize safety and privacy. A third group consists of open-source initiatives that prioritize transparency and collaboration.
The big question is, which approach will win in the long run? Will Apple be able to stick to its privacy commitments and positioning and maintain its market-leading position?
Here’s hoping they will.