How Apple Intelligence’s Privacy Stacks Up Against Android’s ‘Hybrid AI’
- by Wired
- Jul 04, 2024
At its Worldwide Developers Conference on June 10, Apple announced a late but major move into AI with “Apple Intelligence,” confirming months-long rumors that it would partner with OpenAI to bring ChatGPT to iPhones.
Elon Musk, one of the cofounders of OpenAI, was quick to respond on X, branding ChatGPT-powered Apple AI tools “creepy spyware” and an “unacceptable security violation.”
“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk wrote.
But at a time when the privacy of AI is under the spotlight, the iPhone maker says Apple Intelligence offers a new way of protecting people’s data, with the firm determining which core tasks can be processed entirely on the device.
For more complex requests, Apple has developed a cloud-based system called Private Cloud Compute (PCC), running on servers built with its own silicon, which the company presents as a new way to protect privacy in the nascent AI age.
Apple senior vice president of software engineering Craig Federighi calls its strategy “a brand-new standard for privacy in AI.” Are Apple’s claims valid, and how does the iPhone maker’s strategy compare to the “hybrid AI” offerings available on devices including Samsung’s Galaxy range?
AI, Meet E2E
With PCC, Apple has designed “a new end-to-end AI architecture” and “a private cloud enclave extension of a user's iPhone,” allowing more control over data, says Zak Doffman, CEO of Digital Barriers, which specializes in real-time surveillance video storage and analysis.
In practice, this means Apple can mask the origin of AI prompts and prevent anyone, including the iPhone maker itself, from accessing your data. “In theory, this is as close to end-to-end encryption for cloud AI as you can get,” Doffman says.
Apple has put together a “pretty impressive privacy system” for its AI, says Bruce Schneier, chief of security architecture at Inrupt. “Their goal is for AI use, even in their cloud, to be no less secure than the phone's security. There are a lot of moving parts to it, but I think they've done pretty well.”
And so far, there’s nothing else quite like it. The “hybrid AI” used on Samsung Galaxy devices running Google Android with Google’s Gemini Nano models handles some AI processes locally, turning to the cloud when necessary to enable more advanced capabilities.
The idea is to provide as much privacy as possible while offering powerful AI functionality, says Camden Woollven, group head of AI at GRC International Group, an IT governance firm. “This means you could potentially see some pretty sophisticated AI, even on midrange smartphones.”
But this type of hybrid AI processing may still pose risks, because some data is sent to cloud servers without the levels of accountability that Apple offers with its PCC. “With hybrid AI, some data must leave the device and be processed elsewhere, making it more susceptible to interception or misuse,” says Riccardo Ocleppo, founder and director of Open Institute of Technology, which provides technology-focused courses.
Yet Google and its hardware partners argue privacy and security are a major focus of the Android AI approach. Justin Choi, VP and head of the security team for the mobile eXperience business at Samsung Electronics, says its hybrid AI offers users “control over their data and uncompromising privacy.”
Choi describes how features processed in the cloud are protected by servers governed by strict policies. “Our on-device AI features provide another element of security by performing tasks locally on the device with no reliance on cloud servers, neither storing data on the device nor uploading it to the cloud,” Choi says.
Google says its data centers are designed with robust security measures, including physical security, access controls, and data encryption. When processing AI requests in the cloud, the company says, data stays within secure Google data center architecture and the firm is not sending your information to third parties.
Meanwhile, Galaxy’s AI engines are not trained with user data from on-device features, says Choi. Samsung “clearly indicates” which AI functions run on the device with its Galaxy AI symbol, and the smartphone maker adds a watermark to show when content has used generative AI.
The firm has also introduced a new security and privacy option called Advanced Intelligence settings to give users the choice to disable cloud-based AI capabilities.
Google says it “has a long history of protecting user data privacy,” adding that this applies to its AI features powered on-device and in the cloud. “We utilize on-device models, where data never leaves the phone, for sensitive cases such as screening phone calls,” Suzanne Frey, vice president of product trust at Google, tells WIRED.
Frey describes how Google products rely on its cloud-based models, which she says ensures “consumers' information, like sensitive information that you want to summarize, is never sent to a third party for processing.”
“We’ve remained committed to building AI-powered features that people can trust because they are secure by default and private by design, and most importantly, follow Google’s responsible AI principles that were first to be championed in the industry,” Frey says.
Apple Changes the Conversation
Rather than simply matching the “hybrid” approach to data processing, experts say Apple’s AI strategy has changed the nature of the conversation. “Everyone expected this on-device, privacy-first push, but what Apple actually did was say, it doesn’t matter what you do in AI, or where; it’s how you do it,” Doffman says. He thinks this “will likely define best practice across the smartphone AI space.”
Even so, Apple hasn’t won the AI privacy battle just yet: The deal with OpenAI, which sees Apple uncharacteristically opening up its iOS ecosystem to an outside vendor, could put a dent in its privacy claims.
Apple rejects Musk’s claims that the OpenAI partnership compromises iPhone security, saying there are “privacy protections built in for users who access ChatGPT.” The company says you will be asked for permission before your query is shared with ChatGPT, while IP addresses are obscured and OpenAI will not store requests, but ChatGPT’s data use policies still apply.
Partnering with another company is a “strange move” for Apple, but the decision “would not have been taken lightly,” says Jake Moore, global cybersecurity adviser at security firm ESET. While the exact privacy implications are not yet clear, he concedes that “some personal data may be collected on both sides and potentially analyzed by OpenAI.”