Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks such as editing photos and wishing a friend a happy birthday.
But for this to work, these companies need something from you: more data.
In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will combine information from many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.
Is this information you are willing to share?
This change has significant implications for our privacy. To provide new tailored services, companies and their devices need more persistent and intimate access to our data than before. In the past, the way we used apps and retrieved files and photos on phones and computers was relatively isolated. AI needs the big picture to connect the dots between what we do across apps, websites and communications, security experts say.
“Do I feel safe providing this information to this company?” said Cliff Steinhauer, director of the National Cybersecurity Alliance, a nonprofit focused on cybersecurity, describing the question users should ask themselves about the companies' AI strategies.
All of this is happening because OpenAI's ChatGPT disrupted the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term artificial intelligence. They are convinced that this new type of computer interface, which constantly studies what you are doing to offer assistance, will become indispensable.
The biggest potential security risk of this shift comes from a subtle change in how our new devices work, experts say. Because AI can automate complex actions, like deleting unwanted objects from a photo, it sometimes requires more computing power than our phones can handle. This means that more of our personal data may have to leave our phones to be handled elsewhere.
The information is transmitted to the so-called cloud, a network of servers that processes the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors, and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal and intimate data that was once for our eyes only – photos, messages and emails – can now be linked and analyzed by a company on its servers.
Tech companies say they have gone to great lengths to protect people's data.
For now, it's important to understand what will happen to our information when we use AI tools, so I got more information from companies about their data practices and interviewed security experts. I plan to wait and see if the technologies work well enough before deciding if it's worth sharing my data.
Here's what to know.
Apple's intelligence
Apple recently announced Apple Intelligence, a suite of artificial intelligence services and its first major entry into the AI race.
The new AI services will be integrated into its faster iPhones, iPads and Macs starting this fall. People will be able to use the tools to automatically remove unwanted objects from photos, create summaries of web articles, and write replies to text messages and emails. Apple is also overhauling its voice assistant, Siri, to make it more conversational and give it access to data across apps.
At the conference this month where Apple unveiled Apple Intelligence, the company's senior vice president of software engineering, Craig Federighi, explained how it might work: Federighi received an email from a colleague asking him to postpone a meeting, but he was supposed to see a show that evening starring his daughter. His phone then pulled up his calendar, a document containing details about the show, and a maps app to predict whether he would be late to the show if he agreed to meet at a later time.
Apple has said it is trying to process most of its AI data directly on its phones and computers, which would prevent others, including Apple, from having access to the information. But for tasks that need to be sent to servers, Apple said, it has developed safeguards, including encrypting the data and immediately deleting it.
Apple has also taken measures to ensure that its employees do not have access to the data, the company said. Apple also said it will allow security researchers to test its technology to make sure it delivers on its promises.
Apple's commitment to deleting user data from its servers sets it apart from other companies that retain data. But Apple was unclear about what new Siri requests might be sent to the company's servers, said Matthew Green, a security researcher and associate professor of computer science at Johns Hopkins University, who was briefed by Apple on its new technology. Anything that leaves your device is inherently less secure, he said.
Apple said that when Apple Intelligence is released, users will be able to see a report of which requests leave the device to be processed in the cloud.
Microsoft's AI laptops
Microsoft is bringing artificial intelligence to old-fashioned laptops.
Last week it began rolling out Windows computers called Copilot+ PCs, which start at $1,000. The computers contain a new type of chip and other hardware that Microsoft says will keep your data private and secure. The PCs can generate images and rewrite documents, among other new AI-powered capabilities.
The company also introduced Recall, a new system to help users quickly find documents and files they've worked on, emails they've read, or websites they've browsed. Microsoft likens Recall to having photographic memory built into your PC.
To use it, you can type casual phrases, like “I'm thinking about a video call I had with Joe recently while he was holding a coffee cup that said ‘I Love New York.'” The computer will then retrieve the recording of the video call containing those details.
To achieve this, Recall takes screenshots every five seconds of what the user is doing on the machine and compiles those images into a searchable database. The snapshots are stored and analyzed directly on the PC, so the data is not examined by Microsoft or used to improve its artificial intelligence, the company said.
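The mechanism Microsoft describes — periodic snapshots compiled into a local, searchable database — can be illustrated with a minimal sketch. This is not Microsoft's code: the sample rows, schema, and timestamps are invented for illustration, and SQLite's built-in full-text search stands in for Recall's on-device index.

```python
import sqlite3

# Sketch of a Recall-like local index: text captured from periodic
# screenshots is stored with timestamps in a full-text-search table,
# entirely on the local machine (here, in memory).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE snapshots USING fts5(taken_at, screen_text)")

# In the real system a snapshot is taken every five seconds; these
# rows are invented stand-ins for recognized on-screen text.
rows = [
    ("2024-06-20 10:00:00", "Video call with Joe, coffee cup: I Love New York"),
    ("2024-06-20 10:00:05", "Editing quarterly report in Word"),
]
db.executemany("INSERT INTO snapshots VALUES (?, ?)", rows)

# A free-form phrase query retrieves the matching moment.
hit = db.execute(
    "SELECT taken_at FROM snapshots WHERE snapshots MATCH ?",
    ('"coffee cup"',),
).fetchone()
print(hit[0])  # timestamp of the snapshot that mentioned the coffee cup
```

Because the database lives on the device, a query like this never has to reach a company server — which is also why researchers worry about what a hacker could read from it.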
However, security researchers have warned of the potential risks, explaining that the data could easily expose everything you typed or viewed if it were hacked. In response, Microsoft, which had intended to launch Recall last week, postponed its release indefinitely.
The PCs run the newest version of Microsoft's Windows 11 operating system, which has multiple layers of security, said David Weston, a company executive who oversees security.
Google's artificial intelligence
Google last month also announced a suite of AI services.
One of its biggest revelations was a new AI-based scam detector for phone calls. The tool listens to phone calls in real time, and if the caller sounds like a potential scammer (for example, by asking for a bank PIN), the phone alerts you. Google said users must turn the scam detector on themselves, and that it runs entirely on the phone, meaning Google will not listen to the calls.
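Google has not published how its detector works; as a toy illustration of the on-device idea, screening could happen locally against a transcript, with nothing leaving the phone. The phrase list, function name, and matching rule below are all invented for the sketch — the real system uses an AI model, not keywords.

```python
# Toy sketch of on-device scam screening: a call transcript snippet is
# checked locally against red-flag phrases; no audio or text is uploaded.
# The phrase list is invented for illustration only.
SCAM_PHRASES = ("bank pin", "gift card", "wire the money", "verify your account")

def looks_like_scam(transcript: str) -> bool:
    """Return True if the transcript contains a known red-flag phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in SCAM_PHRASES)

print(looks_like_scam("Please read me your bank PIN to confirm"))  # True
print(looks_like_scam("Dinner at seven works for me"))             # False
```

The design point is the data flow, not the matching logic: because the check runs on the handset, the alert can fire without the conversation ever being sent to Google.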
Google announced another feature, Ask Photos, which requires sending information to the company's servers. Users can ask questions like “When did my daughter learn to swim?” to bring out the first images of their baby swimming.
Google said its employees could, in rare cases, review Ask Photos conversations and photo data to address abuse or harm, and the information could also be used to help improve its photos app. To put it another way, your question and the photo of your child swimming could be used to help other parents find pictures of their children swimming.
Google said its cloud was locked down with security technologies such as encryption and protocols to limit employee access to data.
“Our approach to protecting privacy applies to our AI features, whether they are installed on your device or in the cloud,” Suzanne Frey, a Google executive who oversees trust and privacy, said in a statement.
But Green, the security researcher, said Google's approach to AI privacy seems relatively opaque.
“I don't like the idea of my very personal photos and research ending up in a cloud that's not under my control,” he said.