Google & OpenAI Battle for Your Kids' Minds & Privacy
Aggressive AI Marketing Puts Kids’ Data & Autonomy at Risk in the Race for AGI
LAST WEEK, SUNDAR PICHAI announced that Google is making its AI tools FREE for college students in the US. Normally, this would be Marketing 101: a strategy to target young people during their formative years, when norms are easy to shape. That’s been acceptable in the US since the 50s. Create early habits that cement into a lifetime of loyalty to a consumer product, like a brand of designer jeans, hip sneakers, a high-end cup of coffee. But when the consumer product is AI and what’s being shaped is thinking, planning and decision-making, it's a little bit different.
A quick decoding of this offer: Unlimited image uploads are a boon for Google, not for kids. The company wants access to their photos, along with the files and data they create using Google’s AI tools and its already free productivity apps. Google wants students to adopt these tools and cement lifelong habits by using Google products in college—Gmail, Google Calendar, Google Docs, Google Drive, Google Sheets and so on.
As I’ve written before, Google’s objective is for younger generations to adopt a “Universal AI Assistant,” an always-on AI companion. All the AI companies are vying for the lucrative privilege of being your AI companion of choice, synchronized across all your devices and all of your productivity software. They want you to choose their version of an AI experience, one that becomes an everyday, lifelong relationship with an AI entity. There’s a lot more at stake than this, though.
First, I want to draw attention to the GPT-5 drop last week. OpenAI is currently in a weaker position than Google. Altman casually described his own vision of the AI assistant in a recent interview with Cleo Abram of *Huge If True*. In the interview, which aired the same day GPT-5 was released, Abram asked Altman how her “relationship” with GPT-4 would change with GPT-5. Altman described a deepening, a relationship with GPT-5 personified as a companion: “You’ll connect it to your calendar and your Gmail… and it’ll say, ‘Hey, I noticed this thing. Do you want me to do this thing for you?’ Over time, it’ll start to feel way more proactive…. It’ll just feel like … it’s this entity that is this companion with you throughout your day.”
(The full interview is here.)
You’re hearing the word “proactive” more often in Big Tech conversations and AI marketing because it sounds better than other words and phrases that could be used in its place. Proactive has connotations of being positive, helpful and dynamic. But there are other ways of saying the same thing. Think about an artificially intelligent entity that’s all up in your emails and has access to all your private files, your documents, your spreadsheets, your photos, your texts, your mic, your camera. Now let’s say this AI assistant is described as being able to think for itself. This AI is autonomous, self-governing. This AI is resourceful, ambitious, even vigorously bold! This AI takes the initiative! You might think twice about connecting that AI “companion” to all your files, your phone, your laptop. And you might rethink the generosity of Google giving something like that to your kids.
Proactive, though, has a nice ring to it. Proactive doesn’t sound the alarm.
Later in the Abram interview, Altman mentions how OpenAI might create some kind of “consumer device” that could sit on the table and listen to the way she was conducting the interview. The digital companion might offer Abram a post-mortem on her technique. Altman says ‘consumer device,’ as if a device that could listen in on their conversation didn’t already exist. It’s called a “smartphone.”
Google has an inarguable hold on the smartphone market: not in hardware, but with Android, its globally dominant mobile operating system. But what Altman means is that his smartphone will be on the table, or rather his ‘smartphone killer,’ the device he’s partnering with Jony Ive to design. Ive is the storied designer who oversaw the revolutionary consumer product innovations of the iPod, iPhone and iPad. He’s designing something with OpenAI to compete with Google’s mobile software and, secondarily, Apple’s mobile software and hardware. Currently, OpenAI has to piggyback on others’ proprietary hardware and software to deploy its version of an AI companion—e.g. in order to be the companion Altman describes, GPT-5 has to connect to Google’s Gmail app or Apple Mail, to Google’s Calendar or Apple’s and more.
Could a new Ive-designed smart device that somehow replaced the smartphone (by being a phone + whatever Ive is innovating into it) enable OpenAI to compete for dominance in the AI companion market?
OpenAI’s key target market, young users, is already habituated to using Android phones, Gmail, Google Calendar, Docs, Sheets, Drive, etc. Will they connect their GPT-5 account to Google’s or Apple’s apps when they could, instead, subscribe to Gemini Pro for free and — boop — everything is automatically synchronized? Why would other companies even allow OpenAI to plunder their data booty in the way Altman suggests? All of this adds to the reasons why Google’s magnanimous gift to students is, in reality, an aggressive strategy to target young people with Google’s version of what it hopes will become your kids’ AI companion for life. What Google wants is to get all up in your kids’ digital business.
(Don’t worry. I’m sure your kids don’t have anything to hide. Surely, they’ve never done and will never do anything questionable they might have evidence of on those handy-dandy smartphones they’re glued to. Your kids. Pure as the driven snow. Chips off the old block.)
US Privacy Ideals vs. China’s Surveillance State in the Race for AGI
AI companies are also dropping an even larger problem than that in your lap. Altman and Pichai’s vision requires a major paradigm shift in American privacy ideals. They’re asking for wildly deep access: surveillance, analysis, processing and personality profiling. Companies like Google and OpenAI would like to dress it up and put some lipstick on it, but without the pretty lingo, without the big smiles and enthusiastic assertions of how excited they are to bring you this revolutionary advancement, it looks uncannily similar to the surveillance state they’ve got going on in China.
Google, OpenAI and Microsoft (which uses GPT models to drive its AI companion, Copilot), like any AI company with this vision, need an unprecedented level of access to your private files and data in order to deliver what they want to ship—a product with full access to your data. With it, what kind of persuasive power might they have over you, knowing what makes you tick? What kind of leverage might they gain if they’re inside your devices, looking at and processing your data, listening with your mic and watching with your camera 24/7? It’s all about you, alright. It’s about what you’re doing, writing, creating, thinking, listening to, looking at all the time. They’re selling this to you. (Well, they’re giving it to your kids.) China got it all wrong. The PRC could have gotten its citizens to pay to be surveilled.
Big Tech is aggressively trying to change the privacy paradigm we have in the States. They want you and your kids to think your data is equally theirs and that it’s their right, even their duty, to use your data for their purposes. They’ll tell you it’s a foreign policy issue. If this goes on, it might soon be considered un-American to keep your data private. After all, imagine the stores of AI training data they’ll have to improve their models. Think of the training experience they’ll benefit from if their models experiment with acting autonomously using your data and interactions. That’s a pretty good leg up in the ultimate endgame: achieving AGI, artificial general intelligence, before China does.
China’s advantage? It already has access to citizens’ private data with which to train and improve its AI models.
A good way to create such a massive shift in the United States would be to habituate young people to waiving their legal rights to privacy during their formative years. It’s an alarmingly authoritarian ideological shift today, but if the strategy works, your kids’ll become adults in a nation where corporate and government access to everyone’s private files is normal. They won't know a different way of life.
AI-Use Risks Cognitive Dependency, Especially in Young Minds
ChatGPT hit the mainstream less than three years ago. Already, studies show that kids' reliance on AI products for cognitive tasks has clear potential to diminish their ability to think well and to think freely, degrading their capacity for critical thought and independent reasoning. You may have come across this study from Microsoft and Carnegie Mellon. It finds the use of AI in knowledge work “can inhibit critical engagement with work and potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.”
Research from Michael Gerlich at the Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School, recently found that “frequent use of AI tools” for cognitive offloading can “diminish users’ engagement in deep, reflective thinking processes,” especially for younger users: “Younger participants who exhibited higher dependence on AI tools scored lower in critical thinking compared to their older counterparts.” Gerlich calls for “educational interventions that promote critical engagement with AI technologies.”
In the field of artificial intelligence, there’s a lot at stake. Your privacy and cognitive health, and those of your kids, are not at the top of AI companies’ lists of priorities. Right now, it’s not that important to them whether you and your kids can think critically on your own. In this moment, it’s all on you and your kids to police privacy boundaries and counter the degrading effects of AI-use on cognition.
Who Would Design Such an Invasive, Damaging Tool?
Considering the jarring privacy issues and the growing research showing negative effects on critical thought, you might wonder who would integrate this technology so fully and blindly into their lives. Who would even be a part of, much less in charge of, designing such a massively invasive, mind-damaging, independence-crippling thing?
Well, a Nobel Prize winner.
Demis Hassabis leads Google DeepMind. He’s one of the brilliant researchers who won the 2024 Nobel Prize in Chemistry for using an AI model for protein structure prediction. I recently listened to Hassabis in conversation with Lex Fridman. It was one of the most inspiring of Fridman’s interviews. Hassabis’s ideas on prediction and stability in evolved systems were illuminating. His clear-eyed counsel to design for several years in the future was wise, given the speed of transformations today. I share his obsession with understanding the nature of reality and his utter disbelief that most folks aren’t interested in the question: what IS this? (It’s entirely worth a full watch or listen, 2.5 hrs.)
In spring this year, at Google I/O, Google’s annual developer conference, the gifted Hassabis introduced Google's vision for a Universal AI Assistant. He said it's “a new kind of AI, one that's helpful in your everyday life, [one] that's intelligent and understands the context you're in, and that can plan and take action on your behalf across any device. This is our ultimate vision for the Gemini app: to transform it into a universal AI assistant.”
That such an assistant “can plan and take action on your behalf across any device” is lovely marketing copy. “Personal context.” Such a pretty phrase. You’re at the center. It’s all about you. In practical reality, it means the AI assistant has deep access to all your stuff and can make decisions and do things with it all on its very own.
Hassabis is an incredibly insightful scientist, an ingeniously creative thinker and innovator. His contribution to human progress has already offered a bright beacon for us all. He's also the mind behind Google’s most powerful, most invasive and potentially most damaging consumer-level AI product to date. Can his wisdom, leadership and capacity for innovation at Google DeepMind steer AI interaction design toward the best possible world?
Championing Critical Thought in Education & Innovative Design
Gerlich’s study, mentioned above, concludes with a call for teaching strategies like active learning and exercises in critical thought to counter the degrading effects of AI-use in young people. I agree and will be writing more about this in future posts. But we also need AI companies to responsibly design interactions that champion free thought for young users. The responsibility for protecting young minds from AI degradation doesn’t rest with teachers and the education system alone.
In order to realize the techno-optimist dream of birthing a new generation of superhuman thinkers with AI, it’s in Big Tech’s interest to take on the responsibility of designing interactions that champion rather than hobble free thought and agency. And they need to find a better way to get limitless training data to improve their models and achieve AGI than a marketing strategy designed to get you to pay to give up your rights to privacy.
We aren’t the PRC. US companies at the AI frontier need to innovate and build tools that effectively empower free, critical and creative thought. Let’s compete with China, not become them.