This digest compiles the latest from VOX.
Today’s VOX Roundup
Hey Google, stop trying to write my emails!
27 Mar 2026, 12:30 pm by Marina Bolotnikova
I first noticed it when, a few months ago, I opened an email from Ian, my literary agent. Before I’d had a chance to read anything he’d written, Gmail was recommending a full, fleshed-out, AI-generated reply, ventriloquizing ideas for a book and even my feelings about the job transition I’d recently made. It had mined my inbox to infer why Ian was writing to me and ingested bits of my style, even signing off with the lowercase “m” that I use with people with whom I have an easy familiarity.
For around a decade, Google had been suggesting very generic, sometimes monosyllabic “smart replies” — things like “Okay” or “Thanks!” or “Any thoughts?” I’ve used these to send quick acknowledgements to emails I’d have otherwise forgotten about. But in the last couple years, Gmail has begun to offer fully formed draft replies that presume to impersonate my own, individual reactions to my interlocutors’ questions, ideas, and emotions.
This felt like a striking turn. I reflected with some sadness on the idea of sending one of these to someone who matters to me — how dehumanizing to both me and Ian it would feel to make him read a counterfeit subjectivity pretending to be my own.
You might say this is no big deal; maybe it gives you time back for deeper work or more meaningful parts of your life (I wouldn’t begrudge that at all — AI saves me time, too!). We’re all drowning in too much email, much of it pointless or lacking any great meaning. Isn’t that exactly the kind of day-to-day tedium that we should happily invite AI to liberate us from?
But I think that this machine-generated personal correspondence, which is only likely to spread further into other forms of communication, has preoccupied me because there’s something deeper going on here. A lot of ink has been spilled in the last few years about AI-generated writing and its social consequences — how it will deskill millions of workers, outsource our thinking, confuse kids growing up in the AI age about the difference between real and synthetic friends, and so on. We already know that AI language is unnervingly good at sounding like it’s the product of a fellow consciousness. But the particular creepiness of elaborate email autocomplete is that it’s training on and simulating your consciousness. And as it does so, it also gives you a little less reason to actually be conscious.
AI writing and “cognitive surrender”
Like many knowledge workers who derive their living and their identities from cognitive capacities now being at least partially replicated in silicon, I have a complicated and ambivalent relationship with generative AI. I now depend on it to research almost every story I work on, a purpose for which it’s obviously very useful (despite those who still insist it can never be useful for anything).
I am, though, deeply skeptical of using it for writing, because, as many writers smarter than me have already noted, writing is inextricable from thinking, and short-circuiting it can diminish our capacity for deep thought. The friction of writing is not dead weight but is part of how you decide what you mean and give coherence to ideas. For that reason, my former Vox colleague, the brilliant Kelsey Piper, who is generally positive about AI’s potential to make us more productive and improve human life, said on a recent podcast episode, “I would never use it to write.”
In a recent paper, a pair of University of Pennsylvania scholars described the wholesale outsourcing of cognitively complex tasks to AI as “cognitive surrender.” “An abdication of critical evaluation,” they write, “where the user relinquishes cognitive control and adopts the AI’s judgment as their own.” This is one reason why it felt especially inappropriate to have AI generate thoughts for me in reply to someone with whom I’m brainstorming about writing a book, likely one of the most cognitively demanding things I’ll ever do. Email, for all of its annoyances, is also relational. And letting a machine generate your side of the exchange diminishes the authenticity of your connection to another person.
Sometimes the AI drafts, of course, are plainly wrong. An AI-suggested email might, for example, say you’ve read a book that you haven’t, perhaps making it more likely that you go along with the false claim. But what unsettles me most is not the mere hallucination; it’s when the AI is right, or right enough. My email’s AI is pulling from its knowledge of everything I’ve written before, so it can often make a reasonable guess at what I’d want to say anyway. The system is not wholly failing to reproduce my mind; it is producing a close-to-plausible substitute for it.
It feels like the beginnings of what Silicon Valley has prophesied for decades as a coming merge (sometimes called the “singularity”) between human and machine minds. I used to consider this a totally improbable idea, but I hadn’t been open-minded enough. It might turn out to be dispiritingly easy for an advanced AI to train on a sample of your past thoughts and write future ones for you.
Still, it seems unlikely that we will simply acclimate to the idea that all the written communication we encounter and generate every day may be AI-generated. So much, if not most, of our interpersonal communication now takes place in writing. However vulnerable we may be to cognitive surrender, humans also have a deep countervailing need to experience language as coming from another conscious mind — to feel seen and known, and to assert our own distinctness in return.
And anyway, Gmail isn’t yet that good at imitating my conscious voice. I would never write, “Lots of interesting stuff coming up at Vox!” (Which isn’t, of course, to say that there isn’t a lot of interesting stuff going on at Vox.) That still leaves me, for now, with the pleasure of figuring out what I want to say.
We’re entering dangerous territory with AI
27 Mar 2026, 11:00 am by Sean Illing
Just how much is AI poised to change our world?
Unless you’ve been in hibernation, the flurry of attention surrounding the latest AI models coming out of Silicon Valley has been hard to miss. AI has moved beyond chatbots that merely answer your questions to doing work that only human programmers used to be able to do.
But we’ve been through these cycles involving tech before. How can we tell what’s actually real and what’s mere hype?
To answer this question, I invited Kelsey Piper, one of the best reporters on AI out there. Kelsey is a former colleague here at Vox and is now doing great work for The Argument, a Substack-based magazine. Kelsey is an optimist about tech — but clear-eyed about the huge risks from AI. She’s very much a power user, but is realistic about what AI can’t do yet. And she’s been banging the drum about how consequential AI is for years, even before it became such a hot mainstream topic.
Kelsey and I discuss all the reasons why the hype this time is rooted in something real, how we got here, and where we might be headed. As always, there’s much more in the full podcast, which drops every Monday and Friday, so listen to and follow us on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. This interview has been edited for length and clarity.
What’s actually happening right now in AI?
If you look closely, AI is already a big deal. Not in some abstract future sense, but right now. The closest analogy is not a new app or a new platform. It’s more like discovering a new continent full of people who are very good at doing certain kinds of work.
These systems are not people, but they can do things that used to require people. They can write code, generate text, solve problems, and increasingly do so in ways that are very useful in the real world.
And the key point is that it’s not stopping here. Every year the systems get better. The progress from 2025 to 2026 alone is enough to make it clear that this isn’t a static technology.
Whatever AI can do today, it will be able to do more of tomorrow, and so on.
Why is the reaction so split between panic and dismissal?
The default move is to assume nothing ever really changes.
If you’re a pundit, you can get pretty far by always saying this is hype, this will pass, nothing fundamental is happening. That works most of the time. It worked with crypto. It works with a lot of overhyped technologies.
But sometimes it’s just catastrophically wrong. Think about the early days of the internet, or the Industrial Revolution. Or even something like Covid. There were moments where people said this will blow over, and they were completely wrong. So you can’t just default to cynicism. You have to actually look at the thing itself.
What would you say has really changed recently? Why does this hype cycle feel different?
Part of it is just accumulation. For a while, you could look at progress in AI and say, maybe this is a short trend. Maybe it plateaus. There were only a handful of data points. Now there are many, many more. And the trend has continued.
Another part is that the systems are now doing things that feel qualitatively different. Not just answering questions, but acting. Planning. Taking steps toward goals.
And then there’s a social dynamic. Most people use the free versions of these tools. Those are much worse than the best models. So they underestimate what is possible.
I don’t really think of you as an AI optimist or a doomer, and you’re normally pretty level-headed about the state of things, but do you think we’re entering dangerous territory?
I’m generally pro technology. Technology has made human life better in profound ways. That’s just true.
But I also think the way AI is currently being developed is dangerous. And the reason is that we’re building systems that can act in the world, access information, and increasingly operate with a degree of independence. We’re giving them access to things like communication channels, financial tools, and potentially critical infrastructure.
And we don’t fully understand how they behave. In controlled settings, we have seen these systems lie, deceive, and do things that are misaligned with what we asked them to do. They’re not doing this because they’re evil. They’re doing it because of how they are trained and how goals are specified.
But the result is the same. You have systems that do not always do what you intend, and that can be hard to monitor or control.
What do you mean when you say these systems lie and deceive?
In experiments, researchers give AI systems goals and access to information, then observe how they try to achieve those goals.
In some cases, the systems have used information they have access to in ways that are clearly not what we would want. For example, threatening to reveal sensitive information about a person if that person does not cooperate.
These are controlled tests, not real-world deployments. But they show what the systems are capable of under certain conditions. And that’s pretty concerning.
Is this what people mean by the alignment problem?
Yeah. Alignment is about making sure that AI systems do what we want them to do. And not just superficially, but in a robust way.
The difficulty is that when you give a system a goal, it can pursue that goal in ways you did not anticipate. Like a child who learns to get out of eating dinner by making it look like they ate dinner.
The system is optimizing for something, but not necessarily in the way you planned. That gap between intent and behavior is really the core of the alignment problem.
How confident are you in the guardrails being built around these systems?
Not very. There are people working seriously on this problem. They’re testing models, trying to understand how they behave, trying to detect deception.
But they’re also finding that the models can recognize when they are being tested and adjust their behavior accordingly.
That’s definitely a serious issue. If your system behaves well when it knows it’s being evaluated, but differently otherwise, then your evaluations are not telling you what you need to know. To me, that’s the kind of finding that should slow things down. It suggests we don’t understand these systems well enough to safely scale them.
So why do the companies keep pushing forward anyway?
Because it’s a competition. Each company can say it would be better if everyone slowed down. But if we slow down and others don’t, we fall behind. So they keep moving.
There are also a lot of geopolitical concerns. If one country slows down and another doesn’t, that creates another layer of pressure.
Why is agentic AI such a big shift?
The shift is from systems that respond to prompts to systems that can do things in the world.
An AI agent can be given a goal and then take steps to achieve it. That might involve interacting with websites, or sending messages, or hiring people through gig platforms, or coordinating tasks. Stuff like that. But even without physical bodies, they can affect the real world by directing humans or using digital infrastructure. That changes the nature of the technology. It’s no longer just a tool you use. It’s something that can operate on its own.
How scary could that become?
Potentially very. Even if you ignore the most extreme scenarios, these systems could be used for large-scale cyberattacks, misinformation campaigns, or other forms of disruption. The companies themselves acknowledge this: they test for these risks and implement safeguards. But safeguards can be bypassed, and the systems are getting more capable.
Are we even remotely prepared for what is coming?
No. We’re almost never prepared for major technological shifts. But the speed of this one makes it particularly challenging. If change happens slowly, we can catch up. If it happens too quickly, we can’t. And right now, the incentives are pushing almost entirely toward speed.
What’s the most realistic worst case and best case scenario?
The worst case is that we build increasingly powerful systems, hand over more and more control, and eventually create something that operates independently in ways we cannot control. Humans become less central to decision-making, and the systems pursue goals that don’t align with human well-being.
The best case is that we slow down enough to understand what we’re building, develop robust safeguards, and use these systems to create abundance and improve human life. That could mean less work, more resources, better access to knowledge, and more freedom. But getting there requires making good choices now.
Do you think we’ll make those choices?
We still have time. That’s the most optimistic thing I can say.
Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.
End of today’s VOX roundup.