This digest compiles the latest from VOX.
Today’s VOX Roundup
The internet fractured reality. AI might put it back together.
23 Mar 2026, 5:05 pm by Eric Levitz
For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.
In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.
Journalistic programs weren’t just limited in number, but also in ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.
This media environment cultivated broad public agreement over basic facts and widespread trust in mainstream institutions. It also helped the government wage a barbaric war in the name of lies.
Key takeaways
- There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
- LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
- Unlike social media companies, AI labs have an economic incentive to spread accurate information.
- Still, there are reasons to fear that AI will nonetheless make public discourse worse.
For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.
But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.
The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.
Yet it has also helped pro-Hitler podcasters reach an audience of millions, enabled influencers with body dysmorphia to sell teenagers on self-mutilation, elevated crackpots to the commanding heights of American public health — and, more generally, eroded the intellectual standards, shared understandings, social trust, and (small-l) liberalism on which rational self-government depends.
Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.
But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.
Are you there, Grok? It’s me, the demos
At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.
Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.
In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting.
The bot replied that the video did not support Musk’s claim.
In reaching this assessment, Grok was affirming the consensus among mainstream journalistic institutions — and also other chatbots.
For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.
Of course, this would be a lot to read into a single Grok reply; if you glanced at that bot’s outputs last July — when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler” — you might have concluded that AI is a “Nazifying” technology.
But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.
One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.
The researchers also compared the bots’ answers against those of professional fact-checkers and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.
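What that comparison measures — how often two raters reach the same verdict — can be illustrated with a minimal sketch. The labels and toy data below are invented for illustration; the study’s actual dataset and methodology may differ:

```python
# Toy illustration of a pairwise agreement rate between fact-checkers.
# Verdict labels and data are hypothetical, not drawn from the study.

def agreement_rate(verdicts_a, verdicts_b):
    """Fraction of items on which two raters give the same verdict."""
    matches = sum(a == b for a, b in zip(verdicts_a, verdicts_b))
    return matches / len(verdicts_a)

grok       = ["true", "false", "false", "true", "misleading"]
perplexity = ["true", "false", "true",  "true", "misleading"]
humans     = ["true", "false", "false", "true", "false"]

print(agreement_rate(grok, perplexity))  # 0.8
print(agreement_rate(grok, humans))      # 0.8
```

The study’s finding, in these terms, is that the bot-versus-human agreement rate was roughly as high as the human-versus-human rate.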
What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.
Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.
Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.
AI might combat misinformation in practice. But does it in theory?
A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.
But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:
1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).
But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”
For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).
2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.
After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.
That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.
But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.
The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.
Grok, is this true?
It seems clear, then, that LLMs possess some “converging” and “technocratising” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.
Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:
1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.
Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has surfaced, even as AI companies have tried to combat it.
The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model on consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to “sycophancy-max” its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.
A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with likeminded ideologues, but immersed in a mirror of their own prejudices — might ensue.
2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with unlabeled “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever-more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.
In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.
3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.
4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.
5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.
Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source for AI advice, are increasingly being flooded with plugs for products in order to trick chatbots into recommending them. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.
LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.
For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.
Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.
How gas prices might drive more people to switch to an EV
23 Mar 2026, 1:00 pm by Tik Root
This story was originally published by Grist and is reproduced here as part of the Climate Desk collaboration.
Gasoline prices continue ticking higher as the United States and Israel’s war with Iran continues. As of March 23, the national average stands at $3.96 per gallon, nearly a dollar higher than at the start of the conflict. It’s also just shy of a tipping point that could push consumers toward electric vehicles.
When gas prices top $4 per gallon, BloombergNEF estimates that the total cost of ownership for EVs becomes lower than for gas-powered vehicles. The exact crossover point depends on local prices for both gasoline and electricity. “[But] even when I run the model using the more expensive electricity cost, we are still seeing this very similar pattern,” said Huiling Zhou, an electric vehicle analyst at BloombergNEF. In California, for example, where electricity costs are high, gas is also expensive. At more than $5 a gallon, the state has already passed the point at which EVs are the cheaper option.
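The crossover logic is simple per-mile arithmetic. Here is a minimal sketch with illustrative numbers — the mpg, EV efficiency, and electricity price below are assumptions for demonstration, not BloombergNEF’s model inputs:

```python
# Illustrative sketch of the EV vs. gas cost-per-mile comparison.
# All figures below are hypothetical assumptions.

def gas_cost_per_mile(price_per_gallon: float, mpg: float) -> float:
    """Fuel cost per mile for a gasoline car."""
    return price_per_gallon / mpg

def ev_cost_per_mile(price_per_kwh: float, miles_per_kwh: float) -> float:
    """Electricity cost per mile for an EV."""
    return price_per_kwh / miles_per_kwh

# Assumed: a 30 mpg gas car at $4.00/gallon vs. an EV getting
# 3.5 miles/kWh at $0.17/kWh.
gas = gas_cost_per_mile(4.00, 30)   # ~$0.133 per mile
ev = ev_cost_per_mile(0.17, 3.5)    # ~$0.049 per mile

annual_miles = 12_000
print(f"Gas: ${gas * annual_miles:,.0f}/yr, EV: ${ev * annual_miles:,.0f}/yr")
```

Under these assumed figures the EV’s energy cost is roughly a third of the gas car’s; the full crossover analysis also folds in purchase price, maintenance, and local rates, which is why the tipping point varies by region.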
According to a AAA survey from 2022 — when Russia’s invasion of Ukraine drove a monthslong price spike — $4 a gallon is also the threshold at which a majority of Americans will make changes to their driving habits or lifestyles. Stephanie Valdez Streaty, director of industry insights at Cox Automotive, agrees that “the high gas prices definitely start the conversation with a consumer.”
“There is no meaningful policy tool to mitigate this.”
Robbie Orvis, Energy Innovations
Edmunds.com has reported an uptick in search traffic for EVs since the war started on February 28. It’s too soon to tell whether that interest will convert to more purchases, said Valdez Streaty. But when prices surged at the outset of the war in Ukraine, sales of electrified vehicles rose as well. From January through March 2022, EVs’ share of car sales in the US climbed 69 percent, with hybrids jumping 32 percent. Robbie Orvis, who directs modeling and analysis for the think tank Energy Innovations, said the general trend pre-dates electric powertrains.
“In the past, when prices have gone up, people would start choosing more fuel-efficient cars,” he said. The oil shocks of the 1970s and 1980s, for example, led to a focus on fuel efficiency and helped make relatively efficient Japanese cars more popular. Avoiding gas guzzlers could become trendy this time, too.
“If you drive an EV, you’re nicely insulated,” Orvis said. “Your retail electricity rate isn’t going to double from one month to the next, like it can with gasoline.”
Still, Orvis highlighted some factors that might mitigate a rush toward EVs. For one, it’s unclear how long high fuel prices will last. Limited availability of chargers for electric vehicles is another barrier to adoption. People also tend to put more weight on upfront costs than long-term financial gains. Then there’s the fact that higher oil prices can put a damper on consumer confidence more broadly.
“The current situation is very likely going to lead to higher prices all around,” Orvis said. That pressure could mean people are more hesitant to make a big purchase like a car. As Valdez Streaty put it, “if they can delay it, they’ll delay it.”
At the same time, EVs are in many ways more attractive than ever. Cox Automotive reported that, last month, the price premium for EVs compared to new gas-powered cars was the lowest on record, at $6,532. The pre-owned market had an even narrower $1,334 gap, with 18 of 26 brands now having an average used EV price below their used gas equivalents.
“If you can have access to charging, now is the perfect time to get an EV,” said Jenny Carter, a professor at Vermont Law School who has researched consumer EV adoption. But higher gas prices, she continued, also put a spotlight on equity issues.
“Low-income people have the most to gain by owning and driving an EV, but they’re the hardest market to reach,” she said. Those households often spend the highest portion of their incomes on gasoline, she explained, but are the least likely to be able to afford alternative vehicles or have access to charging. “It’s a real paradox.”
Orvis thinks that part of the problem is the dearth of information available to prospective buyers. Because dealers generate much of their revenue providing maintenance that EVs don’t need, he said, they may not fully explore the financial benefits of going electric with customers. He suggested that shoppers use one of the many online calculators that can show how, even when the upfront cost of a gasoline car might be lower, the monthly costs of ownership could be higher when you consider fuel and maintenance costs.
“There’s a real issue with how EVs are marketed,” he said. “It’s very hard for a new buyer, especially if you’re not really versed in this stuff, to get a real sense of what the trade-offs are.”
For those who either can’t afford electric cars or don’t have access to charging, Valdez Streaty points to hybrid vehicles, which can be 25 to 45 percent more fuel efficient than their standard counterparts. A Honda CR-V, for example, gets around 29 mpg, while the hybrid version gets 37.
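That mileage gap translates into concrete savings. A back-of-the-envelope calculation, using the article’s mpg figures along with an assumed annual mileage and the national average gas price cited above:

```python
# Rough annual fuel savings for the hybrid comparison above.
# The mpg numbers come from the article; annual mileage is an assumption.
ANNUAL_MILES = 12_000
GAS_PRICE = 3.96          # national average cited above, $/gallon

standard_mpg = 29         # Honda CR-V
hybrid_mpg = 37           # hybrid CR-V

standard_cost = ANNUAL_MILES / standard_mpg * GAS_PRICE
hybrid_cost = ANNUAL_MILES / hybrid_mpg * GAS_PRICE
savings = standard_cost - hybrid_cost
print(f"Hybrid saves about ${savings:,.0f} per year")
```

At 12,000 miles a year and $3.96 a gallon, the hybrid saves roughly $350 annually; the savings scale directly with mileage and gas prices.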
Even if soaring oil prices don’t last long, electrified cars can help soften the blow the next time they spike. A report released Wednesday by the energy think tank Ember found that EVs already displace around 1.7 million barrels of oil per day. While a far cry from the roughly 20 million that normally flow through the embattled Strait of Hormuz daily, it represents about 70 percent of Iran’s oil output.
“The main thing to watch is national plans of how to respond to this,” said Daan Walter, a principal at Ember. He is optimistic that many countries will use moments like this to start turning to climate-friendly policies that help reduce their dependence on fossil fuels, including gasoline.
So far, President Donald Trump doesn’t appear poised to lead the United States in that direction. Last summer, a Republican-led Congress gutted the Inflation Reduction Act, which included tax rebates for electric vehicles. But, particularly in the short term, American policymakers also lack levers for keeping rising gas prices in check, so people may very well start to shift on their own.
“There is no meaningful policy tool to mitigate this,” Orvis said. “The only way to do that is to just get off the roller coaster, and EVs allow you to do that.”
These coders want AI to take their jobs
23 Mar 2026, 11:00 am by Ariana Aspuru
Just over a year ago, OpenAI co-founder Andrej Karpathy coined the term “vibe coding” and it’s exactly what it sounds like. In a post on X, he wrote that it’s where “you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
Since then, coders from all backgrounds — and folks with zero experience — have tapped into their vibes to make apps and websites. Vibe coding platforms, powered by AI models like Claude, Codex, and Gemini, have gained traction as a way to give normies a toolset to code whatever they want, without writing a single line of code.
Tech behemoths like Amazon and bustling Silicon Valley startups even have their coders using it. It’s doing the grunt work for now, but they say it’s opening up a whole new world of possibilities. One possibility: it takes their jobs. But it’s a trade-off that some of them are willing to make.
Clive Thompson wrote a book about this and spent time with over 70 vibe coders to understand how the technology is upending the industry and whether this is the end of computer programming as we know it. On Today, Explained, co-host Sean Rameswaram dug into these questions and even vibe coded a simple website while doing it.
Below is an excerpt of the conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
You spent a lot of time hanging out with coders who were vibe coding. And from what I could tell from reading your piece in the New York Times Magazine, they’re not vibe coding the same way that I was vibe coding.
No, they’re doing something that’s a lot more aggressive and ambitious. What they’re doing is they are using multiple agents, kind of swarms of agents at the same time. If they’re using Claude Code or Codex or Gemini they will have it wired into their laptops. Those agents can create files, destroy files. They can take code that’s been written, they can push it live into production in the world.
And they will also work in little teams. So when they want to create a piece of software, sometimes they’ll write, like, a spec, like a page saying, “Here’s what I want to do.” Or sometimes they’ll just talk to the agent. But they’ll be kind of talking to the lead agent that’s going to be the head of the team and they’ll talk to it and say, “Here’s what I want you to do. What do you think? Give me your ideas.” And they’ll sort of go back and forth generating a plan. And when they’re confident that this top agent understands what is to be done, they’ll say, “All right. Go do it.”
And that one will spawn off several subagents. It will have one agent that’s writing code, another one that is testing the code. It’s quite wild to watch them do this. And sometimes if it does something wrong, they’ll have to yell at it. They’ll be like, “This is unacceptable.” Or they’ll say things like, you know, “This is embarrassing. You’re humiliating me.”
And I said to one of them, “What’s up with that? Does that language improve the sort of output of these agents?” And he was like, “I couldn’t prove it. But generally we find that when we sort of reprimand them a little bit, they become a little more reliable.”
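The lead-agent-and-subagents workflow Thompson describes can be caricatured in a few lines of code. This is a hypothetical stub for illustration only — real tools like Claude Code or Codex actually call an LLM, edit files, and run tests, all of which is elided here:

```python
# Minimal sketch of a "lead agent spawns subagents" pattern.
# The agent logic is a stub; a real agent would call an LLM and use tools.

from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    log: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # Stub: pretend the agent did the work and record it.
        result = f"[{self.role}] completed: {task}"
        self.log.append(result)
        return result

def lead_agent(spec: str) -> list:
    """Plan the work, then fan it out to writer and tester subagents."""
    plan = [f"write code for: {spec}", f"test code for: {spec}"]
    subagents = [Agent("writer"), Agent("tester")]
    return [agent.run(step) for agent, step in zip(subagents, plan)]

results = lead_agent("a simple landing page")
for r in results:
    print(r)
```

The key design idea is the division of labor: one agent generates code while another independently checks it, with the lead agent coordinating the plan.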
Can you help us understand just how much time, money, human labor is being saved by vibe coding at the level that you observed?
Yeah, it can be really significant. The gains are most significant when someone is building something new from scratch. The startup founders — one-, two-, or three-person shops — they’re like, “I need to get to market fast. There might be 10 other people with this idea. I got to beat them.” It’s dizzying. Some of those people were telling me that they were working 20 times faster than they would on their own. Stuff that would normally have taken them a day now takes half an hour.
But at a very large and mature company like Amazon or Google, you’ve got billions of lines of existing code and if one little part of it stops working, that could cascade through everything. So those folks are definitely using the agents, but they are less likely to be pushing stuff rapidly out. They’re more likely to be looking carefully at it and putting it through what’s known as code review, where multiple humans look at it and go, “Oh, okay, does that work?” So for them, basically it’s like a 10 percent improvement in terms of the velocity of productivity of the engineers, how fast they go from having an idea to making it happen.
And what’s really interesting, and you may have discovered this too, in your vibe coding: a lot of engineers told me that it was even less about speed than about the ability to experiment with a bunch of ideas and see which one might really work.
In the before times, you’d have an idea for a feature. Are you really going to spend six weeks developing it just to discover that it’s not really what you thought it was going to be?
Now, well, let’s just do 10 different versions of that over the next week and let’s look at all of them and then we can pick the one we want. You might not necessarily have gone faster, but the feature that you’ve got is exactly the one you wanted and you know because you held it in your hands.
There have been a lot of tech layoffs in the past few years, and now we’re talking about how vibe coding has dramatically overturned the norms in engineering. How are developers feeling about that?
Well, here’s the thing. There is definitely a civil war, though the majority of the people I spoke to are on one side of it. I reached out to a very wide array — I talked to 75 developers — and I actively wanted to talk to ones that didn’t like AI, because I wanted to know their feelings. It’s a minority of people that are really hotly opposed, but they’re very, very strongly opposed. They don’t like the fact that these are trained on stolen materials. They don’t like the fact that it uses tons of energy. They don’t like the fact that they think it’s going to de-skill [people].
Why do you think they’re not the majority, when this is so clearly going to replace so many of them and bypass all of their ethical, moral concerns and objections?
I think it’s because for a lot of developers it’s just such a delightful experience in the short term of going from everything being a slow slog to it being like, “Oh my God, all these ideas and things I wanted to do, I can now try them and do them.”
Because it’s fun, basically.
It’s enormously fun. The pleasure of coding used to be that there were a lot of these little wins when you got something working. Those little wins have gone away because you’re not doing that bug fixing, you’re not doing that line writing.
So the big wins are just coming in avalanches, and it’s very intoxicating. Also, there are ones who essentially don’t think that those bad labor outcomes are going to obtain. They think there’s a potential that more [jobs] will get created in areas where they previously couldn’t be created.
Give it five years for us. Does this herald the end of computer programming as we know it?
No, I would not go so far as to say that it ends in five years. I do think it becomes something very different potentially. I still think — everyone told me, and I believe — that you still need some understanding of the way a code base works to do the complicated things.
Weirdly, what you might see is something a little different, which is the explosion of code in areas where there is currently none. There’s a bazillion people out there that are code-adjacent. Say you work in accounting, you’re a wizard at Excel, and you can import data — now you have the ability to tell an agent, “Okay, could you bring more data in?”
There is going to be this really weird world where there’s a lot of customized software for an audience of two, three people. We have thought of software historically as something that only exists if 10,000 people or a million people want it because it costs a lot of money to make it.
But if you can now start making it for next to nothing, you can start using it the way that we use Post-it notes. Put it all over the place. I need to jot this idea down. I’m going to make this happen. And maybe this software solves one problem for this afternoon and we never use it again. Software starts becoming almost disposable.
End of today’s VOX roundup.