
VOX Digest: March 26, 2026

This digest compiles the latest from VOX.

Today’s VOX Roundup

The Supreme Court is scared it’s going to break the internet

26 Mar 2026, 2:25 pm by Ian Millhiser


The Supreme Court tossed out a billion-dollar verdict against an internet service provider (ISP) on Wednesday, in a closely watched case that could have severely damaged many Americans’ access to the internet if it had gone the other way. 

Wednesday’s decision in Cox Communications v. Sony Music Entertainment is part of a broader pattern. It is one of a handful of recent Supreme Court cases that threatened to break the internet — or, at least, to fundamentally harm its ability to function as it has for decades. In each case, the justices took a cautious and libertarian approach. And they’ve often done so by lopsided margins. All nine justices joined the result in Cox, although Justices Sonia Sotomayor and Ketanji Brown Jackson criticized some of the nuances of Justice Clarence Thomas’s majority opinion.

Some members of the Court have said explicitly that this wary approach stems from a fear that they do not understand the internet well enough to oversee it. As Justice Elena Kagan said in a 2022 oral argument, “we really don’t know about these things. You know, these are not like the nine greatest experts on the internet.”

Thomas’s opinion in Cox does a fine job of articulating why this case could have upended millions of Americans’ ability to get online. The plaintiffs were major music companies who, in Thomas’s words, have “struggled to protect their copyrights in the age of online music sharing.” It is very easy to pirate copyrighted music online. And the music industry has fought online piracy with mixed success since the Napster Wars of the late 1990s.

Before bringing the Cox lawsuit, the music company plaintiffs used software that allowed them to “detect when copyrighted works are illegally uploaded or downloaded and trace the infringing activity to a particular IP address,” an identification number assigned to online devices. The software informed ISPs when a user at a particular IP address was potentially violating copyright law. After the music companies decided that Cox Communications, the primary defendant in Cox, was not doing enough to cut off these users’ internet access, they sued.

Two practical problems arose from this lawsuit. One is that, as Thomas writes, “many users can share a particular IP address” — such as in a household, coffee shop, hospital, or college dorm. Thus, if Cox had cut off a customer’s internet access whenever someone using that client’s IP address downloaded something illegally, it would also wind up shutting off internet access for dozens or even thousands of innocent people.

Imagine, for example, a high-rise college dormitory where just one student illegally downloads the latest Taylor Swift album. That student might share an IP address with everyone else in that building.

The other reason the Cox case could have fundamentally changed how people get online is that the monetary penalties for violating federal copyright law are often astronomical. Again, the plaintiffs in Cox won a billion-dollar verdict in the trial court. If these plaintiffs had prevailed in front of the Supreme Court, ISPs would likely have been forced into draconian crackdowns on any customer that allowed any internet users to pirate music online — because the costs of failing to do so would be catastrophic. 

But that won’t happen. After Cox, college students, hospital patients, and hotel guests across the country can rest assured that they will not lose internet access just because someone down the hall illegally downloads “The Fate of Ophelia.” Thomas’s decision does not simply reject the music industry’s suit against Cox; it nukes it from orbit.

Cox, moreover, is the most recent of at least three decisions where the Court showed similarly broad skepticism of lawsuits or statutes seeking to regulate the internet.

The Supreme Court is an internet-based company’s best friend

The most striking thing about Thomas’s majority opinion in Cox is its breadth. Cox does not simply reject this one lawsuit; it cuts off a wide swath of copyright suits against internet service providers.

Thomas argues that, in order to prevail in Cox, the music industry plaintiffs would have needed to show that Cox “intended” for its customers to use its service for copyright infringement. To overcome this hurdle, the plaintiffs would have needed to show either that internet service providers “promoted and marketed their [service] as a tool to infringe copyrights” or that the only viable use of the internet is to illegally download copyrighted music. 

Thomas also adds that the mere fact that Cox may have known that some of its users were pirating copyrighted material is not enough to hold it liable for that activity.

As a legal matter, this very broad holding is dubious. As Sotomayor argues in a separate opinion, Congress enacted a law in 1998 which creates a safe harbor for some ISPs that are sued for copyright infringement by their customers. Under that 1998 law, the lawsuit fails if the ISP “adopted and reasonably implemented” a system to terminate repeat offenders of federal copyright law.

The fact that this safe harbor exists suggests that Congress believed that ISPs which do not comply with its terms may be sued. But Thomas’s opinion cuts off many lawsuits against defendants who do not comply with the safe harbor provision.

Still, while lawyers can quibble about whether Thomas or Sotomayor has the better reading of federal law, Thomas’s opinion was joined by a total of seven justices. And it is consistent with the Court’s previous decisions seeking to protect the internet from lawsuits and statutes that could undermine its ability to function.

In Twitter v. Taamneh (2023), a unanimous Supreme Court rejected a lawsuit seeking to hold social media companies liable for overseas terrorist activity. Twitter arose out of a federal law permitting suits against anyone “who aids and abets, by knowingly providing substantial assistance” to certain acts of “international terrorism.” The plaintiffs in Twitter claimed that social media companies were liable for an ISIS attack that killed 39 people in Istanbul, because ISIS used those companies’ platforms to post recruitment videos and other content.

Thomas also wrote the majority opinion in Twitter, and his opinion in that case mirrors the Cox decision’s view that internet companies generally should not be held responsible for bad actors who use their products. “Ordinary merchants,” Thomas wrote in Twitter, typically should not “become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer.”

Indeed, several key justices are so protective of the internet — or, at least, so cautious about interfering with it — that they’ve taken a libertarian approach to internet companies even when their own political party wants to control online discourse.

In Moody v. NetChoice (2024), the Court considered two state laws, one from Texas and one from Florida, that sought to force social media companies to publish conservative and Republican voices that those companies had allegedly banned or otherwise suppressed. As Texas’s Republican Gov. Greg Abbott said of his state’s law, it was enacted to stop a supposedly “dangerous movement by social media companies to silence conservative viewpoints and ideas.”

Both laws were blatantly unconstitutional. The First Amendment does not permit the government to force Twitter or Facebook to unban someone for the same reason the government cannot force a newspaper to publish op-eds disagreeing with its regular columnists. As the Court held in Miami Herald Publishing Co. v. Tornillo (1974), media outlets have an absolute right to determine “the choice of material” that they publish.

After Moody reached the Supreme Court, however, the justices uncovered a procedural flaw in the plaintiffs’ case that should have required the justices to send the case back down to the lower courts without weighing in on whether the two state laws are constitutional. Yet, while the Court did send the case back down, it did so with a very pointed warning that the US Court of Appeals for the Fifth Circuit, which had backed Texas’s law, “was wrong.”

Six justices, including three Republicans, joined a majority opinion leaving no doubt that the Texas and Florida laws violate the First Amendment. They protected the sanctity of the internet, even when it was procedurally improper for them to do so.

This Supreme Court isn’t normally so protective of institutions

One reason why the Court’s hands-off-the-internet approach in Cox, Twitter, and Moody is so remarkable is that the Supreme Court’s current majority rarely shows such restraint in other cases, at least when those cases have high partisan or ideological stakes.

In two recent decisions — Mahmoud v. Taylor (2025) and Mirabelli v. Bonta (2026) — for example, the Court’s Republican majority imposed onerous new burdens on public schools, which appear to be designed to prevent those schools from teaching a pro-LGBTQ viewpoint to students whose parents find gay or trans people objectionable. I’ve previously explained why public schools will struggle to comply with Mahmoud and Mirabelli, and why many might find compliance impossible. Neither opinion showed even a hint of the caution that the Court displayed in Cox and similar cases.

Similarly, in Medina v. Planned Parenthood (2025), the Court handed down a decision that is likely to render much of federal Medicaid law unenforceable. If taken seriously, Medina overrules decades of Supreme Court decisions shaping the rights of about 76 million Medicaid patients, including a decision the Court handed down as recently as 2023 — though it remains to be seen if the Court’s Republican majority will apply Medina’s new rule in a case that doesn’t involve an abortion provider.

The Court’s Republican majority, in other words, is rarely cautious. And it is often willing to throw important American institutions such as the public school system or the US health care system into turmoil, especially in highly ideological cases.

But this Court does appear to hold the internet in the same high regard that it holds religious conservatives and opponents of abortion. And that means that the internet is one institution that these justices will protect.

Read full article →


4 reasons why AI (probably) won’t take your job

26 Mar 2026, 10:12 am by Eric Levitz


This story was originally published in The Highlight, Vox’s member-exclusive magazine. To get access to member-exclusive stories every month, become a Vox Member today.

AI is coming for the laptop class. While you clack away at your keyboard — writing code or drafting memos or making spreadsheets or scrolling X or perusing DoorDash or reading Vox or dreading death — machines are teaching themselves how to do your job.

Over the past four years, chatbots have gone from neat parlor tricks to hyperproductive polymaths. AI models can now generate new software out of a single English sentence, summarize case law in seconds, read CT scans with superhuman accuracy, and coordinate complex office workflows with scant human oversight. 

Large language models (LLMs) — today’s premier form of artificial intelligence — still have their limitations. They can’t reliably fulfill most white-collar workers’ every function. But AI progress is compounding on itself. As LLMs automate the process of building better LLMs, they will kick off a feedback loop of exponential self-improvement.

Key takeaways

  • Despite AI’s rapid advances, it still hasn’t substantially increased unemployment. 
  • You don’t necessarily have to outperform AI at your job in order to keep it.
  • The go-to evidence for exponential AI progress has serious methodological flaws.

Thus, by the end of next year — if not this one — AI will render much of America’s professional class obsolete and push unemployment to 20 percent. Within a decade, the technology could wipe out virtually all forms of knowledge work.

Or so many of AI’s champions and detractors believe. 

In recent weeks, the drumbeat of catastrophic labor-market forecasts has grown louder, with tech CEOs, financial analysts, and journalists penning viral predictions of an impending unemployment crisis.

In my view, the threat of AI-induced unemployment is worth taking seriously. And I’ve sketched out the case for alarm in past essays.

If the AI doomers’ concerns are warranted, however, their certainty is misplaced. Artificial intelligence could trigger mass white-collar layoffs in the near future. But there are plausible arguments against that scenario.

To inject some balance into the AI discourse — and/or, reassure myself that my hard-won verbal skills aren’t about to be less economically valuable than my flimsy biceps — I’ve sought out reasons for optimism about the white-collar labor market. Here are the four that I found most compelling:

1) You can see the AI age everywhere except in the jobs data

The first reason to doubt the doomer scenario for AI and unemployment is that it keeps not happening. 

Or, more precisely: Despite the astounding capacities of today’s LLMs, there still aren’t many signs of large-scale, AI-induced job loss. 

It takes time for firms to adopt new technologies, of course. But generative AI has been remarkably powerful for a while now. As of late 2024, it could already automate many coding tasks, generate research reports, write ad copy, review legal documents, and make terrible music at a near-human level.

Yet America’s unemployment rate has barely budged over the past two years, hovering near 4 percent.

Even in the industries most suited to AI-driven automation, employment shifts have been modest. Job postings for software developers have actually increased over the past year. Employment in market research, meanwhile, went up after ChatGPT hit the market. Even customer service representatives — arguably, the workers most threatened by chatbots — have not suffered massive job losses: Although employment in the field fell 10 percent from 2023 to 2024, it has held steady since then and remains close to its pre-pandemic level.

What’s more, there are few indications that mass, white-collar layoffs are on the horizon. In a December survey by the accounting firm KPMG, 92 percent of CEOs said they were planning to grow their head counts, even as 69 percent were dedicating a large share of their budgets to AI deployment. 

Similarly, a January survey from EY-Parthenon found that 69 percent of CEOs expected that AI would lead them to either maintain or expand their payrolls.

One could dismiss this as sunny bluster. But there is evidence that these executives’ ostensible intuition — that AI adoption and downsizing don’t necessarily go together — holds true in practice. In a study of 12,000 European businesses published in February, firms that adopted AI saw a 4 percent increase in labor productivity — yet did not reduce their staffing in response. 

Granted, if you scour the jobs data for portents of an AI-driven unemployment crisis, you can come up with a few. For one, between November 2022 and January 2026, America’s core white-collar industries — finance, insurance, information, and professional and business services — cut their staffing by 1.9 percent. This is unusual; outside of recessions, these sectors have historically added jobs at a steady rate. 

For another, a Stanford Digital Economy Lab study suggests that young workers in heavily AI-exposed fields have seen declining job prospects, relative to those in other sectors, since ChatGPT debuted. 

Forecasts of an impending white-collar “bloodbath” tend to put a lot of weight on these data points. And yet, both developments likely have less to do with AI adoption than with monetary policy. 

As two economists at Google recently observed, America’s most AI-exposed industries began to slash hiring six months before ChatGPT hit the market in November 2022. And white-collar job postings fell most precipitously in 2023, when corporate deployment of LLMs had barely begun; in the fourth quarter of that year, fewer than 10 percent of large businesses said they were even planning to use AI in the next six months.

This timeline is hard to square with the theory that AI drove the slowdown in white-collar hiring. By contrast, the timing neatly aligns with the Federal Reserve’s tightening cycle. 

In March 2022, the central bank began hiking interest rates at a historically aggressive pace. A little over one month later, job postings began to fall in white-collar fields. When the Fed paused its hikes in 2024, that decline bottomed out; when the central bank began cutting rates in 2025, job openings started rebounding. 

[Chart: federal funds rate vs. job postings by AI exposure quintile]

Critically, interest rate hikes disproportionately impact AI-exposed industries. The sectors most susceptible to artificial intelligence — tech, finance, and professional services — are also among the most sensitive to tightening financial conditions. And when companies come under strain, they often pause entry-level hiring.

A pullback in employment caused by the Fed could therefore look a lot like one triggered by LLMs.

None of this is to deny that artificial intelligence has reduced employment in some occupations (for example, AI is almost certainly implicated in the recent decline of computer programming jobs). The point is just that the overall labor market impacts have been remarkably modest, given the scale of AI’s current capacities. 

2) White-collar workers don’t need to outperform AI to remain economically valuable

The absence of a one-to-one correlation between increases in AI’s capabilities and declines in white-collar employment isn’t entirely surprising.

To remain economically valuable, a human worker does not need to outperform a machine at their job’s core tasks; they merely need to usefully complement that machine’s operations.

Consider translators. LLMs can convert text from one language to another at a speed and cost that no human could ever match. For many tasks, if corporations, authors, and publishers were forced to choose between having access to AI and having access to the world’s most gifted linguist, they would choose the bot.

And yet, a human translator working with an LLM still produces better text than the machine does by itself. While the latter blitzes through a first draft, the former can correct excessively literal translations of idiomatic expressions, tailor tone to the intended audience, and catch subtle errors that invite confusion or legal risk.

So long as human translators retain this utility, AI progress won’t necessarily reduce demand for their services. In fact, the technology could conceivably increase such demand.

That claim might seem unintuitive. After all, it surely takes fewer people to translate any given quantity of text in the age of generative AI than it did in years prior.

Yet humanity’s appetite for translated text is not fixed. If you drastically increase the efficiency of translation — and thus, reduce its cost — then people will purchase more of it.
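The economic logic here is price elasticity of demand: if cheaper translation leads people to buy proportionally more of it (elasticity greater than 1), total spending on translation rises even as its price falls. A minimal sketch of that mechanism, with illustrative numbers that are not estimates for the actual translation market:

```python
# Constant-elasticity demand sketch: q = k * p**(-eps).
# The numbers here are illustrative, not real translation-market data.

def demand(p, k=100.0, eps=1.5):
    """Quantity of translation demanded at price p, with elasticity eps."""
    return k * p ** -eps

for p in (10.0, 5.0, 2.5):  # AI halves the price, then halves it again
    q = demand(p)
    print(f"price={p:>5.1f}  quantity={q:7.1f}  total spending={p * q:7.1f}")

# With eps > 1 (elastic demand), each price cut raises the quantity demanded
# by enough that total spending on translation grows as the technology
# gets cheaper -- leaving room for human translators in the workflow.
```

The design point is the exponent: with an elasticity below 1, the same price cuts would shrink total spending, which is why efficiency gains squeeze some industries and expand others.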

And indeed, since the introduction of ChatGPT in 2022, demand for translation has surged. Perhaps for this reason, even as machines have come to match or exceed the skills of human translators across several dimensions, employment in the industry has grown in the European Union and stayed roughly level in the US. 

And you can tell a similar story about myriad other fields. 

AI can read medical images faster — and, for some types of cancer, more accurately — than any human. Still, a radiologist working with an AI yields better diagnoses than the machine working alone. And as LLMs have made radiology more efficient, demand for imaging has spiked — and with it, radiology employment.

3) People want some things done by people

In some domains, white-collar workers may retain an advantage over AI simply because they are human. 

As the economist Adam Ozimek notes, many contemporary occupations could have been automated out of existence long ago, were technology the only concern. We’ve had player pianos and recorded music since the late 19th century. Yet many hotels and bars still pay human beings to tickle the keys for their customers.

For decades, it’s been easy to book your own travel online, relying on aggregators like Expedia and reviews on Yelp. Yet 67,500 Americans still make a living as travel agents. Workout videos make it possible for anyone to perform yoga at home, yet many hire personal instructors. Mechanical reproductions of famous paintings can be had at a low cost, yet people shell out millions for visually indistinguishable versions that were produced by a specific human hand.

You could have asked ChatGPT to give you four reasons why AI won’t cause mass unemployment, and it would have instantly spit out a listicle. Instead, you’re reading an artisanally crafted explainer that Vox Media Inc. paid me to produce.

In other words, people are often willing to pay a premium for the “human touch.” 

This won’t preempt an AI-induced employment crisis, all by itself. Consumers don’t typically care how their smartphone apps were coded or insurance claims were processed or tax returns were prepared. But a market for explicitly human-produced goods and services is likely to persist in many realms — including sales, medicine, legal services, and entertainment.

Heck, there might even be durable demand for journalism that’s conspicuously free of AI’s bizarre syntactical tics. That’s not just cope — it’s a serious possibility. 

4) AI progress won’t necessarily be exponential

All these arguments may count for little, if AI’s capacities are truly growing at an exponential rate. 

After all, exponential processes tend to creep up on you. When 32 cases of a supervirus become 64, almost no one notices. If that bug keeps doubling every couple days, however, the world will wake up a month later to 2.1 million infections. In that scenario, a glance at the pathogen’s impact on day three would have told you little about its consequences four weeks later. 
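The arithmetic behind that example: once 32 cases have become 64, a month of doubling every two days means 15 further doublings, and 64 × 2¹⁵ ≈ 2.1 million. A quick sketch:

```python
# Doubling arithmetic behind the supervirus example: 64 cases that
# double every 2 days for 30 days undergo 15 doublings.

cases, doubling_days, horizon_days = 64, 2, 30

doublings = horizon_days // doubling_days  # 30 days / 2 days per doubling = 15
final = cases * 2 ** doublings             # 64 * 32,768

print(f"{doublings} doublings -> {final:,} cases")  # 15 doublings -> 2,097,152 cases
```

The same compounding is why a snapshot taken on day three of an exponential process says almost nothing about day thirty.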

In a world where AI progress is exponential, similar principles apply. Look around three years after ChatGPT’s debut and you might see little job loss. But if artificial intelligence is recursively self-improving — such that every advance accelerates the next — then today’s AI is only a pale imitation of 2030’s. The former may be to the latter as a hot-air balloon is to a space shuttle.

If so, then examining AI’s impact on jobs over the past four years wouldn’t shed much light on its effects over the next four. Likewise, the fact that white-collar workers can usefully complement AI today would scarcely guarantee their utility in the future.

But it’s not clear that AI has actually been improving at an exponential rate, much less that it will keep doing so, for years on end.

Without question, LLMs’ capabilities have been growing rapidly. But claims that this progress has been exponential tend to rest on a single, widely cited benchmark. 

The AI research institute METR has long been the authority on the speed of AI progress. To gauge that pace, it tracks the duration of tasks that LLMs can complete with at least 50 percent accuracy. In this context, duration is measured by how long it would take a skilled human worker to complete the same assignment. 

METR’s charts of how this has changed over time are ubiquitous in discussions of AI. And the trends are eye-popping. 

[Chart: the length of tasks AI can do is doubling every 7 months]
[Chart: the time horizon of software tasks different LLMs can complete 50% of the time]
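A doubling-time figure like “every 7 months” is typically recovered by a log-linear fit: regress the log of each model’s task horizon on its release date, then invert the slope. A minimal sketch of that calculation, using made-up data points rather than METR’s actual measurements:

```python
# Estimating a doubling time from (release month, task-horizon) points via
# log-linear least squares. The data points below are invented for
# illustration -- they are NOT METR's published measurements.
import math

# (months since an arbitrary baseline, 50%-success task horizon in minutes)
points = [(0, 10), (7, 20), (14, 40), (21, 80)]  # a perfect 7-month doubling

xs = [t for t, _ in points]
ys = [math.log2(h) for _, h in points]  # log base 2: slope = doublings/month

n = len(points)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

doubling_time = 1 / slope  # months per doubling
print(f"doubling time = {doubling_time:.1f} months")
```

Note how much the fit leans on the input points: change which tasks are in the benchmark, or how the human baselines are timed, and the estimated doubling time moves with them — which is exactly the critique that follows.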

Faced with these vertiginous slopes, many jump straight to wondering whether they will enjoy life as a “machine God’s” pet — forgetting to first ask themselves, “Wait, how does METR know that?”

Which is unfortunate, since the short answer is it doesn’t.

METR isn’t spying on every white-collar laborer in America, implanting bugs and honeypots in their break rooms, so as to determine how long it takes each worker to perform their jobs’ tasks. 

Rather, to generate its estimates, the institute presents human software engineers with a bucket of coding assignments, measures how long they take to complete their tasks, and then sees whether AI models can perform the same feats. Through this process, METR estimates that the latest version of Claude can autonomously perform tasks that would take a skilled worker up to 14.5 hours to execute. 

And yet, as NYU’s Nathan Witkin argues, there are massive problems with METR’s methodology, defects that severely limit what its findings can actually tell us about AI’s capabilities. To name just a few:

METR’s tasks are unrealistically basic. In METR’s own analysis, the bulk of their sample tasks differ from real-world engineering problems in systematic ways. Specifically, the former occur in static environments, require no coordination with other people (or agents), and include few resource constraints. METR also largely excluded tasks in which a single mistake could derail the entire project, so as to “reduce the expected cost of collecting human baselines.”

When the institute charted AI’s progress on its “messiest” tasks — which is to say, its most realistic ones — this was the result:

[Chart: AI time horizon on METR’s messiest tasks, at 50% success]

Viewed like this, AI progress does not look terribly exponential.

METR’s human baselines are unreliable. The sample of engineers who established METR’s baseline for human performance was neither large nor representative. Rather, as of 2025, its testing included only 140 people, recruited primarily from METR staffers’ professional networks. 

More critically, on the more complex tasks, these recruits were often operating outside of their areas of expertise. In real life, these assignments would typically be handled by specialists, who would surely complete them more rapidly than random engineers with little domain knowledge.

Making matters worse, METR paid its baseliners on a per-hour basis, giving them an incentive to drag out their tasks. 

AI could have simply memorized the answers to many of its assigned tasks. About one-third of the tested tasks had publicly available solutions. For these assignments, the models may have just been recalling answers they had encountered on the internet, in which case their success wouldn’t necessarily reflect growth in their general capabilities. (If a high-school student gains access to a calculus test in advance and memorizes the answer to a problem, their performance on that problem wouldn’t tell us much about their general math skills.)

None of this is meant to disparage METR’s intentions, or to suggest that its data has zero utility. The pace of AI progress is not an easy thing to measure. And the organization is making an admirable effort. 

Still, the fact that its charts are AI boosters’ (and doomers’) go-to evidence for exponential progress — despite the extreme limitations of its figures — calls the existence of that progress into question.

Moreover, even if we knew that AI has been improving exponentially over the past three years, we still couldn’t take a continuation of that trend for granted. Technologies routinely improve at an exponential rate for a period, only to stall out at a certain level of capability.

Machines might still replace us

These arguments don’t prove that the laptop class is going to be fine. They merely offer a basis for believing that it might be.

Indeed, everything I just wrote could be true — and AI could still drastically erode knowledge workers’ economic prospects. 

Even if most white-collar laborers still usefully complement AI, a large minority may not. Meanwhile, those who remain employable might command drastically lower wages than they once did: When building software merely requires the ability to write instructions in plain English — rather than mastering complicated coding languages — programmers’ bargaining power may plummet.

And while AI-driven productivity gains might increase demand for certain goods and services, Americans’ latent appetite for tax advice, HR compliance audits, and contract review is not infinite. In these areas, AI’s boosts to efficiency are liable to yield job losses.

Finally, AI might not be improving at an exponential rate. But over time, linear gains may be sufficient to drastically reduce knowledge workers’ economic utility.

All this said, as the world’s most influential business leaders and intellectuals discuss the impending elimination of white-collar work as though it were no more hypothetical than tomorrow’s sunrise, it’s worth keeping their narrative’s liabilities in mind: This doomsday scenario has scant support in existing employment trends, sits in tension with multiple economic principles, and relies on dubious assumptions about the pace of AI progress.

In other words, while it’s past time for policymakers to prepare for AI-induced unemployment spikes, we knowledge workers don’t yet need to toss our keyboards and learn to plumb.

Read full article →

End of today’s VOX roundup.
