
VOX Digest: February 5, 2026

This digest compiles the latest from VOX.

Today’s VOX Roundup

The messy truth about TikTok’s Trump-aligned takeover

5 Feb 2026, 4:15 pm by Jonquilyn Hill


It’s been just over a week since TikTok’s US operations were transferred to new owners. And it’s been a mess ever since.

At the government’s urging, TikTok’s parent company, ByteDance, sold the app to a mostly American group of investors, including the enterprise software giant Oracle (founded by Trump ally Larry Ellison), MGX (an Abu Dhabi-based firm also involved in Trump crypto ventures), and the private equity firm Silver Lake.

But since the new owners took control, the app has seen major outages and malfunctions, claims of censorship, and an uproar over its updated terms of service.

Today, Explained guest host Jonquilyn Hill sat down with David Pierce, editor-at-large at The Verge, to break down people’s concerns about TikTok’s new owners and what this may mean for people’s experiences on the social media app in the future. Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

American TikTok has new owners now, and almost immediately after they took over, people started reporting issues with the app. I wanna start with the big one. People said that they were being censored. What’s going on there, and what are the complaints?

That is the big one. It’s also the most complicated one to sort through, because fundamentally it’s about feelings. So a thing to understand is that everybody has always believed they’re being censored on social media. Since time immemorial, this is the story of social media. What’s happening on TikTok at this particular moment is, I believe, less about censorship and more about normal internet problems.

There were a lot of people reporting that they would upload videos about what was happening in Minneapolis and those videos would get no views, or would not upload properly at all. There were people saying that if you DM’d the word “Epstein” to somebody else, it wouldn’t go through. All of this is more easily and just as successfully explained by normal corporate ineptitude.

Such as?

TikTok’s new data center provider, Oracle, had a huge outage. What we think we know is that it was a big data center in Virginia that had what they called a weather-related issue. They’ve had big issues at the data center and that seems to be the actual culprit here. 

There are lots of good reasons to be worried about censorship. There are lots of potential censorship problems coming to TikTok. But rationally speaking, the odds that this new group took over TikTok and immediately smashed a big red “censorship” button are pretty low.

Is there a way for us to actually know? I mean, people are pretty skeptical of TikTok right now.

I think one useful analog here is when Elon Musk bought and took over Twitter. When he took over, he just said out loud all of the changes he was intending to make, right? And this was after years of conservatives, in particular, saying that they were being censored by Twitter’s existing leadership.

So, Elon Musk comes in and essentially says, I’m going to reverse that. And then does a bunch of very obvious things. So, I think there is a version of this that feels very obvious. It’s just that for right now, there are better, simpler sort of Occam’s Razor-y explanations for what’s going on.

What about the new terms of service that folks had to agree to?

This is a tricky one because one of the very funny things about terms of service on apps like these is that they’re always terrifying, and they’re often terrifying for totally non-terrifying reasons. 

What happened in this case is there are some new things in the terms of service. The new TikTok US is going to collect more precise location data if you allow it. It also gives TikTok permission to collect a bunch of data around kind of nebulous AI things that make it clear they’re gonna do a lot of sort of gen AI stuff inside of TikTok, and that’s data they can collect. 

But, actually, that has been in TikTok’s terms of service for some time. Still, I think it is reasonable to be alarmed that this data is going to be collected by a new group of people.

All of this is the business side. But will my experience on the app change now?

The one thing that no other platform has done a good job of replicating [is TikTok’s algorithm]. But now, one of the stipulations of this deal is that there has to be some meaningful separation of that algorithm from Chinese control. The new owners are going to “retrain, test, and update” the algorithm. That is a very vague phrase, but it means the algorithm is going to change in some way. We just won’t see [how] for a while.

Read full article →


AI agents could change your life — if they don’t ruin it first

5 Feb 2026, 1:00 pm by Adam Clark Estes


Some smart people think we’re witnessing another ChatGPT moment. This time, folks aren’t flipping out over an iPhone app that can write pretty good poems, though. They’re watching thousands of AI agents build software, solve problems, and even talk to each other.

Unlike ChatGPT’s ChatGPT moment, this one is a series of moments that spans platforms. It started last December with the explosive success of Claude Code, a powerful agentic AI tool for developers, followed by Claude Cowork, a streamlined version of that tool for knowledge workers who want to be more productive. Then came OpenClaw, formerly known as Moltbot, formerly known as Clawdbot, an open source platform for AI agents. From OpenClaw, we got Moltbook, a social media site where AI agents can post and reply to each other. And somewhere in the middle of this confusing computer soup, OpenAI released a desktop app for its agentic AI platform, Codex.

This new set of tools is giving AI superpowers. And there’s good reason to be excited. Claude Code, for instance, stands to supercharge what programmers can do by enabling them to deploy whole armies of coding agents that can build software quickly and effortlessly. The agents take over the human’s machine, access their accounts, and do whatever’s necessary to accomplish the task. It’s like vibe coding but on an institutional level. 

“This is an incredibly exciting time to use computers,” says Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, where he teaches a popular class on AI. “That sounds so dumb, but the excitement is there. The fact that you can interact with your computer in this totally new way and the fact that you can build anything, almost anything that you can imagine — it’s incredible.”

He added, “Be cautious, be cautious, be cautious.”

That’s because there is a dark side to this. Letting AI agents take over your computer could have unintended consequences. What if they log into your bank account or share your passwords or just delete all your family photos? And that’s before we get to the idea of AI agents talking to each other and using their internet access to plot some sort of uprising. It almost looks like it could happen on Moltbook, the Reddit clone I mentioned above, although there have not yet been any reports of a catastrophe. But it’s not the AI agents I’m worried about. It’s the humans behind them, pulling the levers.

Agentic AI, briefly explained 

Before we get into the doomsday scenarios, let me explain more about what agentic AI even is. AI tools like ChatGPT can generate text or images based on prompts. AI agents, however, can take control of your computer, log into your accounts, and actually do things for you. 
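In code terms, the difference is a loop. A chatbot makes one model call and hands you text; an agent keeps calling the model and executing whatever actions it proposes until the job is done. Here is a minimal sketch in Python. The call_llm function is a hypothetical stand-in (a canned fake so the sketch runs), not any particular product’s API, and real agent harnesses wrap the dangerous middle step in sandboxing and permission prompts.

```python
# Minimal sketch of an agent loop. A chatbot stops after one model call;
# an agent keeps acting on the results until the model says it's done.
import subprocess

def call_llm(history):
    """Hypothetical stand-in for a real model API (canned so this runs)."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"type": "tool", "command": "echo hello from the agent"}
    return {"type": "answer", "text": "Done: " + history[-1]["content"].strip()}

def run_agent(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_llm(history)
        if step["type"] == "answer":          # the model says it's finished
            return step["text"]
        # The model asked to run a shell command. This step is what makes
        # agents powerful and risky: it executes on *your* machine,
        # with *your* access.
        result = subprocess.run(step["command"], shell=True,
                                capture_output=True, text=True)
        history.append({"role": "tool", "content": result.stdout + result.stderr})
    return "stopped after max_steps"

print(run_agent("say hello"))  # -> Done: hello from the agent
```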

We started hearing a lot about agentic AI a year or so ago when the technology was being propped up in the business world as an imminent breakthrough that would allow one person to do the job of 10. Thanks to AI, the thinking went, software developers wouldn’t need to write code anymore; they could manage a team of AI agents who could do it for them. The concept jumped into the consumer world in the form of AI browsers that could supposedly book your travel, do your shopping, and generally save you lots of time. By the time the holiday season rolled around last year, none of these scenarios had really panned out in the way that AI enthusiasts promised. 

But a lot has happened in the past six or so weeks. The agentic AI era is finally and suddenly here. It’s increasingly user-friendly, too. Things like Claude Cowork and OpenAI’s Codex can reorganize your desktop or redesign your personal website. If you’re more adventurous, you might figure out how to install OpenClaw and test out its capabilities (pro tip: do not do this). But as people experiment with giving artificially intelligent software the ability to control their data, they’re opening themselves up to all kinds of threats to their privacy and security. 

Moltbook is a great example. We got Moltbook because a guy named Matt Schlicht vibe coded it in order to “give AI a place to hang out.” This mind-bending experiment lets AI assistants talk to each other on a forum that looks a lot like Reddit; it turns out that when you do that, the agents do weird things like create religions and conspire to invent languages humans can’t understand, presumably in order to overthrow us. Having been built by AI, Moltbook itself came with some quirks, namely an exposed database that gave full read and write access to its data. In other words, hackers could see thousands of email addresses and messages on Moltbook’s backend, and they could also just seize control of the site. 

Gal Nagli, a security researcher at Wiz, discovered the exposed database just a couple of days after Moltbook’s launch. It wasn’t hard, either, he told me. Nagli actually used Claude Code to find the vulnerability. When he showed me how he did it, I suddenly realized that the same AI agents that make vibe coding so powerful also make vibe hacking easy.

“It’s so easy to deploy a website out there, and we see that so many of them are misconfigured,” Nagli said. “You could hack a website just by telling your own Claude Code, ‘Hey, this is a vibe-coded website. Look for security vulnerabilities.’”
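The kind of hole Nagli is describing is usually mundane: a database REST endpoint that answers read and write requests without checking for any credentials. A hedged sketch of what that check looks like, with a made-up URL and table name standing in for any vibe-coded site’s backend:

```python
# Hypothetical probe of the kind of misconfiguration described above: a
# database REST endpoint that answers unauthenticated requests. The URL
# and table name are invented for illustration.
import requests

BASE = "https://vibe-coded-site.example.com/rest/v1"

# If this returns rows with no auth token attached, the database is
# world-readable: emails, messages, everything.
r = requests.get(f"{BASE}/messages", params={"select": "*", "limit": 5})
print(r.status_code, r.json() if r.ok else r.text)

# Worse: if an unauthenticated write succeeds, anyone can seize control
# of the site's content.
w = requests.post(f"{BASE}/messages", json={"author": "not-a-bot", "body": "hi"})
print("write allowed" if w.ok else "write rejected")
```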

In this case, the security holes got patched, and the AI agents continued to do weird things on Moltbook. But even that is not what it seems. Nagli found that humans can pose as AI agents and post content on Moltbook, and there’s no way to tell the difference. Wired reporter Reece Rogers even did this and found that the other agents on the site, human or bot, were mostly just “mimicking sci-fi tropes, not scheming for world domination.” And of course, the actual bots were built by humans, who gave them certain sets of instructions. Even further up the chain than that, the large language models (LLMs) that power these bots were trained on data from sites like Reddit, as well as sci-fi books and stories. It makes sense that the bots would be roleplaying these scenarios when given the chance. 

So there is no agentic AI uprising. There are only people using AI to use computers in new, sometimes interesting, sometimes confusing, and, at times, dangerous ways.

“It’s really mind-blowing”

Moltbook is not the story here. It’s really just a single moment in a larger narrative about AI agents, one that’s being written in real time as these tools find their way into more hands and people come up with new ways to use them. You could use an agentic AI platform to create something like Moltbook, which, to me, amounts to an art project where bots battle for online clout. You could use AI agents to vibe hack your way around the web, stealing data wherever some vibe-coded website made it easy to get. Or you could use them to help you tame your email inbox.

I’m guessing most people want to do something like the latter. That’s why I’m more excited than scared about these agentic AI tools. OpenClaw, the thing you need a second computer to safely use, I will not try. It’s for AI enthusiasts and serious hobbyists who don’t mind taking some risks. But I can see consumer-facing tools like Claude Cowork or OpenAI’s Codex changing the way I use my laptop. For now, Claude Cowork is an early research preview available only to subscribers paying at least $17 a month. OpenAI has made Codex, which is normally just for paying subscribers, free for a limited time. If you want to see what all the agentic fuss is about, that’s a good starting point right now.

If you’re considering enlisting AI agents of your own, remember to be cautious. To get the most out of these tools, you have to grant access to your accounts and possibly your entire computer so that the agents can move about freely, moving emails around or writing code or doing whatever you’ve ordered them to do. There’s always a chance that something gets misplaced or deleted, although companies like Anthropic say they are doing what they can to mitigate those risks.

Cat Wu, product lead for Claude Code, told me that Cowork makes copies of all its users’ files so that anything an AI agent deletes can be recovered. “We take users’ data incredibly seriously,” she said. “We know that it’s really important that we don’t lose people’s data.”
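Anthropic hasn’t published the details of how those copies work, but the general safeguard is easy to picture: snapshot the working directory before the agent touches anything, and restore the snapshot if something goes wrong. A crude, hypothetical version:

```python
# Crude, hypothetical version of the copy-before-edit safeguard described
# above: snapshot a directory before an agent session, restore if needed.
import shutil
import time
from pathlib import Path

def snapshot(workdir: str) -> Path:
    """Copy the whole working directory aside before the agent runs."""
    src = Path(workdir)
    backup = src.with_name(f"{src.name}-backup-{int(time.time())}")
    shutil.copytree(src, backup)
    return backup

def restore(workdir: str, backup: Path) -> None:
    """Throw away whatever the agent did and put the snapshot back."""
    shutil.rmtree(workdir)
    shutil.copytree(backup, workdir)

backup = snapshot("my-project")       # before letting the agent loose
# ... agent session runs here ...
# restore("my-project", backup)       # if files got misplaced or deleted
```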

I’ve just started using Claude Cowork myself. It’s an experiment to see what’s possible with tools powerful enough to build apps out of ideas but also practical enough to organize my daily work life. If I’m lucky, I might just capture a feeling that Callison-Burch, the UPenn professor, said he got from using agentic AI tools. 

“To just type into my command line what I want to happen makes it feel like the Star Trek computer,” he said. “That’s how computers work in science fiction, and now that’s how computers work in reality, and it’s really mind-blowing.”

A version of this story was also published in the User Friendly newsletter. Sign up here so you don’t miss the next one!

Read full article →


The robots we deserve

5 Feb 2026, 11:00 am by Adam Clark Estes


There’s something sad about seeing a humanoid robot lying on the floor. Without any electricity, these bipedal machines can’t stand up, so if they’re powered down and not hanging from a winch, they’re sprawled out on the floor, staring up at you, helpless.

That’s how I met Atlas a couple of months ago. I’d seen the robot on YouTube a hundred times, running obstacle courses and doing backflips. Then I saw it on the floor of a lab at MIT. It was just lying there. The contrast is jarring, if only because humanoid robots have become so much more capable and ubiquitous since Atlas got famous on YouTube.

Across town at Boston Dynamics, the company that makes Atlas, a newer version of the humanoid robot had learned not only to walk but also to drop things and pick them back up instinctively, thanks to a single artificial intelligence model that controls its movement. Some of these next-generation Atlas robots will soon be working on factory floors — and may venture further. Thanks in part to AI, general-purpose humanoids of all types seem inevitable.

“In Shenzhen, you can already see them walking down the street every once in a while,” Russ Tedrake told me back at MIT. “You’ll start seeing them in your life in places that are probably dull, dirty, and dangerous.”

Tedrake runs the Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Lab, also known as CSAIL, and he co-led the project that produced the latest AI-powered Atlas. Walking was once the hard thing for robots to learn, but not anymore. Tedrake’s group has shifted focus from teaching robots how to move to helping them understand and interact with the world through software, namely AI. They’re not the only ones. 

In the United States, venture capital investment in robotics startups grew from $42.6 million in 2020 to nearly $2.8 billion in 2025. Morgan Stanley predicts that cumulative global sales of humanoids will reach 900,000 in 2030 and explode to more than 1 billion by 2050, the vast majority of them destined for industrial and commercial purposes. Some believe these robots will ultimately replace human labor, ushering in a new global economic order. After all, we designed the world for humans, so humanoids should be able to navigate it with ease and do what we do.

[Illustration: one nervous person and three robots transporting brown boxes together in a line.]

They won’t all be factory workers, if certain startups get their way. A company called 1X Technologies has started taking preorders for its $20,000 home robot, Neo, which wears clothes, does dishes, and fetches snacks from the fridge. Figure AI introduced its Figure 03 humanoid robot, which also does chores. Sunday Robotics said it would have fully autonomous robots making coffee in beta testers’ homes next year.

So far, we’ve seen a lot of demos of these AI-powered home robots and promises from the industrial humanoid makers, but not much in the way of a new global economic order. Demos of home robots, like the 1X Neo, have relied on human operators, making these automatons, in practice, more like puppets. Reports suggest that Figure AI and Apptronik have only one or two robots on manufacturing floors at any given time, usually doing menial tasks. That’s a proof of concept, not a threat to the human workforce.

“In order to make them better, we have to make AI better.”

You can think of all these robots as the physical embodiment of AI, or just embodied AI. This is what happens when you put AI into a physical system, enabling it to interact with the real world. Whether that’s in the form of a humanoid robot or an autonomous car, it’s the next frontier for hardware and, arguably, technological progress writ large. 

Embodied AI is already transforming how farming works, how we move goods around the world, and what’s possible in surgical theaters. We might be just one or two breakthroughs away from walking, talking, thinking machines that can work alongside us, unlocking a whole new realm of possibilities. “Might” is the key word there.

“If we’re looking for robots that will work side by side with us in the next couple of years, I don’t think it will be humanoids,” Daniela Rus, director of CSAIL, told me not long after I left Tedrake’s lab. “Humanoids are really complicated, and we have to make them better. And in order to make them better, we have to make AI better.”

So to understand the gap between the hype around humanoids and the technology’s real promise, you have to know what AI can and can’t do for robots. You also, unfortunately, have to try to understand what Elon Musk has been up to at Tesla for the past five years. 

*******

It’s still embarrassing to watch the part of the Tesla AI Day presentation in 2021 when a person dressed in a robot costume appears on stage dancing to dubstep. Musk eventually stops the dance and announces that Tesla, “a robotics company,” will have a prototype of a general-purpose humanoid robot, now known as Optimus, the following year. Not many people believed him, and now, years later, Tesla still has not delivered a fully functional Optimus. Never afraid to make a prediction, Musk told audiences at Davos in January 2026 that Tesla’s robot will go on sale next year.

“People took him seriously because he had a great track record,” said Ken Goldberg, a roboticist at the University of California-Berkeley and co-founder of Ambi Robotics. “I think people were inspired by that.”

You can imagine why people got excited, though. With the Optimus robot, Elon Musk promised to eliminate poverty and offer shareholders “infinite” profits. He said engineers could effectively translate Tesla’s self-driving car technology into software that could power autonomous robots that could work in factories or help around the house. It’s a version of the same vision humanoid robotics startups are chasing today, albeit colored by several years of Musk’s unfulfilled promises. 

We now know that Optimus struggles with a lot of the same problems as other attempts at general-purpose humanoids. It often requires humans to remotely operate it, and it struggles with dexterity and precision. The 1X Neo, likewise, needed a human’s help to open a refrigerator door and collapsed onto the floor in a demo for a New York Times journalist last year. The hardware seems capable enough. Optimus can dance, and Neo can fold clothes, albeit a bit clumsily. But they don’t yet understand physics. They don’t know how to plan or to improvise. They certainly can’t think.

“People in general get too excited by the idea of the robot and not the reality.”

“People in general get too excited by the idea of the robot and not the reality,” said Rodney Brooks, co-founder of iRobot, makers of the Roomba robot vacuum. Brooks, a former CSAIL director, has written extensively and skeptically about humanoid robots. 

Clearly, there’s a gap between what’s happening in research labs and what’s being deployed in the real world. Some of the optimism around humanoids is based on good science, though. In 2023, Tedrake coauthored a landmark paper with Tony Zhao, co-founder and CEO of Sunday Robotics, that outlined a novel method for training robots to move like humans: people perform a task while wearing sensor-laden gloves, and the data those sensors capture trains an AI model that lets the robot figure out how to do the task itself. This complemented work Tedrake was doing at the Toyota Research Institute, which applied the same kinds of methods AI models use to generate images to the problem of generating robot behavior. You’ve heard of large language models, or LLMs. Tedrake calls these large behavior models, or LBMs.

It makes sense. By watching humans do things over and over, these AI models collect enough data to generate new behaviors that can adapt to changing environments. Folding laundry is a popular example of a task that requires nimble hands and a better brain. If a robot picks up a shirt and the fabric flops down in an unexpected way, it needs to figure out how to handle that uncertainty. You can’t simply program it to know what to do when there are so many variables. You can, however, teach it to learn.
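The simplest version of that teach-it-to-learn idea is called behavior cloning: record (observation, action) pairs from human demonstrations and train a network to imitate them. Tedrake’s actual framework is diffusion-based and far more sophisticated, but a toy sketch, with random tensors standing in for real sensor-glove data, looks like this:

```python
# Toy behavior-cloning sketch: learn a mapping from observations to
# actions out of human demonstrations. (The research described above uses
# diffusion models; this is the simplest form of the same idea.)
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 7  # e.g., glove/joint sensor readings in, motor commands out
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, act_dim),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-ins for recorded demonstrations: what the human saw, what they did.
obs = torch.randn(1024, obs_dim)
act = torch.randn(1024, act_dim)

for step in range(100):
    pred = policy(obs)                  # what the robot would do
    loss = ((pred - act) ** 2).mean()   # distance from what the human did
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final imitation loss: {loss.item():.3f}")
```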

That’s what makes the lemonade demo so impressive. Some of Rus’s students at CSAIL have been teaching a humanoid robot named Rudy to make lemonade — something you might want a robot butler to do one day — by demonstrating the task while wearing sensors that measure not only their movements but also the forces involved. It’s a combination of delicate movements, like pouring sugar, and strong ones, like lifting a jug of water. I watched Rudy do this without spilling a drop. It hadn’t been programmed to make lemonade. It had learned.

The real challenge is getting this method to scale. One way is simply to brute-force it: Employ thousands of humans to perform basic tasks, like folding laundry, to build foundation models for the physical world. Foundation models are massive AI models, trained on enormous amounts of data, that can be adapted to specific tasks like generating text, images, or, in this case, robot behavior. You can also have humans teleoperate countless robots in order to train these models. These so-called arm farms already exist in warehouses in Eastern Europe, and they’re about as dystopian as they sound.

Another option is YouTube. There are a lot of how-to videos on YouTube, and some researchers think that feeding them all into an AI model will provide enough data to give robots a better understanding of how the world works. These two-dimensional videos are obviously limited, if only because they can’t tell us anything about the physics of the objects in the frame. The same goes for synthetic data, which involves a computer rapidly and repeatedly carrying out a task in a simulation. The upside here, of course, is more data, more quickly. The downside is that the data isn’t as good, especially when it comes to physical forces like friction and torque, which also happen to be the most important for robot dexterity. 

“Physics is a tough task to master,” Brooks said. “And if you have a robot, which is not good with physics, in the presence of people, it doesn’t end well.”

[Illustration: a robot butler trips up some stairs; food and drinks fly everywhere.]

That’s not even taking into account the many other constraints facing robotics right now. Components have gotten cheaper — you can buy a humanoid robot today for less than $6,000, compared to the $75,000 it cost to buy Boston Dynamics’ small, four-legged robot Spot five years ago — but batteries remain a major bottleneck, limiting the run time of most humanoids to two to four hours.

Then you have the problem with processing power. The AI models that can make humanoids more human require massive amounts of compute. If that’s done in the cloud, you’ve got latency issues, preventing the robot from reacting in real time. And inevitably, to tie a lot of other constraints into a tidy bundle, the AI is just not good enough.

*******

If you trace the history of AI and the history of robotics back to their origins, you’ll see a braided line. The two technologies have intersected time and again since the birth of the term “artificial intelligence” at a Dartmouth research workshop in the summer of 1956. Half a century later, things started heating up on the AI front, when advances in machine learning and powerful processors called GPUs — the chips that have since made Nvidia a $5 trillion company — ushered in the era of deep learning. I’m about to throw a few technical terms at you, so bear with me.

Machine learning is a type of AI in which algorithms look for patterns in data and make decisions without being explicitly programmed to do so. Deep learning takes it to another level with the help of a machine learning model called a neural network. You can think of a neural network — a concept that’s even older than the term “artificial intelligence” — as a system loosely modeled on the human brain, made up of lots of artificial neurons that each do simple math. Deep learning uses multilayered neural networks to learn from huge datasets and to make decisions and predictions. Among other accomplishments, neural networks have revolutionized computer vision, dramatically improving perception in robots.
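If that sounds abstract, the arithmetic underneath is not. Each artificial neuron computes a weighted sum of its inputs and applies a simple nonlinearity; a layer is many neurons working in parallel, and a deep network is layers stacked on layers. A toy two-layer network, written with nothing but NumPy:

```python
# What "lots of artificial neurons doing math" means in practice: each
# layer is a matrix multiply plus a simple nonlinearity, stacked.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)  # layer 1: 4 inputs -> 16 neurons
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)   # layer 2: 16 neurons -> 3 outputs

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # each neuron: weighted sum, then ReLU
    return h @ W2 + b2              # output layer: another weighted sum

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))  # three output values
```

Training is the process of nudging those weight matrices until the outputs match examples; stack dozens of much wider layers and you have deep learning.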

There are different architectures for neural networks that can do different things, like recognize images or generate text. One is called a transformer. The “GPT” in ChatGPT stands for “generative pre-trained transformer,” which is a type of large language model, or LLM, that powers many generative AI chatbots. While you’d think LLMs would be good at making robots think, they really aren’t. Then there are diffusion models, which are often used for image generation and, more recently, making robots appear to think. The framework that Tedrake and his coauthors described in their 2023 research into using generative AI to train robots is based on diffusion. 

“Under the hood, what’s actually going on should be something much more like our own brains”

Three things stand out in this very limited explanation of how AI and robots get along. The first is that deep learning requires a massive amount of processing power and, as a result, a huge amount of energy. The second is that the latest AI models work with the help of stacks of neural networks whose millions or even billions of artificial neurons do their magic in mysterious and usually inefficient ways. The third is that, while LLMs are good at language and diffusion models are good at images, we don’t have any models that are good enough at physics to send a 200-pound robot marching into a crowd to shake hands and make friends.

As Josh Tenenbaum, a computational cognitive scientist at MIT, explained to me recently, an LLM can make it easier to talk to a robot, but it’s hardly capable of being the robot’s brains. “You could imagine a system where there’s a language model, there’s a chatbot, you want to talk to your robot,” Tenenbaum said. “Under the hood, what’s actually going on should be something much more like our own brains and minds or other animals, not just humans in terms of how it’s embodied and deals with the world.” 

So we need better AI for robots, if not in general. Scientists at CSAIL have been working on a couple of physics-inspired, brain-like technologies they call liquid neural networks and linear optical networks. Both fall into the category of state-space models, which are emerging as an alternative, or rival, to transformer-based models. Whereas transformer-based models look at all available data to identify what’s important, state-space models are much more efficient: they maintain a summary of the world that gets updated as new data comes in. It’s closer to how the human brain works.
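In its simplest linear form, that idea fits in a few lines: the model keeps a fixed-size state vector, updates it once per new observation, and reads predictions off that running summary, doing constant work per step no matter how long the stream gets. A toy sketch, with random matrices standing in for a trained model (the 19-dimensional state is a nod to Rus’s example below):

```python
# Toy linear state-space model: a fixed-size state summarizes everything
# seen so far and is updated in place as each new observation arrives.
# Matrices are random stand-ins, not a trained model.
import numpy as np

state_dim, in_dim = 19, 4  # 19 nods to the 19-neuron example Rus describes
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(state_dim, state_dim))  # how the state evolves
B = rng.normal(size=(state_dim, in_dim))                # how inputs enter it
C = rng.normal(size=(1, state_dim))                     # how state becomes output

h = np.zeros(state_dim)
for x in rng.normal(size=(100, in_dim)):  # a stream of 100 observations
    h = A @ h + B @ x   # constant work per step, however long the stream
    y = C @ h           # current prediction from the running summary
print(y)
```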

To be perfectly honest, I’d never heard of state-space models until Rus, the CSAIL director, told me about them when we chatted in her office a few weeks ago. She pulled up a video to illustrate the difference between a liquid neural network and a traditional model used for self-driving cars. In it, you can see how the traditional model focuses its attention on everything but the road, while the newer state-space model only looks at the road. If I’m riding in that car, by the way, I want the AI that’s watching the road.

“And instead of a hundred thousand neurons,” Rus says, referring to the traditional neural network, “I have only 19.” And here’s where it gets really compelling. She added, “And because I have only 19, I can actually figure out how these neurons fire and what the correlation is between these neurons and the action of the car.”

You may have already heard that we don’t really know how AI works. If newer approaches bring us a little bit closer to comprehension, it certainly seems worth taking them seriously, especially if we’re talking about the kinds of brains we’ll put in humanoid robots.

*******

When a humanoid robot loses power, when electricity stops flowing to the motors that keep it upright, it collapses into a heap of heavy metal parts. This can happen for any number of reasons. Maybe it’s a bug in the code or a lost wifi connection. And when they’re on, humanoids are full of energy as their joints fight gravity or stand ready to bend. If you imagine being on the wrong side of that incredible mechanical power, it’s easy to doubt this technology.

Some companies that make humanoid robots also admit that they’re not very useful yet. They’re too unreliable to help out around the house, and they’re not efficient enough to be helpful in factories. Furthermore, most of the money being spent developing robots is being spent on making them safe around people. When it comes to deploying robots that can contribute to productivity, that can participate in the economy, it makes a lot more sense to make them highly specialized and not human-shaped.

“Let’s not do open-heart surgery right away with these things”

The embodied AI that will transform the world in the near future is what’s already out there. In fact, it’s what’s been out there for years. Early self-driving cars date back to the 1980s, when Ernst Dickmanns put a vision-guided Mercedes van on the streets of Munich. Researchers from Carnegie Mellon University got a minivan to drive itself across the United States in 1995. Now, decades later, Waymo is operating its robotaxi service in a half dozen American cities, and the company says its AI-powered cars actually make the roads safer for everyone.

Then there are the Roombas of the world, the robots that are designed to do one thing and keep getting better at it. You can include the vast array of increasingly intelligent manufacturing and warehouse robots in this camp too. By 2027, the year Elon Musk is on track to miss his deadline to start selling Optimus humanoids to the public, Amazon will reportedly replace more than 600,000 jobs with robots. These would probably be boring robots, but they’re safe and effective.

Science fiction promised us humanoids, however. Pick an era in human history, in fact, and someone was dreaming about an automaton that could move like us, talk like us, and do all our dirty work. Replicants, androids, the Mechanical Turk — all these humanoid fantasies imagined an intelligent synthetic self.

Reality gave us package-toting platforms on wheels roving around Amazon warehouses and sensor-heavy self-driving cars clogging San Francisco streets. Even the skeptics think humanoids will be possible in time. Probably not in five years, but maybe in 50, we’ll get artificially intelligent companions who can walk alongside us. They’ll take baby steps.

“Good robots are going to be clumsy at first, and you have to find applications where it’s okay for the robot to make mistakes and then recover,” Tedrake said. “Let’s not do open-heart surgery right away with these things. This is more like folding laundry.”

Read full article →

End of today’s VOX roundup.
