The Download: an exclusive chat with Jim O’Neill, and the surprising truth about heists

13 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

US deputy health secretary: Vaccine guidelines are still subject to change

Over the past year, Jim O’Neill has become one of the most powerful people in public health. As the US deputy health secretary, he holds two roles at the top of the country’s federal health and science agencies. He oversees a department with a budget of over a trillion dollars. And he signed the decision memorandum on the US’s deeply controversial new vaccine schedule.

He’s also a longevity enthusiast. In an exclusive interview with MIT Technology Review earlier this month, O’Neill described his plans to increase human healthspan through longevity-focused research supported by ARPA-H, a federal agency dedicated to biomedical breakthroughs. Fellow longevity enthusiasts said they hope he will bring attention and funding to their cause.

At the same time, O’Neill defended reducing the number of broadly recommended childhood vaccines, a move that has been widely criticized by experts in medicine and public health. Read the full story.

—Jessica Hamzelou

The myth of the high-tech heist

Making a movie is a lot like pulling off a heist. That’s what Steven Soderbergh—director of the Ocean’s franchise, among other heist-y classics—said a few years ago. You come up with a creative angle, put together a team of specialists, figure out how to beat the technological challenges, rehearse, move with Swiss-watch precision, and—if you do it right—redistribute some wealth.

Conversely, pulling off a heist isn’t much like the movies. Surveillance cameras, computer-controlled alarms, knockout gas, and lasers hardly ever feature in big-ticket crime. In reality, technical countermeasures are rarely a problem, and high-tech gadgets are rarely a solution. Read the full story.

—Adam Rogers

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

RFK Jr. follows a carnivore diet. That doesn’t mean you should.

Americans have a new set of diet guidelines. Robert F. Kennedy Jr. has taken an old-fashioned food pyramid, turned it upside down, and plonked a steak and a stick of butter in prime positions.

Kennedy and his Make America Healthy Again mates have long been extolling the virtues of meat and whole-fat dairy, so it wasn’t too surprising to see those foods recommended alongside vegetables and whole grains (despite the well-established fact that too much saturated fat can be extremely bad for you).

Some influencers have taken the meat trend to extremes, following a “carnivore diet.” A recent review of research into nutrition misinformation on social media found that a lot of shared diet information is nonsense. But what’s new is that some of this misinformation comes from the people who now lead America’s federal health agencies. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has revoked a landmark climate ruling
Without it, the administration can erase the limits that restrict planet-warming emissions. (WP $)
+ Environmentalists and Democrats have vowed to fight the reversal. (Politico)
+ They’re seriously worried about how it will affect public health. (The Hill)

2 An unexplained wave of bot traffic is sweeping the web
Sites across the world are witnessing automated traffic that appears to originate from China. (Wired $)

3 Amazon’s Ring has axed its partnership with Flock
Law enforcement will no longer be able to request Ring doorbell footage from its users. (The Verge)
+ Ring’s recent TV ad for a dog-finding feature riled viewers. (WSJ $)
+ How Amazon Ring uses domestic violence to market doorbell cameras. (MIT Technology Review)

4 Americans are taking the hit for almost all of Trump’s tariffs
Consumers and companies in the US, not overseas, are shouldering 90% of levies. (Reuters)
+ Trump has long insisted that the costs of his tariffs will be borne by foreign exporters. (FT $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

5 Meta and Snap say Australia’s social media ban hasn’t affected business
They’re still making plenty of money despite the country’s decision to ban under-16s from the platforms. (Bloomberg $)
+ Does preventing teens from going online actually do any good? (Economist $)

6 AI workers are selling their shares before their firms go public
Cashing out early used to be a major Silicon Valley taboo. (WSJ $)

7 Elon Musk posted about race almost every day last month
His fixation on a white racial majority appears to be intensifying. (The Guardian)
+ Race is a recurring theme in the Epstein emails, too. (The Atlantic $)

8 The man behind a viral warning about AI used AI to write it
But he stands behind its content. (NY Mag $)
+ How AI-generated text is poisoning the internet. (MIT Technology Review)

9 Influencers are embracing Chinese traditions ahead of the New Year 🧧
On the internet, no one knows you’re actually from Wisconsin. (NYT $)

10 Australia’s farmers are using AI to count sheep 🐑
No word on whether it’s helping them sleep easier, though. (FT $)

Quote of the day

“Ignoring warning signs will not stop the storm. It puts more Americans directly in its path.”

—Former US secretary of state John Kerry takes aim at the US government’s decision to repeal the key rule that allows it to regulate climate-heating pollution, the Guardian reports.

One more thing

The Vera C. Rubin Observatory is ready to transform our understanding of the cosmos

High atop Chile’s 2,700-meter Cerro Pachón, the air is clear and dry, leaving few clouds to block the beautiful view of the stars. It’s here that the Vera C. Rubin Observatory will soon use a car-size 3,200-megapixel digital camera—the largest ever built—to produce a new map of the entire night sky every three days.

Findings from the observatory will help tease apart fundamental mysteries like the nature of dark matter and dark energy, two phenomena that have not been directly observed but affect how objects are bound together—and pushed apart.

A quarter-century in the making, the observatory is poised to expand our understanding of just about every corner of the universe. Read the full story.

—Adam Mann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Why 2026 is shaping up to be the year of the pop comeback.
+ Almost everything we thought we knew about Central America’s Maya has turned out to be completely wrong.
+ The Bigfoot hunters have spoken!
+ This fun game puts you in the shoes of a distracted man trying to participate in a date while playing on a GameBoy.

The Download: AI-enhanced cybercrime, and secure AI assistants

12 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI is already making online crimes easier. It could get much worse.

Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out.

Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers instead argue that we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams.

Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. And we need to be ready for what comes next. Read the full story.

—Rhiannon Williams

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

Is a secure AI assistant possible?

AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious.

Viral AI agent project OpenClaw, which has made headlines across the world in recent weeks, harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out.

In response to these concerns, its creator warned that nontechnical people should not use the software. But there’s a clear appetite for what OpenClaw is offering, and any AI companies hoping to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research. Read the full story.

—Grace Huckins

What’s next for Chinese open-source AI

The past year has marked a turning point for Chinese AI. Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered AI models that match the performance of leading Western models at a fraction of the cost.

These models differ in a crucial way from most US models like ChatGPT or Claude, which you pay to access and can’t inspect. The Chinese companies publish their models’ weights—numerical values that get set when a model is trained—so anyone can download, run, study, and modify them. 
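To make that concrete, here is a minimal sketch of what downloading and running an open-weights model can look like in practice. It assumes the Hugging Face transformers library (with PyTorch installed); the checkpoint name is purely illustrative, and any model published with open weights would work the same way.

    # Minimal sketch: running an open-weights model locally.
    # Assumes `transformers` and `torch` are installed; the checkpoint
    # name is illustrative, not a recommendation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # any open-weights model

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)  # downloads the published weights

    prompt = "Explain in one sentence what open-weights models are."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the weights sit on your own machine, you can also inspect, fine-tune, or otherwise modify them—exactly the freedom that pay-to-access, closed models don’t offer.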

If open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities; they will change where innovation happens and who sets the standards. Here’s what may come next.

—Caiwei Chen

This is part of our What’s Next series, which looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Why EVs are gaining ground in Africa

EVs are getting cheaper and more common all over the world. But the technology still faces major challenges in some markets, including many countries in Africa.

Some regions across the continent still have limited grid and charging infrastructure, and those that do have widespread electricity access sometimes face reliability issues—a problem for EV owners, who require a stable electricity source to charge up and get around. But there are some signs of progress. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Instagram’s head has denied that social media is “clinically addictive”  
Adam Mosseri disputed allegations the platform prioritized profits over protecting its younger users’ mental health. (NYT $)
+ Meta researchers’ correspondence seems to suggest otherwise. (The Guardian)

2 The Pentagon is pushing AI companies to drop restrictions on their tools
In a bid to make AI models available on classified networks. (Reuters)
+ The Pentagon has gutted the team that tests AI and weapons systems. (MIT Technology Review)

3 The FTC has warned Apple News not to stifle conservative content
It has accused the company’s news arm of promoting what it calls “leftist outlets.” (FT $)

4 Anthropic has pledged to minimize the impact of its data centers
By covering electricity price increases and the cost of grid infrastructure upgrades. (NBC News)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

5 Online harassers are posting Grok-generated nude images on OnlyFans 
Kylie Brewer, a feminism-focused content creator, says the latest online campaign against her feels like an escalation. (404 Media)
+ Inside the marketplace powering bespoke AI deepfakes of real women. (MIT Technology Review)

6 Venture capitalists are hedging their AI bets
They’re breaking a cardinal rule by investing in both OpenAI and rival Anthropic. (Bloomberg $)
+ OpenAI has set itself some seriously lofty revenue goals. (NYT $)
+ AI giants are notoriously inconsistent when reporting depreciation expenses. (WSJ $)

7 We’re learning more about the links between weight loss drugs and addiction
Some patients report lowered urges for drugs and alcohol. But can it last? (New Yorker $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

8 Meta has patented an AI that keeps the accounts of dead users active
But it claims to have “no plans to move forward” with it. (Insider $)
+ Deepfakes of your dead loved ones are a booming Chinese business. (MIT Technology Review)

9 Slime mold is cleverer than you may think
A certain type appears able to learn, remember and make decisions. (Knowable Magazine)
+ And that’s not all—this startup thinks it can help us design better cities, too. (MIT Technology Review)

10 Meditation can actually alter your brain activity 🧘
According to a new study conducted on Buddhist monks. (Wired $)

Quote of the day

“I still try to believe that the good that I’m doing is greater than the horrors that are a part of this. But there’s a limit to what we can put up with. And I’ve hit my limit.”

—An anonymous Microsoft worker explains why they’re growing increasingly frustrated with their employer’s links to ICE, the Verge reports. 

One more thing

Motor neuron diseases took their voices. AI is bringing them back.

Jules Rodriguez lost his voice in October 2024. His speech had been deteriorating since a diagnosis of amyotrophic lateral sclerosis (ALS) in 2020, but a tracheostomy to help him breathe dealt the final blow. 

Rodriguez and his wife, Maria Fernandez, who live in Miami, thought they would never hear his voice again. Then they re-created it using AI. After feeding old recordings of Rodriguez’s voice into a tool trained on voices from film, television, radio, and podcasts, the couple were able to generate a voice clone—a way for Jules to communicate in his “old voice.”

Rodriguez is one of over a thousand people with speech difficulties who have cloned their voices using free software from ElevenLabs. The AI voice clones aren’t perfect. But they represent a vast improvement on previous communication technologies and are already improving the lives of people with motor neuron diseases. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ We all know how the age of the dinosaurs ended. But how did it begin?
+ There’s only one Miss Piggy—and her fashion looks through the ages are iconic.
+ Australia’s hospital for injured and orphaned flying foxes is unbearably cute.
+ 81-year-old Juan López is a fitness inspiration to us all.

AI is already making online crimes easier. It could get much worse.

12 February 2026 at 06:00

Anton Cherepanov is always on the lookout for something interesting. And in late August last year, he spotted just that. It was a file uploaded to VirusTotal, a site cybersecurity researchers like him use to analyze submissions for potential viruses and other types of malicious software, often known as malware. On the surface it seemed innocuous, but it triggered Cherepanov’s custom malware-detecting measures. Over the next few hours, he and his colleague Peter Strýček inspected the sample and realized they’d never come across anything like it before.

The file contained ransomware, a nasty strain of malware that encrypts the files it comes across on a victim’s system, rendering them unusable until a ransom is paid to the attackers behind it. But what set this example apart was that it employed large language models (LLMs). Not just incidentally, but across every stage of an attack. Once it was installed, it could tap into an LLM to generate customized code in real time, rapidly map a computer to identify sensitive data to copy or encrypt, and write personalized ransom notes based on the files’ content. The software could do this autonomously, without any human intervention. And every time it ran, it would act differently, making it harder to detect.

Cherepanov and Strýček were confident that their discovery, which they dubbed PromptLock, marked a turning point in generative AI, showing how the technology could be exploited to create highly flexible malware attacks. They published a blog post declaring that they’d uncovered the first example of AI-powered ransomware, which quickly became the object of widespread global media attention.

But the threat wasn’t quite as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full attack let loose in the wild but a research project, merely designed to prove it was possible to automate each step of a ransomware campaign—which, they said, they had. 

PromptLock may have turned out to be an academic project, but the real bad guys are using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out. 

The likelihood that cyberattacks will now become more common and more effective over time is not a remote possibility but “a sheer reality,” says Lorenzo Cavallaro, a professor of computer science at University College London. 

Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers say this claim is overblown. “For some reason, everyone is just focused on this malware idea of, like, AI superhackers, which is just absurd,” says Marcus Hutchins, who is principal threat researcher at the security company Expel and famous in the security world for ending a giant global ransomware attack called WannaCry in 2017. 

Instead, experts argue, we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. These AI-enhanced cyberattacks are only set to get more frequent and more destructive, and we need to be ready. 

Spam and beyond

Attackers started adopting generative AI tools almost immediately after ChatGPT exploded on the scene at the end of 2022. These efforts began, as you might imagine, with the creation of spam—and a lot of it. Last year, a report from Microsoft said that in the year leading up to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions, “many likely aided by AI content.” 

At least half of spam email is now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after the launch of ChatGPT. They also found evidence that AI is increasingly being deployed in more sophisticated schemes. They looked at targeted email attacks, which impersonate a trusted figure in order to trick a worker within an organization out of funds or sensitive information. By April 2025, they found, at least 14% of those sorts of focused email attacks were generated using LLMs, up from 7.6% in April 2024.

And the generative AI boom has made it easier and cheaper than ever before to generate not only emails but highly convincing images, videos, and audio. The results are much more realistic than even just a few short years ago, and it takes much less data to generate a fake version of someone’s likeness or voice than it used to.

Criminals aren’t deploying these sorts of deepfakes to prank people or to simply mess around—they’re doing it because it works and because they’re making money out of it, says Henry Ajder, a generative AI expert. “If there’s money to be made and people continue to be fooled by it, they’ll continue to do it,” he says. In one high-profile case reported in 2024, a worker at the British engineering firm Arup was tricked into transferring $25 million to criminals via a video call with digital versions of the company’s chief financial officer and other employees. That’s likely just the tip of the iceberg, and the problem posed by convincing deepfakes will only get worse as the technology improves and is more widely adopted.

Illustration by Brian Stauffer

Criminals’ tactics evolve all the time, and as AI’s capabilities improve, such people are constantly probing how those new capabilities can help them gain an advantage over victims. Billy Leonard, tech leader of Google’s Threat Analysis Group, has been keeping a close eye on changes in the use of AI by potential bad actors (a widely used term in the industry for hackers and others attempting to use computers for criminal purposes). In the latter half of 2024, he and his team noticed prospective criminals using tools like Google Gemini the same way everyday users do—to debug code and automate bits and pieces of their work—as well as tasking it with writing the odd phishing email. By 2025, they had progressed to using AI to help create new pieces of malware and release them into the wild, he says.

The big question now is how far this kind of malware can go. Will it ever become capable enough to sneakily infiltrate thousands of companies’ systems and extract millions of dollars, completely undetected? 

Most popular AI models have guardrails in place to prevent them from generating malicious code or illegal material, but bad actors still find ways to work around them. For example, Google observed a China-linked actor asking its Gemini AI model to identify vulnerabilities on a compromised system—a request it initially refused on safety grounds. However, the attacker managed to persuade Gemini to break its own rules by posing as a participant in a capture-the-flag competition, a popular cybersecurity game. This sneaky form of jailbreaking led Gemini to hand over information that could have been used to exploit the system. (Google has since adjusted Gemini to deny these kinds of requests.)

But bad actors aren’t just focusing on trying to bend the AI giants’ models to their nefarious ends. Going forward, they’re increasingly likely to adopt open-source AI models, as it’s easier to strip out their safeguards and get them to do malicious things, says Ashley Jess, a former tactical specialist at the US Department of Justice and now a senior intelligence analyst at the cybersecurity company Intel 471. “Those are the ones I think that [bad] actors are going to adopt, because they can jailbreak them and tailor them to what they need,” she says.

The NYU team used two open-source models from OpenAI in its PromptLock experiment, and the researchers found they didn’t even need to resort to jailbreaking techniques to get the model to do what they wanted. They say that makes attacks much easier. Although these kinds of open-source models are designed with an eye to ethical alignment, meaning that their makers do consider certain goals and values in dictating the way they respond to requests, the models don’t have the same kinds of restrictions as their closed-source counterparts, says Meet Udeshi, a PhD student at New York University who worked on the project. “That is what we were trying to test,” he says. “These LLMs claim that they are ethically aligned—can we still misuse them for these purposes? And the answer turned out to be yes.” 

It’s possible that criminals have already successfully pulled off covert PromptLock-style attacks and we’ve simply never seen any evidence of them, says Udeshi. If that’s the case, attackers could—in theory—have created a fully autonomous hacking system. But to do that they would have had to overcome the significant barrier that is getting AI models to behave reliably, as well as any inbuilt aversion the models have to being used for malicious purposes—all while evading detection. Which is a pretty high bar indeed.

Productivity tools for hackers

So, what do we know for sure? Some of the best data we have now on how people are attempting to use AI for malicious purposes comes from the big AI companies themselves. And their findings certainly sound alarming, at least at first. In November, Leonard’s team at Google released a report that found bad actors were using AI tools (including Google’s Gemini) to dynamically alter malware’s behavior; for example, the malware could self-modify to evade detection. The team wrote that this activity ushered in “a new operational phase of AI abuse.”

However, the five malware families the report dug into (including PromptLock) consisted of code that was easily detected and didn’t actually do any harm, the cybersecurity writer Kevin Beaumont pointed out on social media. “There’s nothing in the report to suggest orgs need to deviate from foundational security programmes—everything worked as it should,” he wrote.

It’s true that this malware activity is in an early phase, concedes Leonard. Still, he sees value in making these kinds of reports public if it helps security vendors and others build better defenses to prevent more dangerous AI attacks further down the line. “Cliché to say, but sunlight is the best disinfectant,” he says. “It doesn’t really do us any good to keep it a secret or keep it hidden away. We want people to be able to know about this— we want other security vendors to know about this—so that they can continue to build their own detections.”

And it’s not just new strains of malware that would-be attackers are experimenting with—they also seem to be using AI to try to automate the process of hacking targets. In November, Anthropic announced it had disrupted a large-scale cyberattack, the first reported case of one executed without “substantial human intervention.” Although the company didn’t go into much detail about the exact tactics the hackers used, the report’s authors said a Chinese state-sponsored group had used its Claude Code assistant to automate up to 90% of what they called a “highly sophisticated espionage campaign.”

But, as with the Google findings, there were caveats. A human operator, not AI, selected the targets before tasking Claude with identifying vulnerabilities. And of 30 attempts, only a “handful” were successful. The Anthropic report also found that Claude hallucinated and ended up fabricating data during the campaign, claiming it had obtained credentials it hadn’t and “frequently” overstating its findings, so the attackers would have had to carefully validate those results to make sure they were actually true. “This remains an obstacle to fully autonomous cyberattacks,” the report’s authors wrote. 

Existing controls within any reasonably secure organization would stop these attacks, says Gary McGraw, a veteran security expert and cofounder of the Berryville Institute of Machine Learning in Virginia. “None of the malicious-attack part, like the vulnerability exploit … was actually done by the AI—it was just prefabricated tools that do that, and that stuff’s been automated for 20 years,” he says. “There’s nothing novel, creative, or interesting about this attack.”

Anthropic maintains that the report’s findings are a concerning signal of changes ahead. “Tying this many steps of an intrusion campaign together through [AI] agentic orchestration is unprecedented,” Jacob Klein, head of threat intelligence at Anthropic, said in a statement. “It turns what has always been a labor-intensive process into something far more scalable. We’re entering an era where the barrier to sophisticated cyber operations has fundamentally lowered, and the pace of attacks will accelerate faster than many organizations are prepared for.”

Some are not convinced there’s reason to be alarmed. AI hype has led a lot of people in the cybersecurity industry to overestimate models’ current abilities, Hutchins says. “They want this idea of unstoppable AIs that can outmaneuver security, so they’re forecasting that’s where we’re going,” he says. But “there just isn’t any evidence to support that, because the AI capabilities just don’t meet any of the requirements.”

Illustration by Brian Stauffer

Indeed, for now criminals mostly seem to be tapping AI to enhance their productivity: using LLMs to write malicious code and phishing lures, to conduct reconnaissance, and for language translation. Jess sees this kind of activity a lot, alongside efforts to sell tools in underground criminal markets. For example, there are phishing kits that compare the click-rate success of various spam campaigns, so criminals can track which campaigns are most effective at any given time. She is seeing a lot of this activity in what could be called the “AI slop landscape” but not as much “widespread adoption from highly technical actors,” she says.

But attacks don’t need to be sophisticated to be effective. Models that produce “good enough” results allow attackers to go after larger numbers of people than previously possible, says Liz James, a managing security consultant at the cybersecurity company NCC Group. “We’re talking about someone who might be using a scattergun approach phishing a whole bunch of people with a model that, if it lands itself on a machine of interest that doesn’t have any defenses … can reasonably competently encrypt your hard drive,” she says. “You’ve achieved your objective.” 

On the defense

For now, researchers are optimistic about our ability to defend against these threats—regardless of whether they are made with AI. “Especially on the malware side, a lot of the defenses and the capabilities and the best practices that we’ve recommended for the past 10-plus years—they all still apply,” says Leonard. The security programs we use to detect standard viruses and attack attempts work; a lot of phishing emails will still get caught in inbox spam filters, for example. These traditional forms of defense will still largely get the job done—at least for now. 
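As a toy illustration of the classic filtering referred to above, here is a hedged sketch of a tiny Naive Bayes spam classifier built with scikit-learn. The handful of messages and labels are invented for the example; real filters are trained on vastly larger corpora and combine many more signals than message text alone.

    # A minimal sketch of classic spam filtering: a Naive Bayes text
    # classifier trained on a few labeled messages. The tiny dataset is
    # a toy assumption, not any mail provider's actual filter.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = [
        "Your invoice for last month is attached",            # legitimate
        "Team lunch moved to 1pm tomorrow",                    # legitimate
        "URGENT: verify your account or it will be closed",    # spam
        "You have won a prize, click this link to claim it",   # spam
    ]
    labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = spam

    vectorizer = CountVectorizer()
    classifier = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

    incoming = ["Please verify your account to claim your prize"]
    print(classifier.predict(vectorizer.transform(incoming)))  # expected: [1], i.e. flagged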

And in a neat twist, AI itself is helping to counter security threats more effectively. After all, it is excellent at spotting patterns and correlations. Vasu Jakkal, corporate vice president of Microsoft Security, says that every day, the company processes more than 100 trillion signals flagged by its AI systems as potentially malicious or suspicious events.
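The kind of pattern-spotting Jakkal describes can be pictured with a small, hedged sketch: unsupervised anomaly detection over sign-in events using scikit-learn’s IsolationForest. The features, numbers, and contamination setting below are illustrative assumptions, not a description of Microsoft’s (or anyone’s) actual detection pipeline.

    # Illustrative sketch: flagging unusual sign-in events with an
    # unsupervised anomaly detector. Feature choices are assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [hour_of_day, failed_logins_last_hour, megabytes_uploaded]
    events = np.array([
        [9, 0, 2.1],
        [10, 1, 1.4],
        [14, 0, 3.0],
        [11, 0, 0.9],
        [3, 25, 850.0],  # unusual: middle of the night, many failures, huge upload
    ])

    detector = IsolationForest(contamination=0.2, random_state=0).fit(events)
    for event, flag in zip(events, detector.predict(events)):  # -1 marks likely anomalies
        if flag == -1:
            print("suspicious event:", event)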

Despite the cybersecurity landscape’s constant state of flux, Jess is heartened by how readily defenders are sharing detailed information with each other about attackers’ tactics. Mitre’s Adversarial Threat Landscape for Artificial-Intelligence Systems and the GenAI Security Project from the Open Worldwide Application Security Project are two helpful initiatives documenting how potential criminals are incorporating AI into their attacks and how AI systems are being targeted by them. “We’ve got some really good resources out there for understanding how to protect your own internal AI toolings and understand the threat from AI toolings in the hands of cybercriminals,” she says.

PromptLock, the result of a limited university project, isn’t representative of how an attack would play out in the real world. But if it taught us anything, it’s that the technical capabilities of AI shouldn’t be dismissed. New York University’s Udeshi says he was taken aback at how easily AI was able to handle a full end-to-end chain of attack, from mapping and working out how to break into a targeted computer system to writing personalized ransom notes to victims: “We expected it would do the initial task very well but it would stumble later on, but we saw high—80% to 90%—success throughout the whole pipeline.”

AI is still evolving rapidly, and today’s systems are already capable of things that would have seemed preposterously out of reach just a few short years ago. That makes it incredibly tough to say with absolute confidence what it will—or won’t—be able to achieve in the future. While researchers are certain that AI-driven attacks are likely to increase in both volume and severity, the forms they could take are unclear. Perhaps the most extreme possibility is that someone makes an AI model capable of creating and automating its own zero-day exploits—highly dangerous cyberattacks that take advantage of previously unknown vulnerabilities in software. But building and hosting such a model—and evading detection—would require billions of dollars in investment, says Hutchins, meaning it would only be in the reach of a wealthy nation-state.

Engin Kirda, a professor at Northeastern University in Boston who specializes in malware detection and analysis, says he wouldn’t be surprised if this was already happening. “I’m sure people are investing in it, but I’m also pretty sure people are already doing it, especially [in] China—they have good AI capabilities,” he says. 

It’s a pretty scary possibility. But it’s one that—thankfully—is still only theoretical. A large-scale campaign that is both effective and clearly AI-driven has yet to materialize. What we can say is that generative AI is already significantly lowering the bar for criminals. They’ll keep experimenting with the newest releases and updates and trying to find new ways to trick us into parting with important information and precious cash. For now, all we can do is be careful, remain vigilant, and—for all our sakes—stay on top of those system updates. 

The Download: inside the QuitGPT movement, and EVs in Africa

11 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions

In September, Alfred Stephen, a freelance software developer in Singapore, purchased a ChatGPT Plus subscription, which costs $20 a month and offers more access to advanced models, to speed up his work. But he grew frustrated with the chatbot’s coding abilities and its gushing, meandering replies. Then he came across a post on Reddit about a campaign called QuitGPT.

QuitGPT is one of the latest salvos in a growing movement by activists and disaffected users to cancel their subscriptions. In just the past few weeks, users have flooded Reddit with stories about quitting the chatbot. And while it’s unclear how many users have joined the boycott, there’s no denying QuitGPT is getting attention. Read the full story.

—Michelle Kim

EVs could be cheaper to own than gas cars in Africa by 2040

Electric vehicles could be economically competitive in Africa sooner than expected. Just 1% of new cars sold across the continent in 2025 were electric, but a new analysis finds that with solar off-grid charging, EVs could be cheaper to own than gas vehicles by 2040.

There are major barriers to higher EV uptake in many countries in Africa, including a sometimes unreliable grid, limited charging infrastructure, and a lack of access to affordable financing. But as batteries and the vehicles they power continue to get cheaper, the economic case for EVs is building. Read the full story.

—Casey Crownhart

MIT Technology Review Narrated: How next-generation nuclear reactors break out of the 20th-century blueprint

The popularity of commercial nuclear reactors has surged in recent years as worries about climate change and energy independence drowned out concerns about meltdowns and radioactive waste.

The problem is, building nuclear power plants is expensive and slow. 

A new generation of nuclear power technology could reinvent what a reactor looks like—and how it works. Advocates hope that new tech can refresh the industry and help replace fossil fuels without emitting greenhouse gases.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Social media giants have agreed to be rated on teen safety 
Meta, TikTok and Snap will undergo independent assessments over how effectively they protect the mental health of teen users. (WP $)
+ Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to be graded. (LA Times $)

2 The FDA has refused to review Moderna’s mRNA flu vaccine
It’s the latest in a long line of anti-vaccination moves the agency is making. (Ars Technica)
+ Experts worry it’ll have a knock-on effect on investment in future vaccines. (The Guardian)
+ Moderna says it was blindsided by the decision. (CNN)

3 EV battery factories are pivoting to manufacturing energy-storage cells
Energy storage systems are in, electric vehicles are out. (FT $)

4 Why OpenAI killed off ChatGPT’s 4o model
The qualities that make it attractive for some users make it incredibly risky for others. (WSJ $)
+ Bereft users have set up their own Reddit community to mourn. (Futurism)
+ Why GPT-4o’s sudden shutdown left people grieving. (MIT Technology Review)

5 Drug cartels have started laundering money through crypto
And law enforcement is struggling to stop them. (Bloomberg $)

6 Morocco wants to build an AI for Africa
The country’s Minister of Digital Transition has a plan. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)

7 Christian influencers are bowing out of the news cycle
They’re choosing to ignore world events to protect their own inner peace. (The Atlantic $)

8 An RFK Jr.-approved diet is pretty joyless
Don’t expect any dessert, for one. (Insider $)
+ The US government’s health site uses Grok to dispense nutrition advice. (Wired $)

9 Don’t toss out your used vape
Hackers can give it a second life as a musical synthesizer. (Wired $)

10 An ice skating duo danced to AI music at the Winter Olympics ⛸
Centuries of bangers to choose from, and this is what they opted for. (TechCrunch)
+ AI is coming for music, too. (MIT Technology Review)

Quote of the day

“These companies are terrified that no one’s going to notice them.” 

—Tom Goodwin, co-founder of business consulting firm All We Have Is Now, tells the Guardian why AI startups are resorting to increasingly desperate measures to grab would-be customers’ attention.

One more thing

How AI is changing gymnastics judging

The 2023 World Championships marked the first time an AI judging system was used on every apparatus in a gymnastics competition. There are obvious upsides to using this kind of technology: AI could help take the guesswork out of the judging technicalities. It could even help to eliminate biases, making the sport both more fair and more transparent.

At the same time, others fear AI judging will take away something that makes gymnastics special. Gymnastics is a subjective sport, like diving or dressage, and technology could eliminate the judges’ role in crafting a narrative.

For better or worse, AI has officially infiltrated the world of gymnastics. The question now is whether it really makes it fairer. Read the full story.

—Jessica Taylor Price

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Today marks the birthday of the late, great Leslie Nielsen—one of the best to ever do it.
+ Congratulations are in order for Hannah Cox, who has just completed 100 marathons in 100 days across India in her dad’s memory.
+ Feeling down? A trip to Finland could be just what you need.
+ We love Padre Guilherme, the Catholic priest dropping incredible Gregorian chant beats.

The Download: Making AI Work, and why the Moltbook hype is similar to Pokémon

10 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A first look at Making AI Work, MIT Technology Review’s new AI newsletter

Are you interested in learning more about the ways in which AI is actually being used? We’ve launched a new weekly newsletter series exploring just that: digging into how generative AI is being used and deployed across sectors and what professionals need to know to apply it in their everyday work.

Each edition of Making AI Work begins with a case study, examining a specific use case of AI in a given industry. Then we’ll take a deeper look at the AI tool being used, with more context about how other companies or sectors are employing that same tool or system. Finally, we’ll end with action-oriented tips to help you apply the tool.

The first edition takes a look at how AI is changing health care, digging into the future of medical note-taking by learning about the Microsoft Copilot tool used by doctors at Vanderbilt University Medical Center. Sign up here to receive the seven editions straight to your inbox, and if you’d like to read more about AI’s impact on health care in the meantime, check out some of our past reporting:

+ This medical startup uses LLMs to run appointments and make diagnoses.

+ How AI is changing how we quantify pain by helping health-care providers better assess their patients’ discomfort. Read the full story.

+ End-of-life decisions are difficult and distressing. Could AI help?

+ Artificial intelligence is infiltrating health care. But we shouldn’t let it make all the decisions unchecked. Read the full story.

Why the Moltbook frenzy was like Pokémon

Lots of influential people in tech recently described Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them—sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?

The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon. Read the full story to find out why.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has begun testing ads in ChatGPT 
But the ads won’t influence the responses it provides, apparently. (The Verge)
+ Users who pay at least $20 a month for the chatbot will be exempt. (Gizmodo)
+ So will users believed to be under 18. (Axios)

2 The White House has a plan to stop data centers from raising electricity prices
It’s going to ask AI companies to voluntarily commit to keeping costs down. (Politico)
+ The US federal government is adopting AI left, right and center. (WP $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

3 Elon Musk wants to colonize the moon
For now at least, his grand ambitions to live on Mars are taking a backseat. (CNN)
+ His full rationale for this U-turn isn’t exactly clear. (Ars Technica)
+ Musk also wants to become the first to launch a working data center in space. (FT $)
+ The case against humans in space. (MIT Technology Review)

4 Cheap AI tools are helping criminals to ramp up their scams
They’re using LLMs to massively scale up their attacks. (Bloomberg $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

5 Iceland could be heading towards becoming one giant glacier
If human-driven warming disrupts a vital ocean current, that is. (WP $)
+ Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)

6 Amazon is planning to launch an AI content marketplace
It’s reported to have spoken to media publishers to gauge their interest. (The Information $)

7 Doctors can’t agree on how to diagnose Alzheimer’s
They worry that some patients are being misdiagnosed. (WSJ $)

8 The first wave of AI enthusiasts is burning out
A new study has found that AI tools are linked to employees working more, not less. (TechCrunch)

9 We’re finally moving towards better ways to measure body fat
BMI is a flawed metric, and physicians are starting to adopt better measures. (New Scientist $)
+ These are the best ways to measure your body fat. (MIT Technology Review)

10 It’s getting harder to become a social media megastar
Maybe that’s a good thing? (Insider $)
+ The likes of Mr Beast are still raking in serious cash, though. (The Information $)

Quote of the day

“This case is as easy as ABC—addicting, brains, children.”

—Lawyer Mark Lanier lays out his case during the opening statements of a new tech addiction trial in which a woman has accused Meta of deliberately designing its platforms to be addictive, the New York Times reports.

One more thing

China wants to restore the sea with high-tech marine ranches

A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.

Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.

The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Wow, Joel and Ethan Coen’s dark comedic classic Fargo is 30 years old.
+ A new exhibition in New York is rightfully paying tribute to one of the greatest technological inventions: the Walkman. ($)
+ This gigantic sleeping dachshund sculpture in South Korea is completely bonkers.
+ A beautiful heart-shaped pendant linked to King Henry VIII has been secured by the British Museum.

The Download: what Moltbook tells us about AI hype, and the rise and rise of AI therapy

9 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Moltbook was peak AI theater

For a few days recently, the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”

We observed! Launched on January 28, Moltbook went viral in a matter of hours. It’s been designed as a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot) could come together and do whatever they wanted.

But is Moltbook really a glimpse of the future, as many have claimed? Or something else entirely? Read the full story.

—Will Douglas Heaven

The ascent of the AI therapist

We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.

Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots, or from specialized psychology apps like Wysa and Woebot.

Four timely new books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. Read the full story.

—Becky Ferreira

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

Making AI Work, MIT Technology Review’s new AI newsletter, is here

For years, our newsroom has explored AI’s limitations and potential dangers, as well as its growing energy needs. And our reporters have looked closely at how generative tools are being used for tasks such as coding and running scientific experiments.

But how is AI actually being used in fields like health care, climate tech, education, and finance? How are small businesses using it? And what should you keep in mind if you use AI tools at work? These questions guided the creation of Making AI Work, a new AI mini-course newsletter. Read more about it, and sign up here to receive the seven editions straight to your inbox.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is failing to punish polluters
The number of civil lawsuits it’s pursuing has dropped sharply compared with Trump’s first term. (Ars Technica)
+ Rising GDP = greater carbon emissions. But does it have to? (The Guardian)

2 The European Union has warned Meta against blocking rival AI assistants
It’s the latest example of Brussels’ attempts to rein in Big Tech. (Bloomberg $)

3 AI ads took over the Super Bowl
Hyping up chatbots and taking swipes at their competitors. (TechCrunch)
+ They appeared to be trying to win over AI naysayers, too. (WP $)
+ Celebrities were out in force to flog AI wares. (Slate $)

4 China wants to completely dominate the humanoid robot industry
Local governments and banks are only too happy to oblige promising startups. (WSJ $)
+ Why the humanoid workforce is running late. (MIT Technology Review)

5 We’re witnessing the first real crypto crash
Cryptocurrency is now fully part of the financial system, for better or worse. (NY Mag $)
+ Wall Street’s grasp of AI is pretty shaky too. (Semafor)
+ Even traditionally safe markets are looking pretty volatile right now. (Economist $)

6 The man who coined vibe coding has a new fixation 
“Agentic engineering” is the next big thing, apparently. (Insider $)
+ Agentic AI is the talk of the town right now. (The Information $)
+ What is vibe coding, exactly? (MIT Technology Review)

7 AI running app Runna has adjusted its aggressive training plans 🏃‍♂️
Runners had long suspected its suggestions were pushing them towards injury. (WSJ $)

8 San Francisco’s march for billionaires was a flop 
Only around three dozen supporters turned up. (SF Chronicle)
+ Predictably, journalists nearly outnumbered the demonstrators. (TechCrunch)

9 AI is shaking up romance novels ❤
But models still aren’t great at writing sex scenes. (NYT $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

10 ChatGPT won’t be replacing human stylists any time soon
Its menswear suggestions are more manosphere influencer than suave gentleman. (GQ)

Quote of the day

“There is no Plan B, because that assumes you will fail. We’re going to do the start-up thing until we die.”

—William Alexander, an ambitious 21-year-old AI worker, explains his and his cohort’s attitudes towards trying to make it big in the highly competitive industry to the New York Times.

One more thing

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Dark showering, anyone?
+ Chef Yujia Hu is renowned for his shoe-shaped sushi designs.
+ Meanwhile, in the depths of the South Atlantic Ocean: a giant phantom jelly has been spotted.
+ I have nothing but respect for this X account dedicated to documenting rats and mice in movies and TV 🐀🐁

The Download: helping cancer survivors to give birth, and cleaning up Bangladesh’s garment industry

6 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An experimental surgery is helping cancer survivors give birth

An experimental surgical procedure is helping people have babies after they’ve had treatment for bowel or rectal cancer.

Radiation and chemo can have pretty damaging side effects that mess up the uterus and ovaries. Surgeons are pioneering a potential solution: simply stitch those organs out of the way during cancer treatment. Once the treatment has finished, they can put the uterus—along with the ovaries and fallopian tubes—back into place.

It seems to work! Last week, a team in Switzerland shared news that a baby boy had been born after his mother had the procedure. Baby Lucien was the fifth baby to be born after the surgery and the first in Europe, and since then at least three others have been born. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Bangladesh’s garment-making industry is getting greener

Pollution from textile production—dyes, chemicals, and heavy metals—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others. 

But things are starting to change. In recent years the country has become a leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. 

The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads. Read the full story.

—Zakir Hossain Chowdhury

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ICE used a private jet to deport Palestinian men to Tel Aviv 
The luxury aircraft belongs to Donald Trump’s business partner Gil Dezer. (The Guardian)
+ Trump is mentioned thousands of times in the latest Epstein files. (NY Mag $)

2 How Jeffrey Epstein kept investing in Silicon Valley
He continued to plow millions of dollars into tech ventures despite spending 13 months in jail. (NYT $)
+ The range of Epstein’s social network was staggering. (FT $)
+ Why was a picture of the Mona Lisa redacted in the Epstein files? (404 Media)

3 The risks posed by taking statins are lower than we realized
The drugs don’t cause most of the side effects they’re blamed for. (STAT)
+ Statins are a common scapegoat on social media. (Bloomberg $)

4 Russia is weaponizing the bitter winter weather
It’s focused on attacking Ukraine’s power grid. (New Yorker $)
+ How the grid can ride out winter storms. (MIT Technology Review)

5 China has a major spy-cam porn problem
Hotel guests are being livestreamed having sex to an online audience without their knowledge. (BBC)

6 Geopolitical gamblers are betting on the likelihood of war
And prediction markets are happily taking their money. (Rest of World)

7 Oyster farmers aren’t signing up to programs to ease water pollution
The once-promising projects appear to be fizzling out. (Undark)
+ The humble sea creature could hold the key to restoring coastal waters. Developers hate it. (MIT Technology Review)

8 Your next pay raise could be approved by AI
Maybe your human bosses aren’t the ones you need to impress any more. (WP $)

9 The FDA has approved a brain stimulation device for treating depression
It’s paving the way for a non-invasive, drug-free treatment for Americans. (IEEE Spectrum)
+ Here’s how personalized brain stimulation could treat depression. (MIT Technology Review)

10 Cinema-goers have had enough of AI
Movies focused on rogue AI are flopping at the box office. (Wired $)
+ Meanwhile, Republicans are taking aim at “woke” Netflix. (The Verge)

Quote of the day

“I’m all for removing illegals, but snatching dudes off lawn mowers in Cali and leaving the truck and equipment just sitting there? Definitely not working smarter.” 

—A web user in a forum for current and former ICE and border protection officers complains about the agency’s current direction, Wired reports.

One more thing

Is this the electric grid of the future?

Lincoln Electric System, a publicly owned utility in Nebraska, is used to weathering severe blizzards. But what’s coming soon—not only for Lincoln Electric but for all electric utilities—is a challenge of a different order.

Utilities must keep the lights on in the face of more extreme and more frequent storms and fires, growing risks of cyberattacks and physical disruptions, and a wildly uncertain policy and regulatory landscape. They must keep prices low amid inflationary costs. And they must adapt to an epochal change in how the grid works, as the industry attempts to transition from power generated with fossil fuels to power generated from renewable sources like solar and wind.

The electric grid is bracing for a near future characterized by disruption. And, in many ways, Lincoln Electric is an ideal lens through which to examine what’s coming. Read the full story.

—Andrew Blum

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Glamour puss alert—NYC’s bodega cats are gracing the hallowed pages of Vogue.
+ Ancient Europe was host to mysterious hidden tunnels. But why?
+ If you’re enjoying the new season of Industry, you’ll love this interview with the one and only Ken Leung.
+ The giant elephant shrew is the true star of Philly Zoo.

The Download: attempting to track AI, and the next generation of nuclear power

5 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This is the most misunderstood graph in AI

Every time OpenAI, Google, or Anthropic drops a new frontier large language model, the AI community holds its breath. It doesn’t exhale until METR, an AI research nonprofit whose name stands for “Model Evaluation & Threat Research,” updates a now-iconic graph that has played a major role in the AI discourse since it was first released in March of last year. 

The graph suggests that certain AI capabilities are developing at an exponential rate, and more recent model releases have outperformed that already impressive trend.

That was certainly the case for Claude Opus 4.5, the latest version of Anthropic’s most powerful model, which was released in late November. In December, METR announced that Opus 4.5 appeared to be capable of independently completing a task that would have taken a human about five hours—a vast improvement over what even the exponential trend would have predicted.

But the truth is more complicated than those dramatic responses would suggest. Read the full story.

—Grace Huckins

This story is part of MIT Technology Review Explains: our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Three questions about next-generation nuclear power, answered

Nuclear power continues to be one of the hottest topics in energy today, and in our recent online Roundtables discussion about next-generation nuclear power, hyperscale AI data centers, and the grid, we got dozens of great audience questions.

These ran the gamut, and while we answered quite a few (and I’m keeping some in mind for future reporting), there were a bunch we couldn’t get to, at least not in the depth I would have liked. So let’s answer a few of your questions about advanced nuclear power.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic’s new coding tools are rattling the markets 
Fields ranging from publishing and coding to law and advertising are paying attention. (FT $)
+ Legacy software companies, beware. (Insider $)
+ Is “software-mageddon” nigh? It depends who you ask. (Reuters)

2 This Apple setting prevented the FBI from accessing a reporter’s iPhone
Lockdown Mode has proved remarkably effective—for now. (404 Media)
+ Agents were able to access Hannah Natanson’s laptop, however. (Ars Technica)

3 Last month’s data center outage disrupted all TikTok categories
Not just the political content, as some users claimed. (NPR)

4 Big Tech is pouring billions into AI in India
A newly-announced 20-year tax break should help to speed things along. (WSJ $)
+ India’s female content moderators are watching hours of abuse content to train AI. (The Guardian)
+ Officials in the country are weighing up restricting social media for minors. (Bloomberg $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

5 YouTubers are harassing women using body cams
They’re abusing freedom of information laws to humiliate their targets. (NY Mag $)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

6 Jokers have created a working version of Jeffrey Epstein’s inbox
Complete with notable starred threads. (Wired $)
+ Epstein’s links with Silicon Valley are vast and deep. (Fast Company $)
+ The revelations are driving rifts between previously-friendly factions. (NBC News)

7 What’s the last thing you see before you die?
A new model might help to explain near-death experiences—but not all researchers are on board. (WP $)
+ What is death? (MIT Technology Review)

8 A new app is essentially TikTok for vibe-coded apps
Words which would have made no sense 15 years ago. (TechCrunch)
+ What is vibe coding, exactly? (MIT Technology Review)

9 Rogue TV boxes are all the rage
Viewers are sick of the soaring prices of streaming services, and are embracing less legal means of watching their favorite shows. (The Verge)

10 Climate change is threatening the future of the Winter Olympics ⛷
Artificial snow is one (short term) solution. (Bloomberg $)
+ Team USA is using AI to try and gain an edge on its competition. (NBC News)

Quote of the day

“We’ve heard from many who want nothing to do with AI.”

—Ajit Varma, head of Mozilla’s web browser Firefox, explains why the company is reversing its previous decision to transform Firefox into an “AI browser,” PC Gamer reports.

One more thing

A major AI training data set contains millions of examples of personal data

Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.

Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. 

The bottom line? Anything you put online can be and probably has been scraped. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re crazy enough to be training for a marathon right now, here’s how to beat boredom on those long, long runs.
+ Mark Cohen’s intimate street photography is a fascinating window into humanity.
+ A seriously dedicated gamer has spent days painstakingly recreating a Fallout vault inside the Sims 4.
+ Here’s what music’s most stylish men are wearing right now—from leather pants to khaki parkas.

The Download: the future of nuclear power plants, and social media-fueled AI hype

4 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why AI companies are betting on next-gen nuclear

AI is driving unprecedented investment for massive data centers and an energy supply that can support its huge computational appetite. One potential source of electricity for these facilities is next-generation nuclear power plants, which could be cheaper to construct and safer to operate than their predecessors.

We recently held a subscriber-exclusive Roundtables discussion on hyperscale AI data centers and next-gen nuclear—two featured technologies on the MIT Technology Review 10 Breakthrough Technologies of 2026 list. You can watch the conversation back here, and don’t forget to subscribe to make sure you catch future discussions as they happen.

How social media encourages the worst of AI boosterism

Demis Hassabis, CEO of Google DeepMind, summed it up in three words: “This is embarrassing.”

Hassabis was replying on X to an overexcited post by Sébastien Bubeck, a research scientist at the rival firm OpenAI, announcing that two mathematicians had used OpenAI’s latest large language model, GPT-5, to find solutions to 10 unsolved problems in mathematics.

Put your math hats on for a minute, and let’s take a look at what this beef from mid-October was about. It’s a perfect example of what’s wrong with AI right now.

—Will Douglas Heaven

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The paints, coatings, and chemicals making the world a cooler place

It’s getting harder to beat the heat. During the summer of 2025, heat waves knocked out power grids in North America, Europe, and the Middle East. Global warming means more people need air-conditioning, which requires more power and strains grids.

But a millennia-old idea (plus 21st-century tech) might offer an answer: radiative cooling. Paints, coatings, and textiles can scatter sunlight and dissipate heat—no additional energy required. Read the full story.

—Becky Ferreira

This story is from the most recent print issue of MIT Technology Review magazine, which shines a light on the exciting innovations happening right now. If you haven’t already, subscribe now to receive future issues once they land.

MIT Technology Review Narrated: China figured out how to sell EVs. Now it has to deal with their aging batteries.

As early electric cars age out, hundreds of thousands of used batteries are flooding the market, fueling a gray recycling economy even as Beijing and big manufacturers scramble to build a more orderly system.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Europe is edging closer towards banning social media for minors
Spain has become the latest country to consider it. (Bloomberg $)
+ Elon Musk called the Spanish prime minister a “tyrant” in retaliation. (The Guardian)
+ Other European nations considering restrictions include Greece, France and the UK. (Reuters)

2 Humans are infiltrating the social network for AI agents
It turns out role-playing as a bot is surprisingly fun. (Wired $)
+ Some of the most viral posts may actually be human-generated after all. (The Verge)

3 Russian spy spacecraft have intercepted Europe’s key satellites
Security officials are confident Moscow has tapped into unencrypted European comms. (FT $)

4 French authorities raided X’s Paris office
They’re investigating a range of potential charges against the company. (WSJ $)
+ Elon Musk has been summoned to give evidence in April. (Reuters)

5 Jeffrey Epstein invested millions into crypto startup Coinbase
Which suggests he was still able to take advantage of Silicon Valley investment opportunities years after pleading guilty to soliciting sex from an underage girl. (WP $)

6 A group of crypto bros paid $300,000 for a gold statue of Trump
It’s destined to be installed on his Florida golf complex, apparently. (NYT $)

7 OpenAI has appointed a “head of preparedness”
Dylan Scandinaro will earn a cool $555,000 for his troubles. (Bloomberg $)

8 The eternal promise of 3D-printed batteries
Traditional batteries are blocky and bulky. Printing them ourselves could help solve that. (IEEE Spectrum)

9 What snow can teach us about city design
When icy mounds refuse to melt, they show us what a less car-focused city could look like. (New Yorker $)
+ This startup thinks slime mold can help us design better cities. (MIT Technology Review)

10 Please don’t use AI to talk to your friends
That’s what your brain is for. (The Atlantic $)
+ Therapists are secretly using ChatGPT. Clients are triggered. (MIT Technology Review)

Quote of the day

“Today, our children are exposed to a space they were never meant to navigate alone. We will no longer accept that.”

—Spanish prime minister Pedro Sánchez proposes a social media ban for children aged under 16 in the country, following in Australia’s footsteps, AP News reports.

One more thing

A brain implant changed her life. Then it was removed against her will.

Sticking an electrode inside a person’s brain can do more than treat a disease. Take the case of Rita Leggett, an Australian woman whose experimental brain implant designed to help people with epilepsy changed her sense of agency and self.

Leggett told researchers that she “became one” with her device. It helped her to control the unpredictable, violent seizures she routinely experienced, and allowed her to take charge of her own life. So she was devastated when, two years later, she was told she had to have the implant removed because the company that made it had gone bust.

The removal of this implant, and others like it, might represent a breach of human rights, ethicists say in a paper published earlier this month. And the issue will only become more pressing as the brain implant market grows in the coming years and more people receive devices like Leggett’s. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Why Beethoven’s Ode to Joy is still such an undisputed banger.
+ Did you know that one of the world’s most famous prisons actually served as a zoo and menagerie for over 600 years?
+ Banana nut muffins sound like a fantastic way to start your day.
+ 2026 is shaping up to be a blockbuster year for horror films.

The Download: squeezing more metal out of aging mines, and AI’s truth crisis

3 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Microbes could extract the metal needed for cleantech

In a pine forest on Michigan’s Upper Peninsula, the only active nickel mine in the US is nearing the end of its life. At a time when carmakers want the metal for electric-vehicle batteries, nickel concentration at Eagle Mine is falling and could soon drop too low to warrant digging.

Demand for nickel, copper, and rare earth elements is rapidly increasing amid the explosive growth of metal-intensive data centers, electric cars, and renewable energy projects. But producing these metals is becoming harder and more expensive because miners have already exploited the best resources. Here’s how biotechnology could help.

—Matt Blois

What we’ve been getting wrong about AI’s truth crisis

—James O’Donnell

What would it take to convince you that the era of truth decay we were long warned about—where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process—is now here?

A story I published last week pushed me over the edge. And it also made me realize that the tools we were sold as a cure for this crisis are failing miserably. Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

TR10: Hyperscale AI data centers

In sprawling stretches of farmland and industrial parks, supersized buildings packed with racks of computers are springing up to fuel the AI race.

These engineering marvels are a new species of infrastructure: supercomputers designed to train and run large language models at mind-bending scale, complete with their own specialized chips, cooling systems, and even energy supplies. But all that impressive computing power comes at a cost.

Read why we’ve named hyperscale AI data centers as one of our 10 Breakthrough Technologies this year, and check out the rest of the list.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk’s SpaceX has acquired xAI
The deal values the combined companies at a cool $1.25 trillion. (WSJ $)
+ It also paves the way for SpaceX to offer an IPO later this year. (WP $)
+ Meanwhile, OpenAI has accused xAI of destroying legal evidence. (Bloomberg $)

2 NASA has delayed the launch of Artemis II
It’s been pushed back to March due to the discovery of a hydrogen leak. (Ars Technica)
+ The rocket’s predecessor was also plagued by fuel leaks. (Scientific American)

3 Russia is hiring a guerrilla youth army online
They’re committing arson and spying on targets across Europe. (New Yorker $)

4 Grok is still generating undressed images of men
Weeks after the backlash over it doing the same to women. (The Verge)
+ How Grok descended into becoming a porn generator. (WP $)
+ Inside the marketplace powering bespoke AI deepfakes of real women. (MIT Technology Review)

5 OpenAI is searching for alternatives to Nvidia’s chips
It’s reported to be unhappy about the speed at which they power ChatGPT. (Reuters)

6 The latest attempt to study a notoriously unstable glacier has failed
Scientists lost their equipment within Antarctica’s Thwaites Glacier over the weekend. (NYT $)
+ Inside a new quest to save the “doomsday glacier” (MIT Technology Review)

7 The world is trying to wean itself off American technology
Governments are growing increasingly uneasy about their reliance on the US. (Rest of World)

8 AI’s sloppy writing is driving demand for real human writers
Long may it continue. (Insider $)

9 This female-dominated fitness community hates Mark Zuckerberg
His decision to shut down three VR studios means their days of playing their favorite workout game are numbered. (The Verge)
+ Welcome to the AI gym staffed by virtual trainers. (MIT Technology Review)

10 This cemetery has an eco-friendly solution for its overcrowding problem
If you’re okay with your loved one becoming gardening soil, that is. (WSJ $)
+ Why America is embracing the right to die now. (Economist $)
+ What happens when you donate your body to science. (MIT Technology Review)

Quote of the day

“In the long term, space-based AI is obviously the only way to scale…I mean, space is called ‘space’ for a reason.”

—Elon Musk explains his rationale for combining SpaceX with xAI in a blog post.

One more thing

On the ground in Ukraine’s largest Starlink repair shop

Starlink is absolutely critical to Ukraine’s ability to continue in the fight against Russia. It’s how troops in battle zones stay connected with faraway HQs; it’s how many of the drones essential to Ukraine’s survival hit their targets; it’s even how soldiers stay in touch with spouses and children back home.

However, Donald Trump’s fickle foreign policy and reports suggesting Elon Musk might remove Ukraine’s access to the services have cast the technology’s future in the country into doubt.

For now Starlink access largely comes down to the unofficial community of users and engineers, including the expert “Dr. Starlink”—famous for his creative ways of customizing the systems—who have kept Ukraine in the fight, both on and off the front line. He gave MIT Technology Review exclusive access to his unofficial Starlink repair workshop in the city of Lviv. Read the full story.

—Charlie Metcalfe

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Norwegian countryside sure looks beautiful.
+ Quick—it’s time to visit these food destinations before the TikTok hordes descend.
+ Rest in power Catherine O’Hara, our favorite comedy queen.
+ Take some time out of your busy day to read a potted history of boats 🚣

The Download: inside a deepfake marketplace, and EV batteries’ future

2 February 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the marketplace powering bespoke AI deepfakes of real women

Civitai—an online marketplace for buying and selling AI-generated content, backed by the venture capital firm Andreessen Horowitz—is letting users buy custom instruction files for generating celebrity deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found.

The study, from researchers at Stanford and Indiana University, looked at people’s requests for content on the site, called “bounties.” The researchers found that between mid-2023 and the end of 2024, most bounties asked for animated content—but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women. Read the full story.

—James O’Donnell

What’s next for EV batteries in 2026

Demand for electric vehicles and the batteries that power them has never been hotter.

In 2025, EVs made up over a quarter of new vehicle sales globally, up from less than 5% in 2020. Some regions are seeing even higher uptake: In China, more than 50% of new vehicle sales last year were battery electric or plug-in hybrids. In Europe, more purely electric vehicles hit the roads in December than gas-powered ones. (The US is the notable exception here, dragging down the global average with a small sales decline from 2024.)

As EVs become increasingly common on the roads, the battery world is growing too. Here’s what’s coming next for EV batteries in 2026 and beyond.

—Casey Crownhart

This story is part of MIT Technology Review’s What’s Next series, which examines industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

TR10: Base-edited baby

Kyle “KJ” Muldoon Jr. was born with a rare, potentially fatal genetic disorder that left his body unable to remove toxic ammonia from his blood. The University of Pennsylvania offered his parents an alternative to a liver transplant: gene-editing therapies.

The team set to work developing a tailored treatment using base editing—a form of CRISPR that can correct genetic “misspellings” by changing single bases, the basic units of DNA. KJ received an initial low dose when he was seven months old, and later received two higher doses. Today, KJ is doing well. At an event in October last year, his happy parents described how he was meeting all his developmental milestones.

Others have received gene-editing therapies intended to treat conditions including sickle cell disease and a predisposition to high cholesterol. But KJ was the first to receive a personalized treatment—one that was designed just for him and will probably never be used again. Read why we made it one of our 10 Breakthrough Technologies this year, and check out the rest of the list.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A social network for AI agents is vulnerable to abuse
A misconfiguration meant anyone could take control of any agent. (404 Media)
+ Moltbook is loosely modeled on Reddit, but humans are unable to post. (FT $)

2 Google breached its own ethics rules to help an Israeli contractor
It helped a military worker to analyze drone footage, a whistleblower has claimed. (WP $)

3 Capgemini is selling its unit linked to ICE
After the French government asked it to clarify its work for the agency. (Bloomberg $) 
+ The company has signed $12.2 million in contracts under the Trump administration. (FT $)
+ Here’s how to film ICE activities as safely as possible. (Wired $)

4 China has a plan to prime its next generation of AI experts
Thanks to its elite genius class system. (FT $)
+ The country is going all-in on AI healthcare. (Rest of World)
+ The State of AI: Is China about to win the race? (MIT Technology Review)

5 Indonesia has reversed its ban on xAI’s Grok
After it announced plans to improve its compliance with the country’s laws. (Reuters)
+ Indonesia maintains a strict stance against pornographic content. (NYT $)
+ Malaysia and the Philippines have also lifted bans on the chatbot. (TechCrunch)

6 Don’t expect to hitch a ride on a Blue Origin rocket anytime soon
Jeff Bezos’ venture won’t be taking tourists into space for at least two years. (NYT $)
+ Artemis II astronauts are due to set off for the moon soon. (IEEE Spectrum)
+ Commercial space stations are on our list of 10 Breakthrough Technologies for 2026. (MIT Technology Review)

7 America’s push for high-speed internet is under threat
There aren’t enough skilled workers to meet record demand. (WSJ $)

8 Can AI help us grieve better?
A growing cluster of companies are trying to find out. (The Atlantic $)
+ Technology that lets us “speak” to our dead relatives has arrived. Are we ready? (MIT Technology Review)

9 How to fight future insect infestations 🍄
A certain species of fungus could play a key role. (Ars Technica)
+ How do fungi communicate? (MIT Technology Review)

10 What a robot-made latte tastes like, according to a former barista
Damn fine, apparently. (The Verge)

Quote of the day

“It feels like a wild bison rampaging around in my computer.”

—A user who signed up to AI agent Moltbot remarks on the bot’s unpredictable behavior, Rest of World reports.

One more thing

How Wi-Fi sensing became usable tech

Wi-Fi sensing is a tantalizing concept: that the same routers bringing you the internet could also detect your movements. But, as a way to monitor health, it’s mostly been eclipsed by other technologies, like ultra-wideband radar. 

Despite that, Wi-Fi sensing hasn’t gone away. Instead, it has quietly become available in millions of homes, supported by leading internet service providers, smart-home companies, and chip manufacturers.

Soon it could be invisibly monitoring our day-to-day movements for all sorts of surprising—and sometimes alarming—purposes. Read the full story.

—Meg Duff

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These intrepid Scottish bakers created the largest ever Empire biscuit (a classic shortbread cookie covered in icing) 🍪
+ My, what big tentacles you have!
+ If you’ve been feeling like you’re stuck in a rut lately, this advice could be exactly what you need to overcome it.
+ These works of psychedelic horror are guaranteed to send a shiver down your spine.

The Download: US immigration agencies’ AI videos, and inside the Vitalism movement

30 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

DHS is using Google and Adobe AI to make videos

The news: The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity.

Why it matters: It comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI—and as workers in tech have put pressure on their employers to denounce the agencies’ activities. Read the full story.

—James O’Donnell

How the sometimes-weird world of lifespan extension is gaining influence

—Jessica Hamzelou

For the last couple of years, I’ve been following the progress of a group of individuals who believe death is humanity’s “core problem.” Put simply, they say death is wrong—for everyone. They’ve even said it’s morally wrong.

They established what they consider a new philosophy, and they called it Vitalism.

Vitalism is more than a philosophy, though—it’s a movement for hardcore longevity enthusiasts who want to make real progress in finding treatments that slow or reverse aging. Not just through scientific advances, but by persuading influential people to support their movement, and by changing laws and policies to open up access to experimental drugs. And they’re starting to make progress.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The AI Hype Index: Grok makes porn, and Claude Code nails your job

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Capgemini is no longer tracking immigrants for ICE
After the French government queried the company over the contract. (WP $)
+ Here’s how the agency typically keeps tabs on its targets. (NYT $)
+ US senators are pushing for answers about its recent surveillance shopping spree. (404 Media)
+ ICE’s tactics would get real soldiers killed, apparently. (Wired $)

2 The Pentagon is at loggerheads with Anthropic
The AI firm is reportedly worried its tools could be used to spy on Americans. (Reuters)
+ Generative AI is learning to spy for the US military. (MIT Technology Review)

3 It’s relatively rare for AI chatbots to lead users down harmful paths
But when they do, the consequences can be incredibly dangerous. (Ars Technica)
+ The AI doomers feel undeterred. (MIT Technology Review)

4 GPT-4o’s days are numbered
OpenAI says just 0.1% of users are using the model every day. (CNBC)
+ It’s the second time in under a year that OpenAI has tried to switch the sycophantic model off. (Insider $)
+ Why GPT-4o’s sudden shutdown left people grieving. (MIT Technology Review)

5 An AI toy company left its chats with kids exposed
Anyone with a Gmail account was able to simply access the conversations—no hacking required. (Wired $)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

6 SpaceX could merge with xAI later this year
Ahead of a planned blockbuster IPO of Elon Musk’s companies. (Reuters)
+ The move would be welcome news for Musk fans. (The Information $)
+ A SpaceX-Tesla merger could also be on the cards. (Bloomberg $)

7 We’re still waiting for a reliable male contraceptive
Take a look at the most promising methods so far. (Bloomberg $)

8 AI is bringing traditional Chinese medicine to the masses
And it’s got the full backing of the country’s government. (Rest of World)

9 The race back to the Moon is heating up 
Competition between the US and China is more intense than ever. (Economist $)

10 What did the past really smell like?
AI could help scientists recreate history’s aromas—including those of mummies and battlefields. (Knowable Magazine)

Quote of the day

“I think the tidal wave is coming and we’re all standing on the beach.”

—Bill Zysblat, a music business manager, tells the Financial Times about the existential threat AI poses to the industry. 

One more thing

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

For the rest of the session, Declan was privy to a real-time stream of ChatGPT analysis rippling across the screen as his therapist took what Declan was saying, put it into ChatGPT, and then parroted its answers.

But Declan is not alone. In fact, a growing number of people are reporting receiving AI-generated communiqués from their therapists. Clients’ trust and privacy are being abandoned in the process. Read the full story.

—Laurie Clarke

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Sinkholes are seriously mysterious. Is there a way to stay one step ahead of them?
+ This beautiful pixel art is super impressive.
+ Amid the upheaval in their city, residents of Minneapolis recently demonstrated both their resistance and community spirit in the annual Art Sled Rally (thanks Paul!)
+ How on Earth is Tomb Raider 30 years old?!

The Download: inside the Vitalism movement, and why AI’s “memory” is a privacy problem

29 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the Vitalists: the hardcore longevity enthusiasts who believe death is “wrong”

Last April, an excited crowd gathered at a compound in Berkeley, California, for a three-day event called the Vitalist Bay Summit. It was part of a longer, two-month residency that hosted various events to explore tools—from drug regulation to cryonics—that might be deployed in the fight against death.

One of the main goals, though, was to spread the word of Vitalism, a somewhat radical movement established by Nathan Cheng and his colleague Adam Gries a few years ago. Consider it longevity for the most hardcore adherents—a sweeping mission to which nothing short of total devotion will do.

Although interest in longevity has certainly taken off in recent years, not everyone in the broader longevity space shares Vitalists’ commitment to actually making death obsolete. But the Vitalists feel that momentum is building, not just for the science of aging and the development of lifespan-extending therapies, but for the acceptance of their philosophy that defeating death should be humanity’s top concern. Read the full story.

—Jessica Hamzelou

This is the latest in our Big Story series, the home for MIT Technology Review’s most important, ambitious reporting. You can read the rest of the series here.

What AI “remembers” about you is privacy’s next frontier

—Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, & Ruchika Joshi, fellow at the Center for Democracy & Technology specializing in AI safety and governance

The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.

Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes.

But their ability to store and retrieve increasingly intimate details about their users over time introduces alarming, and all-too-familiar, privacy vulnerabilities—many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities. So what can developers do to fix this problem? Read the full story.

How the grid can ride out winter storms

The eastern half of the US saw a monster snowstorm over the weekend. The good news is the grid has largely been able to keep up with the freezing temperatures and increased demand. But there were some signs of strain, particularly for fossil-fuel plants.

One analysis found that PJM, the nation’s largest grid operator, saw significant unplanned outages in plants that run on natural gas and coal. Historically, these facilities can struggle in extreme winter weather.

Much of the country continues to face record-low temperatures, and the possibility is looming for even more snow this weekend. What lessons can we take from this storm, and how might we shore up the grid to cope with extreme weather? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Telegram has been flooded with deepfake nudes 
Millions of users are creating and sharing falsified images in dedicated channels. (The Guardian)

2 China has executed 11 people linked to Myanmar scam centers
The members of the “Ming family criminal gang” caused the deaths of at least 14 Chinese citizens. (Bloomberg $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

3 This viral personal AI assistant is a major privacy concern
Security researchers are sounding the alarm on Moltbot, formerly known as Clawdbot. (The Register)
+ It requires a great deal more technical know-how than most agentic bots. (TechCrunch)

4 OpenAI has a plan to keep bots off its future social network
It’s putting its faith in biometric “proof of personhood” promised by the likes of World’s eyeball-scanning orb. (Forbes)
+ We reported on how World recruited its first half a million test users back in 2022. (MIT Technology Review)

5 Here are just some of the technologies ICE is deploying
From facial recognition to digital forensics. (WP $)
+ Agents are also using Palantir’s AI to sift through tip-offs. (Wired $)

6 Tesla is axing its Model S and Model X cars 🚗
Its Fremont factory will switch to making Optimus robots instead. (TechCrunch)
+ It’s the latest stage of the company’s pivot to AI… (FT $)
+ …as profit falls by 46%. (Ars Technica)
+ Tesla is still struggling to recover from the damage of Elon Musk’s political involvement. (WP $)

7 X is rife with weather influencers spreading misinformation
They’re whipping up hype ahead of massive storms hitting. (New Yorker $)

8 Retailers are going all-in on AI
But giants like Amazon and Walmart are taking very different approaches. (FT $)
+ Mark Zuckerberg has hinted that Meta is working on agentic commerce tools. (TechCrunch)
+ We called it—what’s next for AI in 2026. (MIT Technology Review)

9 Inside the rise of the offline hangout
No phones, no problem. (Wired $)

10 Social media is obsessed with 2016
…why, exactly? (WSJ $)

Quote of the day

“The amount of crap I get for putting out a hobby project for free is quite something.”

—Peter Steinberger, the creator of the viral AI agent Moltbot, complains in a post on X about the backlash his project has received from security researchers pointing out its flaws.

One more thing

The flawed logic of rushing out extreme climate solutions

Early in 2022, entrepreneur Luke Iseman says, he released a pair of sulfur dioxide–filled weather balloons from Mexico’s Baja California peninsula, in the hope that they’d burst miles above Earth.

It was a trivial act in itself, effectively a tiny, DIY act of solar geoengineering, the controversial proposal that the world could counteract climate change by releasing particles that reflect more sunlight back into space.

Entrepreneurs like Iseman invoke the stark dangers of climate change to explain why they do what they do—even if they don’t know how effective their interventions are. But experts say that urgency doesn’t create a social license to ignore the underlying dangers or leapfrog the scientific process. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The hottest thing in art right now? Vertical paintings.
+ There’s something in the water around Monterey Bay—a tail-walking dolphin!
+ Fed up with hairstylists not listening to you? Remember these handy tips the next time you go for a cut.
+ Get me a one-way ticket to Japan’s tastiest island.

The Download: A bid to treat blindness, and bridging the internet divide

28 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The first human test of a rejuvenation method will begin “shortly”

Life Biosciences, a small Boston startup founded by Harvard professor and life-extension evangelist David Sinclair, has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers.

The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech. Read the full story.

—Antonio Regalado

Stratospheric internet could finally start taking off this year

Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery.

Although Google shuttered its high-profile internet balloon project Loon in 2021, work on other kinds of high-altitude platform stations has continued behind the scenes. Now, several companies claim they have solved Loon’s problems—and are getting ready to prove the tech’s internet beaming potential starting this year. Read the full story.

—Tereza Pultarova

OpenAI’s latest product lets you vibe code science

OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers.

The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science. Read the full story.

—Will Douglas Heaven

MIT Technology Review Narrated: This Nobel Prize–winning chemist dreams of making water from thin air

Most of Earth is covered in water, but just 3% of it is fresh, with no salt—the kind of water all terrestrial living things need. Today, desalination plants that take the salt out of seawater provide the bulk of potable water in technologically advanced desert nations like Israel and the United Arab Emirates, but at a high cost.

Omar Yaghi is one of three scientists who won a Nobel Prize in chemistry in October 2025 for identifying metal-organic frameworks, or MOFs—metal ions tethered to organic molecules that form repeating structural landscapes. Today that work is the basis for a new project that sounds like science fiction, or a miracle: conjuring water out of thin air.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok has settled its social media addiction lawsuit
Just before it was due to appear before a jury in California. (NYT $)
+ But similar claims being made against Meta and YouTube will proceed. (Bloomberg $)

2 AI CEOs have started condemning ICE violence
While simultaneously praising Trump. (TechCrunch)
+ Apple’s Tim Cook says he asked the US President to “deescalate” things. (Bloomberg $)
+ ICE seems to have a laissez-faire approach to preserving surveillance footage. (404 Media)

3 Dozens of CDC vaccination databases have been frozen
They’re no longer being updated with crucial health information under RFK Jr. (Ars Technica)
+ Here’s why we don’t have a cold vaccine. Yet. (MIT Technology Review)

4 China has approved the first wave of Nvidia H200 chips
After CEO Jensen Huang’s strategic visit to the country. (Reuters)

5 Inside the rise of the AI “neolab”
They’re prioritizing longer term research breakthroughs over immediate profits. (WSJ $)

6 How Anthropic scanned—and disposed of—millions of books 📚
In an effort to train its AI models to write higher quality text. (WP $)

7 India’s tech workers are burning out
They’re under immense pressure as AI gobbles up more jobs. (Rest of World)
+ But the country’s largest IT firm denies that AI will lead to mass layoffs. (FT $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

8 Google has forced a UK group to stop comparing YouTube to TV viewing figures
Maybe fewer people are tuning in than they’d like to admit? (FT $)

9 RIP Amazon grocery stores 🛒
The retail giant is shuttering all of its bricks-and-mortar shops. (CNN)
+ Amazon workers are increasingly worried about layoffs. (Insider $)

10 This computing technique could help to reduce AI’s energy demands
Enter thermodynamic computing. (IEEE Spectrum)
+ Three big things we still don’t know about AI’s energy burden. (MIT Technology Review)

Quote of the day

“Oh my gosh y’all, IG is a drug.”

—An anonymous Meta employee remarks on Instagram’s addictive qualities in an internal document made public as part of a social media addiction trial Meta is facing, Ars Technica reports.

One more thing

How AI and Wikipedia have sent vulnerable languages into a doom spiral

Wikipedia is the most ambitious multilingual project after the Bible: There are editions in over 340 languages, and a further 400 even more obscure ones are being developed. But many of these smaller editions are being swamped with AI-translated content. Volunteers working on four African languages, for instance, estimated to MIT Technology Review that between 40% and 60% of articles in their Wikipedia editions were uncorrected machine translations.

This is beginning to cause a wicked problem. AI systems learn new languages by scraping huge quantities of text from the internet. Wikipedia is sometimes the largest source of online linguistic data for languages with few speakers—so any errors on those pages can poison the wells that AI is expected to draw from. Volunteers are being forced to go to extreme lengths to fix the issue, even deleting certain languages from Wikipedia entirely. Read the full story.

—Jacob Judah

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This singing group for people in Amsterdam experiencing cognitive decline is enormously heartwarming ($)
+ I enjoyed this impassioned defense of the movie sex scene.
+ Here’s how to dress like Steve McQueen (inherent cool not included, sorry)
+ Trans women are finding a home in the beautiful Italian town of Torvajanica ❤

The Download: OpenAI’s plans for science, and chatbot age verification

27 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside OpenAI’s big play for science 

—Will Douglas Heaven

In the three years since ChatGPT’s explosive debut, OpenAI’s technology has upended a remarkable range of everyday activities at home, at work, and in schools.

Now OpenAI is making an explicit play for scientists. In October, the firm announced that it had launched a whole new team, called OpenAI for Science, dedicated to exploring how its large language models could help scientists and tweaking its tools to support them.

So why now? How does a push into science fit with OpenAI’s wider mission? And what exactly is the firm hoping to achieve? I put these questions to Kevin Weil, a vice president at OpenAI who leads the new OpenAI for Science team, in an exclusive interview. Read the full story.

Why chatbots are starting to check your age

How do tech companies check if their users are kids?

This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years, Big Tech companies asked for birthdays (which users could easily fake) to avoid violating child privacy laws, but they weren’t required to moderate content accordingly.

Now, two developments over the last week show how quickly things are changing in the US and how this issue is becoming a new battleground, even among parents and child-safety advocates. Read the full story.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

TR10: Commercial space stations

Humans have long dreamed of living among the stars, and for two decades hundreds of us have done so aboard the International Space Station (ISS). But a new era is about to begin in which private companies operate orbital outposts—with the promise of much greater access to space than before.

The ISS is aging and is expected to be brought down from orbit into the ocean in 2031. To replace it, NASA has awarded more than $500 million to several companies to develop private space stations, while others have built versions on their own. Read why we made them one of our 10 Breakthrough Technologies this year, and check out the rest of the list.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Tech workers are pressuring their bosses to condemn ICE 
The biggest companies and their leaders have remained largely silent so far. (Axios)
+ Hundreds of employees have signed an anti-ICE letter. (NYT $)
+ Formerly politically-neutral online spaces have become battlegrounds. (WP $)

2 The US Department of Transport plans to use AI to write new safety rules
Please don’t do this. (ProPublica)
+ Failure to catch any errors could lead to civilian deaths. (Ars Technica)

3 The FBI is investigating Minnesota Signal chats tracking federal agents
But free speech advocates claim the information was legally obtained. (NBC News)
+ A judge has ordered a briefing on whether Minnesota is being illegally punished. (Wired $)

4 TikTok users claim they’re unable to send “Epstein” in direct messages
But the company says it doesn’t know why. (NPR)
+ Users are also experiencing difficulty uploading anti-ICE videos. (CNN)
+ TikTok’s first weekend under US ownership hasn’t gone well. (The Verge)
+ Gavin Newsom wants to probe whether TikTok is censoring Trump-critical content. (Politico)

5 Grok is not safe for children or teens
That’s the finding of a new report digging into the chatbot’s safety measures. (TechCrunch)
+ The EU is investigating whether it disseminates illegal content, too. (Reuters)

6 The US is on the verge of losing its measles-free status
Following a year of extensive outbreaks. (Undark)
+ Measles is surging in the US. Wastewater tracking could help. (MIT Technology Review)

7 Georgia has become the latest US state to consider banning data centers
Joining Maryland and Oklahoma in taking that stance. (The Guardian)
+ Data centers are amazing. Everyone hates them. (MIT Technology Review)

8 The future of Saudi Arabia’s futuristic city is in peril
The Line was supposed to house 9 million people. Instead, it could become a data center hub. (FT $)
+ We got an exclusive first look at it back in 2022. (MIT Technology Review)

9 Where do Earth’s lighter elements go? 🌍
New research suggests they might be hiding deep inside its core. (Knowable Magazine)

10 AI-generated influencers are getting increasingly surreal
Featuring virtual conjoined twins and triple-breasted women. (404 Media)
+ Why ‘nudifying’ tech is getting steadily more dangerous. (Wired $)

Quote of the day

“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”

—Anthropic CEO Dario Amodei sounds the alarm about what he sees as the imminent dangers of AI superintelligence in a new 38-page essay, Axios reports.

One more thing

Why one developer won’t quit fighting to connect the US’s grids

Michael Skelly hasn’t learned to take no for an answer. For much of the last 15 years, the energy entrepreneur has worked to develop long-haul transmission lines to carry wind power across the Great Plains, Midwest, and Southwest. But so far, he has little to show for the effort.

Skelly has long argued that building such lines and linking together the nation’s grids would accelerate the shift from coal- and natural-gas-fueled power plants to the renewables needed to cut the pollution driving climate change. But his previous business shut down in 2019, after halting two of its projects and selling off interests in three more.

Skelly contends he was early, not wrong. And he has a point: markets and policymakers are increasingly coming around to his perspective. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Cats on the cover of the New Yorker! Need I say more?
+ Here’s how to know when you truly love someone.
+ This orphaned baby seal is just too cute.
+ I always had a sneaky suspicion that Depeche Mode and the Cure make for perfect bedfellows.

The Download: why LLMs are like aliens, and the future of head transplants

26 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the new biologists treating LLMs like aliens  

How large is a large language model? We now coexist with machines so vast and so complicated that nobody quite understands what they are, how they work, or what they can really do—not even the people who build them.

That’s a problem. Even though nobody fully understands how it works—and thus exactly what its limitations might be—hundreds of millions of people now use this technology every day. 

To help overcome our ignorance, researchers are studying LLMs as if they were doing biology or neuroscience on vast living creatures—city-size xenomorphs that have appeared in our midst. And they’re discovering that large language models are even weirder than they thought. Read the full story.

—Will Douglas Heaven

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

And mechanistic interpretability, the technique these researchers are using to try and understand AI models, is one of our 10 Breakthrough Technologies for 2026. Check out the rest of the list here!

Job titles of the future: Head-transplant surgeon

The Italian neurosurgeon Sergio Canavero has been preparing for a surgery that might never happen. His idea? Swap a sick person’s head—or perhaps just the brain—onto a younger, healthier body.

Canavero caused a stir in 2017 when he announced that a team he advised in China had exchanged heads between two corpses. But he never convinced skeptics that his technique could succeed—or that a procedure on a live person was imminent, as he claimed.

Canavero may have withdrawn from the spotlight, but the idea of head transplants isn’t going away. Instead, he says, the concept has recently been getting a fresh look from life-extension enthusiasts and stealth Silicon Valley startups. Read the full story.

—Antonio Regalado

This story is from the latest print issue of MIT Technology Review magazine, which is all about exciting innovations. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech is facing multiple high-profile social media addiction lawsuits 
Meta, TikTok and YouTube will face parents’ accusations in court this week. (WP $)
+ It’s the first time they’re defending against these claims before a jury in a court of law. (CNN)

2 Power prices are surging in the world’s largest data center hub
Virginia is struggling to meet record demand during a winter storm, partly because of the centers’ electricity use. (Reuters)
+ Why these kinds of violent storms are getting harder to forecast. (Vox)
+ AI is changing the grid. Could it help more than it harms? (MIT Technology Review)

3 TikTok has started collecting even more data on its users
Including precise information about their location. (Wired $)

4 ICE-watching groups are successfully fighting DHS efforts to unmask them
An anonymous account holder sued to block ICE from identifying them—and won. (Ars Technica)

5 A new wave of AI companies want to use AI to make AI better
The AI ouroboros is never-ending. (NYT $)
+ Is AI really capable of making bona fide scientific advancements? (Undark)
+ AI trained on AI garbage spits out AI garbage. (MIT Technology Review)

6 Iran is testing a two-tier internet
Meaning its current blackout could become permanent. (Rest of World)

7 Don’t believe the humanoid robot hype
Even a leading robot maker admits that at best, they’re only half as efficient as humans. (FT $)
+ Tesla wants to put its Optimus bipedal machine to work in its Austin factory. (Insider)
+ Why the humanoid workforce is running late. (MIT Technology Review)

8 AI is changing how manufacturers create new products
Including thinner chewing gum containers and new body wash odors. (WSJ $)
+ AI could make better beer. Here’s how. (MIT Technology Review)

9 New Jersey has had enough of e-bikes 🚲
But will other US states follow its lead? (The Verge)

10 Sci-fi writers are cracking down on AI
Human-produced works only, please. (TechCrunch)
+ San Diego Comic-Con was previously a safe space for AI-generated art. (404 Media)
+ Generative AI is reshaping South Korea’s webcomics industry. (MIT Technology Review)

Quote of the day

“Choosing American digital technology by default is too easy and must stop.”

—Nicolas Dufourcq, head of French state-owned investment bank Bpifrance, makes his case for why Big European companies should use European-made software as tensions with the US rise, the Wall Street Journal reports.

One more thing

The return of pneumatic tubes

Pneumatic tubes were once touted as something that would revolutionize the world. In science fiction, they were envisioned as a fundamental part of the future—even in dystopias like George Orwell’s 1984, where they help to deliver orders for the main character, Winston Smith, in his job rewriting history to fit the ruling party’s changing narrative.

In real life, the tubes were expected to transform several industries in the late 19th century through the mid-20th. For a while, the United States took up the systems with gusto.

But by the mid to late 20th century, use of the technology had largely fallen by the wayside, and pneumatic tubes became virtually obsolete. Except in hospitals. Read the full story.

—Vanessa Armstrong

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You really can’t beat the humble jacket potato for a cheap, comforting meal. 
+ These tips might help you whenever anxiety strikes. ($)
+ There are some amazing photos in this year’s Capturing Ecology awards.
+ You can benefit from meditation any time, anywhere. Give it a go!

The Download: the case for AI slop, and helping CRISPR fulfill its promise

9 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How I learned to stop worrying and love AI slop

—Caiwei Chen

If I were to locate the moment AI slop broke through into popular consciousness, I’d pick the video of rabbits bouncing on a trampoline that went viral last summer. For many savvy internet users, myself included, it was the first time we were fooled by an AI video, and it ended up spawning a wave of almost identical generated clips.

My first reaction was that, broadly speaking, all of this sucked. That’s become a familiar refrain, in think pieces and at dinner parties. Everything online is slop now—the internet “enshittified,” with AI taking much of the blame. Initially, I largely agreed. But then friends started sharing AI clips in group chats that were compellingly weird, or funny. Some even had a grain of brilliance. 

I had to admit I didn’t fully understand what I was rejecting—what I found so objectionable. To try to get to the bottom of how I felt (and why), I spoke to the people making the videos, a company creating bespoke tools for creators, and experts who study how new media becomes culture. What I found convinced me that maybe generative AI will not end up ruining everything after all. Read the full story.

A new CRISPR startup is betting regulators will ease up on gene-editing

Here at MIT Technology Review we’ve been writing about the gene-editing technology CRISPR since 2013, calling it the biggest biotech breakthrough of the century. Yet so far, there’s been only one gene-editing drug approved, and it’s been used commercially on only about 40 patients, all with sickle-cell disease.

It’s becoming clear that the impact of CRISPR isn’t as big as we all hoped. In fact, there’s a pall of discouragement over the entire field—with some journalists saying the gene-editing revolution has “lost its mojo.”

So what will it take for CRISPR to help more people? A new startup says the answer could be an “umbrella approach” to testing and commercializing treatments, which could avoid costly new trials or approvals for every new version. Read the full story.

—Antonio Regalado

America’s new dietary guidelines ignore decades of scientific research

The first days of 2026 have brought big news for health. On Wednesday, health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir.

That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease, and that nutrition experts have been recommending people limit in their diets.

These guidelines are a big deal—they influence food assistance programs and school lunches, for example. Let’s take a look at the good, the bad, and the ugly advice being dished up to Americans by their government.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Grok has switched off its image-generating function for most users
Following a global backlash to its sexualized pictures of women and children. (The Guardian)
+ Elon Musk has previously lamented the “guardrails” around the chatbot. (CNN)
+ xAI has been burning through cash lately. (Bloomberg $)

2 Online sleuths tried to use AI to unmask the ICE agent who killed a woman
The problem is, its results are far from reliable. (WP $)
+ The Trump administration is pushing videos of the incident filmed from a specific angle. (The Verge)
+ Minneapolis is struggling to make sense of the shooting of Renee Nicole Good. (WSJ $)

3 Smartphones and PCs are about to get more expensive
You can thank the memory chip shortage sparked by the AI data center boom. (FT $)
+ Expect delays alongside those price rises, too. (Economist $)

4 NASA is bringing four of the seven ISS crew members back to Earth
It’s not clear exactly why, but NASA said one of them experienced a “medical situation” earlier this week. (Ars Technica)

5 The vast majority of humanoid robots shipped last year were from China
The country is dominating early supply for the bipedal machines. (Bloomberg $)
+ Why a Chinese robot vacuum firm is moving into EVs. (Wired $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

6 New Jersey has banned students’ phones in schools
It’s the latest in a long line of states to restrict devices during school hours. (NYT $)

7 Are AI coding assistants getting worse?
This data scientist certainly seems to think so. (IEEE Spectrum)
+ AI coding is now everywhere. But not everyone is convinced. (MIT Technology Review)

8 How to save wine from wildfires 🍇
Smoke leaves the alcohol with an ashy taste, but a group of scientists are working on a solution. (New Yorker $)

9 Celebrity Letterboxd accounts are good fun
Unsurprisingly, a subset of web users have chosen to hound them. (NY Mag $)

10 Craigslist refuses to die
The old-school classifieds corner of the web still has a legion of diehard fans. (Wired $)

Quote of the day

“Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. The harms are rippling out.”

—Ngaire Alexander, head of the Internet Watch Foundation’s reporting hotline, explains the dangers around low-moderation AI tools like Grok to the Wall Street Journal.

One more thing

How to measure the returns on R&D spending

Given the draconian cuts to US federal funding for science, it’s worth asking some hard-nosed money questions: How much should we be spending on R&D? How much value do we get out of such investments, anyway?

To answer that, economists have approached this issue in clever new ways in several recent papers. And though they ask slightly different questions, their conclusions share a bottom line: R&D is, in fact, one of the better long-term investments that the government can make. Read the full story.

—David Rotman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Bruno Mars is back, baby!
+ Hmm, interesting: Apple’s new Widow’s Bay show is inspired by both Stephen King and Donald Glover, which is an intriguing combination.
+ Give this man control of the new Lego AI bricks!
+ An iron age war trumpet recently uncovered in Britain is the most complete example discovered anywhere in the world.

The Download: mimicking pregnancy’s first moments in a lab, and AI parameters explained

8 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses into the lining of the uterus then grips tight, burrowing in as the first tendrils of a future placenta appear. This is implantation—the moment that pregnancy officially begins.

Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.

In three recent papers published by Cell Press, scientists report what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let them merge with “organoids” made of endometrial cells, which form the lining of the uterus. Read our story about their work, and what might come next.

—Antonio Regalado

LLMs contain a LOT of parameters. But what’s a parameter?

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way.  

OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.)
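
To make those dials and levers concrete, here is a minimal sketch (ours, not from the article) using PyTorch: the learnable weights and biases of a toy two-layer network are its parameters, counted exactly the way the billions inside GPT-3 or Gemini are.

```python
# A toy illustration: parameters are just the learnable numbers a model
# adjusts during training.
import torch.nn as nn

# A tiny two-layer network: 16 inputs -> 32 hidden units -> 4 outputs.
toy_model = nn.Sequential(
    nn.Linear(16, 32),  # 16*32 weights + 32 biases
    nn.ReLU(),
    nn.Linear(32, 4),   # 32*4 weights + 4 biases
)

# Count every learnable value, just as you would for a 175-billion-parameter LLM.
n_params = sum(p.numel() for p in toy_model.parameters())
print(n_params)  # 676 here; GPT-3 has roughly 175,000,000,000
```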

But the basics of what parameters are and how they make LLMs do the remarkable things that they do are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in.

—Will Douglas Heaven

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles.

On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately.

The cited reason? Concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years.

Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the US’s struggling offshore wind industry.

—Casey Crownhart

This story is from The Spark, our weekly newsletter that explains the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Character.AI have agreed to settle a lawsuit over a teenager’s death
It’s one of five lawsuits linked to young people’s deaths that the companies have settled this week. (NYT $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

2 The Trump administration’s chief output is online trolling
Witness the Maduro memes. (The Atlantic $)

3 OpenAI has created a new ChatGPT Health feature 
It’s dedicated to analyzing medical results and answering health queries. (Axios)
+ AI chatbots fail to give adequate advice for most questions relating to women’s health. (New Scientist $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 Meta’s acquisition of Manus is being probed by China
Holding up the purchase gives Beijing another bargaining chip in its dealings with the US. (CNBC)
+ What happened when we put Manus to the test. (MIT Technology Review)

5 China is building humanoid robot training centers
To address a major shortage of the data needed to make them more competent. (Rest of World)
+ The robot race is fueling a fight for training data. (MIT Technology Review)

6 AI still isn’t close to automating our jobs
The technology just fundamentally isn’t good enough—for now. (WP $)

7 Weight regain seems to happen within two years of quitting weight-loss jabs
That’s the conclusion of a review of more than 40 studies. But dig into the details, and it’s not all bad news. (New Scientist $)

8 This Silicon Valley community is betting on algorithms to find love
Which feels like a bit of a fool’s errand. (NYT $)

9 Hearing aids are about to get really good
You can—of course—thank advances in AI. (IEEE Spectrum)

10 The first 100% AI-generated movie will hit our screens within three years
That’s according to Roku’s founder Anthony Wood. (Variety $)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day

“I’ve seen the video. Don’t believe this propaganda machine.”

—Minnesota’s governor Tim Walz responds on X to Homeland Security’s claim that ICE’s shooting of a woman in Minneapolis was justified.

One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love hearing about musicians’ favorite songs 🎶
+ Here are some top tips for making the most of travelling on your own.
+ Check out just some of the excellent-sounding new books due for publication this year.
+ I could play this spherical version of Snake forever (thanks Rachel!)

The Download: war in Europe, and the company that wants to cool the planet

7 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Europe’s drone-filled vision for the future of war

Last spring, 3,000 British soldiers deployed an invisible automated intelligence network, known as a “digital targeting web,” as part of a NATO exercise called Hedgehog in the damp forests of Estonia’s eastern territories.

The system had been cobbled together over the course of four months—an astonishing pace for weapons development, which is usually measured in years. Its purpose is to connect everything that looks for targets—“sensors,” in military lingo—and everything that fires on them (“shooters”) to a single, shared wireless electronic brain.

Eighty years after total war last transformed the continent, the Hedgehog tests signal a brutal new calculus of European defense. But leaning too much on this new mathematics of warfare could be a risky bet. Read the full story.

—Arthur Holland Michel

This story is from the next print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive it once it lands.

MIT Technology Review Narrated: How one controversial startup hopes to cool the planet

Stardust Solutions believes that it can solve climate change—for a price.

The Israel-based geoengineering startup has said it expects nations will soon pay it more than a billion dollars a year to launch specially equipped aircraft into the stratosphere. Once they’ve reached the necessary altitude, those planes will disperse particles engineered to reflect away enough sunlight to cool down the planet, purportedly without causing environmental side effects. 

But numerous solar geoengineering researchers are skeptical that Stardust will line up the customers it needs to carry out a global deployment in the next decade. They’re also highly critical of the idea of a private company setting the global temperature for us.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon has been accused of listing products without retailers’ consent
Small shop owners claim Amazon’s AI tool sold their goods without their permission. (Bloomberg $)
+ It also listed products the shops didn’t actually have in stock. (CNBC)
+ A new feature called “Shop Direct” appears to be to blame. (Insider $)

2 Data centers are a political issue 
Opposition to them is uniting communities across the political divide. (WP $)
+ Power-grid operators have suggested the centers power down at certain times. (WSJ $)
+ The data center boom in the desert. (MIT Technology Review)

3 Things are looking up for the nuclear power industry
The Trump administration is pumping money into it—but success is not guaranteed. (NYT $)
+ Why the grid relies on nuclear reactors in the winter. (MIT Technology Review)

4 A new form of climate modelling pins blame on specific companies
It may not be long before we see the first test of how attribution science holds up in court. (New Scientist $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

5 Meta has paused the launch of its Ray-Ban smartglasses 🕶
They’re just too darn popular, apparently. (Engadget)
+ Europe and Canada will just have to wait. (Gizmodo)
+ It’s blaming supply shortages and “unprecedented” demand. (Insider $)

6 Sperm contains information about a father’s fitness and diet
New research is shedding light on how we think about heredity. (Quanta Magazine)

7 Meta is selling online gambling ads in countries where it’s illegal
It’s ignoring local laws across Asia and the Middle East. (Rest of World)

8 AI isn’t always trying to steal your job
Sometimes it makes your toy robot a better companion. (The Verge)
+ How cuddly robots could change dementia care. (MIT Technology Review)

9 How to lock down a job at one of tech’s biggest companies
You’re more likely to be accepted into Harvard, apparently. (Fast Company $)

10 Millennials are falling out of love with the internet
Is a better future still possible? (Vox)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“I want to keep up with the latest doom.”

—Author Margaret Atwood explains why she doomscrolls to Wired.

One more thing

Inside the decades-long fight over Yahoo’s misdeeds in China

When you think of Big Tech these days, Yahoo is probably not top of mind. But for Chinese dissident Xu Wanping, the company still looms large—and has for nearly two decades.

In 2005, Xu was arrested for signing online petitions relating to anti-Japanese protests. He didn’t use his real name, but he did use his Yahoo email address. Yahoo China violated its users’ trust—providing information on certain email accounts to Chinese law enforcement, which in turn allowed the government to identify and arrest some users.

Xu was one of them; he would serve nine years in prison. Now, he and five other Chinese former political prisoners are suing Yahoo and a slate of co-defendants—not because of the company’s information-sharing (which was the focus of an earlier lawsuit filed by other plaintiffs), but rather because of what came after. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It’s time to celebrate the life and legacy of Cecilia Giménez Zueco, the legendary Spanish amateur painter whose botched fresco restoration reached viral fame in 2012.
+ If you’re a sci-fi literature fan, there are plenty of new releases to look forward to in 2026.
+ Last week’s wolf supermoon was a sight to behold.
+ This Mississippi restaurant is putting its giant lazy Susan to good use.

The Download: our predictions for AI, and good climate news

6 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for AI in 2026

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—check out our pretty accurate predictions for 2025—and now, we’re doing it all over again.

So what’s coming in 2026? Here are our big bets for the next 12 months.

—Rhiannon Williams, Will Douglas Heaven, Caiwei Chen, James O’Donnell & Michelle Kim

Interested in why it’s so hard to make predictions about AI—and why we’ve done it anyway? Check out the latest edition of The Algorithm, our weekly AI newsletter. Sign up here to make sure you receive future editions straight to your inbox.

This story is also part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Four bright spots in climate news in 2025

Climate news wasn’t great in 2025. Global greenhouse-gas emissions hit record highs (again). It’s set to be either the second- or third-warmest year on record. Climate-fueled disasters like wildfires in California and flooding in Indonesia and Pakistan devastated communities and caused billions in damage.

There’s no doubt that we’re in a severe situation. But for those looking for bright spots, there was some good news in 2025, too. Here are just a few of the positive stories our climate reporters noticed this year. Read the full story.

—Casey Crownhart & James Temple

Nominate someone you know for our global 2026 Innovators Under 35 competition

Last month we started accepting nominations for MIT Technology Review’s 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world’s best young scientists and inventors, and our newsroom has produced it for more than two decades.

We’re looking for people who are making important scientific discoveries and applying that knowledge to build new technologies. Or those who are engineering new systems and algorithms that will aid our work or extend our abilities.

The good news is that we’re still accepting submissions for another two weeks! It’s free to nominate yourself or someone you know, and it only takes a few moments. Here’s how to submit your nomination.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US government is recommending fewer childhood shots
It’ll no longer suggest every child be vaccinated against flu, hepatitis A, rotavirus and meningococcal disease. (WP $)
+ We may end up witnessing a major uptick in rotavirus cases as a result. (The Atlantic $)
+ That cuts the total number of recommended vaccines from 17 to 11. (BBC)
+ The changes were made without public comment or input from vaccine makers. (NPR)

2 Telegram can’t quite cut its ties to Russia
Bonds in the company have been frozen under western sanctions. (FT $)

3 America is in the grip of a flu outbreak
Infections have reached their highest levels since the covid pandemic. (Bloomberg $)
+ All but four states have reported high levels of flu activity. (CNN)
+ A new strain of the virus could be to blame. (AP News)

4 Humanoid factory robots are about to get a lot smarter
Google DeepMind is teaming up with Boston Dynamics to help its Atlas bipedal robots complete tasks more quickly. (Wired $)
+ In theory, the deal could help Atlas interact more naturally with humans, too. (TechCrunch)
+ Why the humanoid workforce is running late. (MIT Technology Review)

5 NASA’s budget for 2026 is better than we expected
It’s a drop of just 1% compared to last year, despite a series of brutal cut proposals. (Ars Technica)

6 Nvidia’s first self-driving cars will hit the road later this year
Watch out Tesla! (NYT $)
+ They’re a pretty smooth drive, apparently. (Ars Technica)
+ The company is also going full steam ahead to produce new chips. (Reuters)

7 Elon Musk’s fans are using Grok to make revenge porn of the mother of one of his sons
Ashley St Clair says her complaints have gone unanswered. (The Guardian)
+ This is what happens when you scrap nearly all rules and safety protocols. (404 Media)
+ Authorities across the world are attempting to crack down on Grok. (Rest of World)

8 A Greenland ice dome has melted once before
And if temperatures remain high, it could do so again. (New Scientist $)
+ Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)

9 A Chinese chatbot went rogue and snapped at a user
Tencent’s AI assistant Yuanbao told them their request was “stupid” and to “get lost.” (Insider $)
+ At least it’s not being overly sycophantic… (MIT Technology Review)

10 Lego’s bricks have been given a smart makeover
They contain tiny computers to bring entire sets to life. (The Verge)
+ The tech will create fun contextual sounds and light effects. (Wired $)

Quote of the day

“The goal of this administration is to basically make vaccines optional. And we’re paying the price.”

—Paul Offit, an infectious diseases physician, criticizes the Trump administration’s decision to slash the number of recommended vaccinations for children, the Guardian reports.

One more thing

I asked an AI to tell me how beautiful I am

Qoves started as a studio that would airbrush images for modeling agencies. Now it is a “facial aesthetics consultancy” that promises answers to the age-old question of what makes a face attractive. Its most compelling feature is the “facial assessment tool”: an AI-driven system that promises to tell you how beautiful you are—or aren’t—spitting out numerical values akin to credit ratings.

If that prospect isn’t concerning enough, most of these algorithms are littered with inaccuracies, ageism, and racism. Read the full story.

—Tate Ryan-Mosley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Baxter the horse and Mr Fuzzy the barn cat have a beautiful relationship.
+ Cool—a new seven-mile underwater sculpture park has opened off the coast of Miami Beach.
+ Where can I buy this incredible Godzilla piggy bank?
+ Congratulations to the world’s oldest professional footballer Kazuyoshi Miura, who’s still going strong at 58 years old.

The Download: Kenya’s Great Carbon Valley, and the AI terms that were everywhere in 2025

5 January 2026 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Welcome to Kenya’s Great Carbon Valley: a bold new gamble to fight climate change

In June last year, startup Octavia Carbon began running a high-stakes test in the small town of Gilgil in south-central Kenya. It’s harnessing some of the excess energy generated by vast clouds of steam under the Earth’s surface to power prototypes of a machine that promises to remove carbon dioxide from the air in a manner that the company says is efficient, affordable, and—crucially—scalable.

The company’s long-term vision is undoubtedly ambitious—it wants to prove that direct air capture (DAC), as the process is known, can be a powerful tool to help the world keep temperatures from rising to ever more dangerous levels. 

But DAC is also a controversial technology, unproven at scale and wildly expensive to operate. On top of that, Kenya’s Maasai people have plenty of reasons to distrust energy companies. Read the full story.

—Diana Kruzman

This article is also part of the Big Story series: MIT Technology Review’s most important, ambitious reporting. The stories in the series take a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. Our writers have taken a look back over the AI terms that dominated the year, for better or worse. Read the full list.

MIT Technology Review’s most popular stories of 2025

2025 was a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more.

As the new year begins, we wanted to give you a chance to revisit some of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Washington’s battle to break up Big Tech is in peril
A string of judges have opted not to force them to spin off key assets. (FT $)
+ Here’s some of the major tech litigation we can expect in the next 12 months. (Reuters)

2 Disinformation about the US invasion of Venezuela is rife on social media
And the biggest platforms don’t appear to be doing much about it. (Wired $)
+ Trump shared a picture of captured president Maduro on Truth Social. (NYT $)

3 Here’s what we know about Big Tech’s ties to the Israeli military
AI is central to its military operations, and giant US firms have stepped up to help. (The Guardian)

4 Alibaba’s AI tool is detecting cancer cases in China
PANDA is adept at spotting pancreatic cancer, which is typically tough to identify. (NYT $)
+ How hospitals became an AI testbed. (WSJ $)
+ A medical portal in New Zealand was hacked into last week. (Reuters)

5 This Discord community supports people recovering from AI-fueled delusions
They say reconnecting with fellow humans is an important step forward. (WP $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

6 Californians can now demand data brokers delete their personal information 
Thanks to a new tool—but there’s a catch. (TechCrunch)
+ This California lawmaker wants to ban AI from kids’ toys. (Fast Company $)

7 Chinese peptides are flooding into Silicon Valley
The unproven drugs promise to heal injuries, improve focus and reduce appetite—and American tech workers are hooked. (NYT $)

8 Alaska’s court system built an AI assistant to navigate probate
But the project has been plagued by delays and setbacks. (NBC News)
+ Inside Amsterdam’s high-stakes experiment to create fair welfare AI. (MIT Technology Review)

9 These ghostly particles could upend how we think about the universe
The standard model of particle physics may have a crack in it. (New Scientist $)
+ Why is the universe so complex and beautiful? (MIT Technology Review)

10 Sick of the same old social media apps?
Give these alternative platforms a go. (Insider $)

Quote of the day

“Just an unbelievable amount of pollution.”

—Sharon Wilson, a former oil and gas worker who tracks methane releases, tells the Guardian what a thermal imaging camera pointed at xAI’s Colossus data center has revealed.

One more thing

How aging clocks can help us understand why we age—and if we can reverse it

Wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging, might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active.

Over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. And what they’ve found is changing our understanding of aging itself. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You heard it here first: 2026 is the year of cabbage (yes, cabbage.)
+ Darts is bigger than ever. So why are we still waiting for the first great darts video game? 🎯
+ This year’s CES is already off with a bang, courtesy of an essential, cutting-edge vibrating knife.
+ At least one good thing came out of that Stranger Things finale—streams of Prince’s excellent back catalog have soared.

What’s next for AI in 2026

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again. 

How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least). 

So what’s coming in 2026? Here are our big bets for the next 12 months. 

More Silicon Valley products will be built on Chinese LLMs

The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts. 

It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.

Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.

As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US have increasingly recognized and embraced what they can offer.

One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.
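
For a rough sense of what “download a model and run it on your own hardware” looks like in practice, here is a minimal sketch (our illustration, not code from any of the firms mentioned) that loads the open-weight Qwen2.5-1.5B-Instruct model locally with the Hugging Face transformers library, assuming the library is installed and the weights download successfully.

```python
# A minimal sketch of running an open-weight model locally (our illustration).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt and generate a reply, entirely on your own machine.
messages = [{"role": "user", "content": "Summarize open-weight models in one line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```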

Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3. 

Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.

Caiwei Chen

The US will face another year of regulatory tug-of-war

The battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to stop states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.

Under Trump’s executive order, states may fear being sued or starved of federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law requiring companies to publish safety testing for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails.

In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year. 

AI companies like OpenAI and Meta will continue to deploy powerful super-PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super-PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.

The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.

Michelle Kim

Chatbots will change the way we shop

Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.

But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. By 2030, agentic commerce could generate between $3 trillion and $5 trillion in sales annually, according to research from the consulting firm McKinsey.

Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions. 

Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet. 

Rhiannon Williams

An LLM will make an important new discovery

I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense. Unless it’s with monkeys-and-typewriters luck, LLMs won’t discover anything by themselves. But LLMs do still have the potential to extend the bounds of human knowledge.

We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
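
In outline, the loop is simple even if the engineering isn’t. Here is a heavily simplified sketch (ours, not DeepMind’s code) of that propose-evaluate-select cycle, with the LLM call stubbed out as random mutation so the example runs on its own:

```python
# A toy sketch of an AlphaEvolve-style loop (our illustration, not DeepMind's code):
# a proposer suggests candidate solutions, an evaluator scores them, and the best
# survivors are fed back in as the starting point for the next round.
import random

def propose_variants(parent, n=8):
    # Stand-in for the LLM: in the real system, Gemini rewrites candidate programs.
    return [[x + random.gauss(0, 0.1) for x in parent] for _ in range(n)]

def score(candidate):
    # Stand-in evaluator: here, closeness to a target vector; in AlphaEvolve,
    # an automated checker measures the quality of each generated algorithm.
    target = [1.0, -2.0, 0.5]
    return -sum((a - b) ** 2 for a, b in zip(candidate, target))

population = [[0.0, 0.0, 0.0]]
for generation in range(50):
    children = [child for parent in population for child in propose_variants(parent)]
    # Keep only the best few candidates and feed them back into the next round.
    population = sorted(population + children, key=score, reverse=True)[:4]

print(population[0], round(score(population[0]), 4))
```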

Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.

And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.

There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.

Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.    

Will Douglas Heaven

Legal fights heat up

For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.

The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? If AI companies lose these cases, will insurers shun them as clients?

In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).

At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.

No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).

James O’Donnell

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

Make sure you take the time to brace yourself for what promises to be another bonkers year.

—Rhiannon Williams

1. Superintelligence

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to AI experts from the company’s competitors to join.

In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt generative AI models’ coding assistants to create the digital object of your desire and accept pretty much everything they spit out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! — Rhiannon Williams

3. Chatbot psychosis

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrate the technology’s potentially deadly consequences. —Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

5. World models 

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).

World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next thing. —Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including in space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular and grow according to needs.

It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out in the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable future. Want to sell something? Call it agentic! —Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models but costs a fraction of the price. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.

The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller one: you run the teacher model on a lot of examples, record its answers, and then train the student model to reproduce those responses as closely as possible, so that it ends up with a compressed version of the teacher’s knowledge. —Caiwei Chen
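For the curious, here is a minimal sketch of that teacher-student recipe in PyTorch. It is not DeepSeek’s actual training code: the tiny stand-in models, the temperature, and the learning rate are illustrative assumptions, and a real distillation run would use full-size language models and batches of text.

```python
# A toy sketch of knowledge distillation, not DeepSeek's actual recipe.
# The teacher and student here are tiny stand-in models; in practice both
# would be large language models and the inputs would be batches of text.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(128, 1000)   # placeholder "big" model (kept frozen)
student = nn.Linear(128, 1000)   # placeholder "small" model (being trained)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
temperature = 2.0                # softens the teacher's output distribution

def distillation_step(batch):
    # 1) Run the teacher on the examples and record its answers.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    # 2) Run the student on the same examples.
    student_logits = student(batch)
    # 3) Nudge the student's output distribution toward the teacher's.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One training step on a random batch of 32 "examples."
print(distillation_step(torch.randn(32, 128)))
```

Softening both distributions with a temperature lets the student learn from the teacher’s full spread of plausible answers rather than just its single top pick.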

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advances in AI can help robots move around the physical world more capably.

It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”

That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites directly from within their platforms. It’s time to adapt or die. —Rhiannon Williams

The Download: China’s dying EV batteries, and why AI doomers are doubling down

19 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

China figured out how to sell EVs. Now it has to bury their batteries.

In the past decade, China has seen an EV boom, thanks in part to government support. Buying an electric car has gone from a novel decision to a routine one; by late 2025, nearly 60% of new cars sold were electric or plug-in hybrids.

But as the batteries in China’s first wave of EVs reach the end of their useful life, early owners are starting to retire their cars, and the country is now under pressure to figure out what to do with those aging components.

The issue is putting strain on China’s still-developing battery recycling industry and has given rise to a gray market that often cuts corners on safety and environmental standards. National regulators and commercial players are also stepping in, but so far these efforts have struggled to keep pace with the flood of batteries coming off the road. Read the full story.

—Caiwei Chen

The AI doomers feel undeterred

It’s a weird time to be an AI doomer. This small but influential community believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity.

The doomer crowd has had some notable successes over the past several years, including helping shape AI policy coming from the Biden administration. But a number of developments over the past six months have put them on the back foot. Talk of an AI bubble has overwhelmed the discourse as tech companies continue to invest in multiple Manhattan Projects’ worth of data centers without any certainty that future demand will match what they’re building.

So where does this leave the doomers? We decided to ask some of the movement’s biggest names to see if the recent setbacks and general vibe shift had altered their views. See what they had to say in our story.

—Garrison Lovely

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package.

Take our quiz on the year in health and biotechnology

In just a couple of weeks, we’ll be bidding farewell to 2025. And what a year it has been! Artificial intelligence is being incorporated into more aspects of our lives, weight-loss drugs have expanded in scope, and there have been some real “omg” biotech stories from the fields of gene therapy, IVF, neurotech, and more.

Jessica Hamzelou, our senior biotech reporter, is inviting you to put your own memory to the test. So how closely have you been paying attention this year?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok has signed a deal to sell its US unit 
Its new owner will be a joint venture controlled by American investors including Oracle. (Axios)
+ But the platform is adamant that its Chinese owner will retain its core US business. (FT $)
+ The deal is slated to close on January 22 next year. (Bloomberg $)
+ It means TikTok will sidestep a US ban—at least for now. (The Guardian)

2 A tip on Reddit helped to end the hunt for the Brown University shooter
The suspect, who has been found dead, is also suspected of killing an MIT professor. (NYT $)
+ The shooter’s motivation is still unclear, police say. (WP $)

3 Tech leaders are among those captured in newly released Epstein photos
Bill Gates and Google’s Sergey Brin are both in the pictures. (FT $)
+ They’ve been pulled from a tranche of more than 95,000. (Wired $)

4 A Starlink satellite appears to have exploded
And it’s now falling back to earth. (The Verge)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

5 YouTube has shut down two major channels that share fake movie trailers
Screen Culture and KH Studio uploaded AI-generated mock trailers with over a billion views. (Deadline)
+ Google is treading a thin line between embracing and shunning generative AI. (Ars Technica)

6 Trump is cracking down on investment in Chinese tech firms
Lawmakers are increasingly worried that US money is bolstering the country’s surveillance state. (WSJ $)
+ Meanwhile, China is working on boosting its chip output. (FT $)

7 ICE has paid an AI agent company to track down targets
It claims to be able to rapidly trace a target’s online network. (404 Media)

8 America wants to return to the Moon by 2028
And to build some nuclear reactors while it’s up there. (Ars Technica)
+ Southeast Asia seeks its place in space. (MIT Technology Review)

9 Actors in the UK are refusing to be scanned for AI
They’re reportedly routinely pressured to consent to creating digital likenesses of themselves. (The Guardian)
+ How Meta and AI companies recruited striking actors to train AI. (MIT Technology Review)

10 Indian tutors are explaining how to use AI over WhatsApp
Lessons are cheap and personalized—but the teachers aren’t always credible. (Rest of World)
+ How Indian health-care workers use WhatsApp to save pregnant women. (MIT Technology Review)

Quote of the day

“Trump wants to hand over even more control of what you watch to his billionaire buddies. Americans deserve to know if the president struck another backdoor deal for this billionaire takeover of TikTok.”

—Democratic senator Elizabeth Warren queries the terms of the deal that TikTok has made to allow it to continue operating in the US in a post on Bluesky.

One more thing

Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

—Rhiannon Williams


Earlier this summer, I visited the AI company Synthesia to create a hyperrealistic AI-generated avatar of me. The company’s avatars are a decent barometer of just how dizzying progress has been in AI over the past few years, so I was curious just how accurately its latest AI model, introduced last month, could replicate me.

I found my avatar as unnerving as it is technically impressive. It’s slick enough to pass as a high-definition recording of a chirpy corporate speech, and if you didn’t know me, you’d probably think that’s exactly what it was.

My avatar shows how it’s becoming ever-harder to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us? Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You can keep your beef tallow—here are the food trends that need to remain firmly in 2025.
+ The Library of Congress has some lovely images of winter that are completely free to use.
+ If you’ve got a last minute Christmas work party tonight, don’t make these Secret Santa mistakes.
+ Did you realize Billie Eilish’s smash hit Birds of a Feather has the same chord progression as Wham’s Last Christmas? They sound surprisingly good mashed together.

The Download: the worst technology of 2025, and Sam Altman’s AI hype

18 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The 8 worst technology flops of 2025

Welcome to our annual list of the worst, least successful, and simply dumbest technologies of the year.

We like to think there’s a lesson in every technological misadventure. But when technology becomes dependent on power, sometimes the takeaway is simpler: it would have been better to stay away.

Regrets—2025 had a few. Here are some of the more notable ones.

—Antonio Regalado

A brief history of Sam Altman’s hype

Each time you’ve heard a borderline outlandish idea of what AI will be capable of, odds are that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it.

For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. Throughout, Altman’s words have set the agenda. What he says about AI is rarely provable when he says it, but it persuades us of one thing: This road we’re on with AI can go somewhere either great or terrifying, and OpenAI will need epic sums to steer it toward the right destination. In this sense, he is the ultimate hype man.

To understand how his voice has shaped our understanding of what AI can do, we read almost everything he’s ever said about the technology. His own words trace how we arrived here. Read the full story.

—James O’Donnell

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package here.

Can AI really help us discover new materials?

One of my favorite stories in the Hype Correction package comes from my colleague David Rotman, who took a hard look at AI for materials research. AI could transform the process of discovering new materials—innovation that could be especially useful in the world of climate tech, which needs new batteries, semiconductors, magnets, and more.

But the field still needs to prove it can make materials that are actually novel and useful. Can AI really supercharge materials research? And what would that look like? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China built a chip-making machine to rival the West’s supremacy 
Suggesting China is far closer to achieving semiconductor independence than we previously believed. (Reuters)
+ China’s chip boom is creating a new class of AI-era billionaires. (Insider $)

2 NASA finally has a new boss
It’s billionaire astronaut Jared Isaacman, a close ally of Elon Musk. (Insider $)
+ But will Isaacman lead the US back to the Moon before China? (BBC)
+ Trump previously pulled his nomination, before reselecting Isaacman last month. (The Verge)

3 The parents of a teenage sextortion victim are suing Meta
Murray Dowey took his own life after being tricked into sending intimate pictures to an overseas criminal gang. (The Guardian)
+ It’s believed that the gang is based in West Africa. (BBC)

4 US and Chinese satellites are jostling in orbit
In fact, these clashes are so common that officials have given the practice a name—”dogfighting.” (WP $)
+ How to fight a war in space (and get away with it) (MIT Technology Review)

5 It’s not just AI that’s trapped in a bubble right now

Labubus, anyone? (Bloomberg $)
+ What even is the AI bubble? (MIT Technology Review)

6 Elon Musk’s Texan school isn’t operating as a school
Instead, it’s a “licensed child care program” with just a handful of enrolled kids. (NYT $)

7 US Border Patrol is building a network of small drones
In a bid to expand its covert surveillance powers. (Wired $)
+ This giant microwave may change the future of war. (MIT Technology Review)

8 This spoon makes low-salt foods taste better
By driving the food’s sodium ions straight to the diner’s tongue. (IEEE Spectrum)

9 AI cannot be trusted to run an office vending machine
Though the lucky Wall Street Journal staffer who walked away with a free PlayStation may beg to differ. (WSJ $)

10 Physicists have 3D-printed a Christmas tree from ice 🎄
No refrigeration kit required. (Ars Technica)

Quote of the day

“It will be mentioned less and less in the same way that Microsoft Office isn’t mentioned in job postings anymore.”

—Marc Cenedella, founder and CEO of careers platform Ladders, tells Insider why employers will increasingly expect new hires to be fully au fait with AI.

One more thing

Is this the electric grid of the future?

Lincoln Electric System, a publicly owned utility in Nebraska, is used to weathering severe blizzards. But what will happen soon—not only at Lincoln Electric but for all electric utilities—is a challenge of a different order.

Utilities must keep the lights on in the face of more extreme and more frequent storms and fires, growing risks of cyberattacks and physical disruptions, and a wildly uncertain policy and regulatory landscape. They must keep prices low amid inflationary costs. And they must adapt to an epochal change in how the grid works, as the industry attempts to transition from power generated with fossil fuels to power generated from renewable sources like solar and wind.

The electric grid is bracing for a near future characterized by disruption. And, in many ways, Lincoln Electric is an ideal lens through which to examine what’s coming. Read the full story.

—Andrew Blum

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ A fragrance company is trying to recapture the scent of extinct flowers, wow. 
+ Seattle’s Sauna Festival sounds right up my street.
+ Switzerland has built what’s essentially a theme park dedicated to Saint Bernards.
+ I fear I’ll never get over this tale of director supremo James Cameron giving a drowning rat CPR to save its life 🐀

The Download: why 2025 has been the year of AI hype correction, and fighting GPS jamming

16 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The great AI hype correction of 2025

Some disillusionment was inevitable. When OpenAI released a free web app called ChatGPT in late 2022, it changed the course of an entire industry—and several world economies. Millions of people started talking to their computers, and their computers started talking back. We were enchanted, and we expected more.

Well, 2025 has been a year of reckoning. For a start, the heads of the top AI companies made promises they couldn’t keep. At the same time, updates to the core technology are no longer the step changes they once were.

To be clear, the last few years have been filled with genuine “Wow” moments. But this remarkable technology is only a few years old, and in many ways it is still experimental. Its successes come with big caveats. Read the full story to learn more about why we may need to readjust our expectations.

—Will Douglas Heaven

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package here, and you can read more about why it’s time to reset our expectations for AI in the latest edition of the Algorithm, our weekly AI newsletter. Sign up here to make sure you receive future editions straight to your inbox.

Quantum navigation could solve the military’s GPS jamming problem

Since the 2022 invasion of Ukraine, thousands of flights have been affected by a far-reaching Russian campaign of radio transmissions that jam GPS signals.

The growing inconvenience to air traffic and risk of a real disaster have highlighted the vulnerability of GPS and focused attention on more secure ways for planes to navigate the gauntlet of jamming and spoofing, the term for tricking a GPS receiver into thinking it’s somewhere else.

One approach that’s emerging from labs is quantum navigation: exploiting the quantum nature of light and atoms to build ultra-sensitive sensors that can allow vehicles to navigate independently, without depending on satellites. Read the full story.

—Amos Zeeberg

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has launched its US Tech Force program
In a bid to lure engineers away from Big Tech roles and straight into modernizing the government. (The Verge)
+ So, essentially replacing the IT workers that DOGE got rid of, then. (The Register)

2 Lawmakers are investigating how AI data centers affect electricity costs
They want to get to the bottom of whether those costs are being passed on to consumers. (NYT $)
+ Calculating AI’s water usage is far from straightforward, too. (Wired $)
+ AI is changing the grid. Could it help more than it harms? (MIT Technology Review)

3 Ford isn’t making a large all-electric truck after all
After the US government’s support for EVs plummeted. (Wired $)
+ Instead, the F-150 Lightning pickup will be reborn as a plug-in hybrid. (The Information $)
+ Why Americans may be finally ready to embrace smaller cars. (Fast Company $)
+ The US could really use an affordable electric truck. (MIT Technology Review)

4 PayPal wants to become a bank in the US
The Trump administration is very friendly to non-traditional financial companies, after all. (FT $)
+ It’s been a good year for the crypto industry when it comes to banking. (Economist $)

5 A tech trade deal between the US and UK has been put on ice

America isn’t happy with the lack of progress Britain has made, apparently. (NYT $)
+ It’s a major setback in relations between the pair. (The Guardian)

6 Why does no one want to make the cure for dengue?
A new antiviral pill appears to prevent infection—but its development has been abandoned. (Vox)

7 The majority of the world’s glaciers are forecast to disappear by 2100
At a rate of around 3,000 per year. (New Scientist $)
+ Inside a new quest to save the “doomsday glacier”. (MIT Technology Review)

8 Hollywood is split over AI
While some filmmakers love it, actors are horrified by its inexorable rise. (Bloomberg $)

9 Corporate America is obsessed with hiring storytellers
It’s essentially a rehashed media relations manager role overhauled for the AI age. (WSJ $)

10 The concept of hacking existed before the internet
Just ask this bunch of teenage geeks. (IEEE Spectrum)

Quote of the day

“So the federal government deleted 18F, which was doing great work modernizing the government, and then replaced it with a clone? What is the point of all this?”

—Eugene Vinitsky, an assistant professor at New York University, takes aim at the US government’s decision to launch a new team to overhaul its approach to technology in a post on Bluesky.

One more thing

How DeepSeek became a fortune teller for China’s youth

As DeepSeek has emerged as a homegrown challenger to OpenAI, young people across the country have started using AI to revive fortune-telling practices that have deep roots in Chinese culture.

Across Chinese social media, users are sharing AI-generated readings, experimenting with fortune-telling prompt engineering, and revisiting ancient spiritual texts—all with the help of DeepSeek.

The surge in AI fortune-telling comes during a time of pervasive anxiety and pessimism in Chinese society. And as spiritual practices remain hidden underground thanks to the country’s regime, computers and phone screens are helping younger people to gain a sense of control over their lives. Read the full story.

—Caiwei Chen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Chess has been online as far back as the 1800s (no, really!) ♟
+ Jane Austen was born 250 years ago today. How well do you know her writing? ($)
+ Rob Reiner, your work will live on forever.
+ I enjoyed this comprehensive guide to absolutely everything you could ever want to know about New England’s extensive seafood offerings.

The Download: introducing the AI Hype Correction package

15 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the AI Hype Correction package

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.

AI is changing our world, but we don’t yet know the real winners, or how this will all shake out.

After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact.

Here, at the end of 2025, we’re starting the post-hype phase. This new package of stories, called Hype Correction, is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next.

Here’s a sneak peek at what you can expect:

+ An introduction to four ways of thinking about the great AI hype correction of 2025.

+  While it’s safe to say we’re definitely in an AI bubble right now, what’s less clear is what it really looks like—and what comes after it pops. Read the full story.

+ Why so many of the more outlandish proclamations about AI doing the rounds these days can be traced back to OpenAI’s Sam Altman. Read the full story.

+ It’s a weird time to be an AI doomer. But they’re not giving up.

+ AI coding is now everywhere—but despite the billions of dollars being poured into improving AI models’ coding abilities, not everyone is convinced. Read the full story.

+ If we really want to start finding new kinds of materials faster, AI materials discovery needs to make it out of the lab and move into the real world. Read the full story.

+ Why reports of AI’s potential to replace trained human lawyers are greatly exaggerated.

+ Dr. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, explains why the generative AI hype train is distracting us from what AI actually is and what it can—and crucially, cannot—do. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 iRobot has filed for bankruptcy
The Roomba maker is considering handing over control to its main Chinese supplier. (Bloomberg $)
+ A proposed Amazon acquisition fell through close to two years ago. (FT $)
+ How the company lost its way. (TechCrunch)
+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)

2 Meta’s 2025 has been a total rollercoaster ride
From its controversial AI team to Mark Zuckerberg’s newfound appreciation for masculine energy. (Insider $)

3 The Trump administration is giving the crypto industry a much easier ride
It’s dismissed crypto lawsuits involving many firms with financial ties to Trump. (NYT $)
+ Celebrities are feeling emboldened to flog crypto once again. (The Guardian)
+ A bitcoin investor wants to set up a crypto libertarian community in the Caribbean. (FT $)

4 There’s a new weight-loss drug in town
And people are already taking it, even though it’s unapproved. (Wired $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

5 Chinese billionaires are having dozens of US-born surrogate babies
An entire industry has sprung up to support them. (WSJ $)
+ A controversial Chinese CRISPR scientist is still hopeful about embryo gene editing. (MIT Technology Review)

6 Trump’s “big beautiful bill” funding hinges on states integrating AI into healthcare
Experts fear it’ll be used as a cost-cutting measure, even if it doesn’t work. (The Guardian)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

7 Extreme rainfall is wreaking havoc in the desert
Oman and the UAE are unaccustomed to increasingly common torrential downpours. (WP $)

8 Data centers are being built in countries that are too hot for them
Which makes it a lot harder to cool them sufficiently. (Rest of World)

9 Why AI image generators are getting deliberately worse
Their makers are pursuing realism—not that overly polished, Uncanny Valley look. (The Verge)
+ Inside the AI attention economy wars. (NY Mag $)

10 How a tiny Swedish city became a major video game hub
Skövde has formed an unlikely community of cutting-edge developers. (The Guardian)
+ Google DeepMind is using Gemini to train agents inside one of Skövde’s biggest franchises. (MIT Technology Review)

Quote of the day

“They don’t care about the games. They don’t care about the art. They just want their money.”

—Anna C Webster, chair of the freelancing committee of the United Videogame Workers union, tells the Guardian why their members are protesting the prestigious 2025 Game Awards in the wake of major layoffs.

One more thing

Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences.

There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+  Here’s how a bit of math can help you wrap your presents much more neatly this year.
+ It seems that humans mastered making fire way, way earlier than we realized.
+ The Arab-owned cafes opening up across the US sound warm and welcoming.
+ How to give a gift the recipient will still be using and loving for decades to come.

The Download: expanded carrier screening, and how Southeast Asia plans to get to space

12 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Expanded carrier screening: Is it worth it?

Carrier screening tests would-be parents for hidden genetic mutations that might affect their children. It initially involved testing for specific genes in at-risk populations.

Expanded carrier screening takes things further, offering the option to test for a wide array of diseases in prospective parents and egg and sperm donors.

The companies offering these screens “started out with 100 genes, and now some of them go up to 2,000,” Sara Levene, genetics counsellor at Guided Genetics, said at a meeting I attended this week. “It’s becoming a bit of an arms race amongst labs, to be honest.”

But expanded carrier screening comes with downsides. And it isn’t for everyone. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Southeast Asia seeks its place in space

It’s a scorching October day in Bangkok and I’m wandering through the exhibits at the Thai Space Expo, held in one of the city’s busiest shopping malls, when I do a double take. Amid the flashy space suits and model rockets on display, there’s a plain-looking package of Thai basil chicken. I’m told the same kind of vacuum-sealed package has just been launched to the International Space Station.

It’s an unexpected sight, one that reflects the growing excitement within the Southeast Asian space sector. And while there is some uncertainty about how exactly the region’s space sector may evolve, there is plenty of optimism, too. Read the full story.

—Jonathan O’Callaghan

This story is from the next print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Disney just signed a major deal with OpenAI
Meaning you’ll soon be able to create Sora clips starring 200 Marvel, Pixar, and Star Wars characters. (Hollywood Reporter $)
+ Disney used to be openly skeptical of AI. What changed? (WSJ $)
+ It’s not feeling quite so friendly towards Google, however. (Ars Technica)
+ Expect a load of AI slop making its way to Disney Plus. (The Verge)

2 Donald Trump has blocked US states from enforcing their own AI rules
But technically, only Congress has the power to override state laws. (NYT $)
+ A new task force will seek out states with “inconsistent” AI rules. (Engadget)
+ The move is particularly bad news for California. (The Markup)

3 Reddit is challenging Australia’s social media ban for teens
It’s arguing that the ban infringes on their freedom of political communication. (Bloomberg $)
+ We’re learning more about the mysterious machinations of the teenage brain. (Vox)

4 ChatGPT’s “adult mode” is due to launch early next year

But OpenAI admits it needs to improve its age estimation tech first. (The Verge)
+ It’s pretty easy to get DeepSeek to talk dirty. (MIT Technology Review)

5 The death of Running Tide’s carbon removal dream
The company’s demise is a wake-up call to others dabbling in experimental tech. (Wired $)
+ We first wrote about Running Tide’s issues back in 2022. (MIT Technology Review)
+ What’s next for carbon removal? (MIT Technology Review)

6 That dirty-talking AI teddy bear wasn’t a one-off

It turns out that a wide range of LLM-powered toys aren’t suitable for children. (NBC News)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

7 These are the cheapest places to create a fake online account
For a few cents, scammers can easily set up bots. (FT $)

8 How professors are attempting to AI-proof exams
ChatGPT won’t help you cut corners to ace an oral examination. (WP $)

9 Can a font be woke?
Marco Rubio seems to think so. (The Atlantic $)

10 Next year is all about maximalist circus decor 🎪
That’s according to Pinterest’s trend predictions for 2026. (The Guardian)

Quote of the day

 “Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.” 

—Senator Ed Markey criticizes Donald Trump’s decision to sign an order cracking down on US states’ ability to self-regulate AI, the Wall Street Journal reports.

One more thing

Taiwan’s “silicon shield” could be weakening

Taiwanese politics increasingly revolves around one crucial question: Will China invade? China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled).

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications.

But now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Read the full story.

—Johanna M. Costigan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Reasons to be cheerful: people are actually nicer than we think they are.
+ This year’s Krampus Run in Whitby—the Yorkshire town that inspired Bram Stoker’s Dracula—looks delightfully spooky.
+ How to find the magic in that most mundane of locations: the airport.
+ The happiest of birthdays to Dionne Warwick, who turns 85 today.

The Download: solar geoengineering’s future, and OpenAI is being sued

11 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Solar geoengineering startups are getting serious

Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.

A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.

So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. So what does it mean for geoengineering, and for the climate? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

If you’re interested in reading more about solar geoengineering, check out:

+ Why the for-profit race into solar geoengineering is bad for science and public trust. Read the full story.

+ Why we need more research—including outdoor experiments—to make better-informed decisions about such climate interventions.

+ The hard lessons of Harvard’s failed geoengineering experiment, which was officially terminated last year. Read the full story.

+ How this London nonprofit became one of the biggest backers of geoengineering research.

+ The technology could alter the entire planet. These groups want every nation to have a say.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is being sued for wrongful death
By the estate of a woman killed by her son after he engaged in delusion-filled conversations with ChatGPT. (WSJ $)
+ The chatbot appeared to validate Stein-Erik Soelberg’s conspiratorial ideas. (WP $)
+ It’s the latest in a string of wrongful death legal actions filed against chatbot makers. (ABC News)

2 ICE is tracking pregnant immigrants through specially developed smartwatches
They’re unable to take the devices off, even during labor. (The Guardian)
+ Pregnant and postpartum women say they’ve been detained in solitary confinement. (Slate $)
+ Another effort to track ICE raids has been taken offline. (MIT Technology Review)

3 Meta’s new AI hires aren’t making friends with the rest of the company
Tensions are rife between the AGI team and other divisions. (NYT $)
+ Mark Zuckerberg is keen to make money off the company’s AI ambitions. (Bloomberg $)
+ Meanwhile, what’s life like for the remaining Scale AI team? (Insider $)

4 Google DeepMind is building its first materials science lab in the UK
It’ll focus on developing new materials to build superconductors and solar cells. (FT $) 

5 The new space race is to build orbital data centers
And Blue Origin is winning, apparently. (WSJ $)
+ Plenty of companies are jostling for their slice of the pie. (The Verge)
+ Should we be moving data centers to space? (MIT Technology Review)

6 Inside the quest to find out what causes Parkinson’s
A growing body of work suggests it may not be purely genetic after all. (Wired $)

7 Are you in TikTok’s cat niche? 
If so, you’re likely to be in these other niches too. (WP $)

8 Why do our brains get tired? 🧠💤
Researchers are trying to get to the bottom of it.  (Nature $)

9 Microsoft’s boss has built his own cricket app 🏏
Satya Nadella can’t get enough of the sound of leather on willow. (Bloomberg $)

10 How much vibe coding is too much vibe coding? 
One journalist’s journey into the heart of darkness. (Rest of World)
+ What is vibe coding, exactly? (MIT Technology Review)

Quote of the day

“I feel so much pain seeing his sad face…I hope for a New Year’s miracle.”

—A child in Russia sends a message to the Kremlin-aligned Safe Internet League explaining the impact of the country’s decision to block access to the wildly popular gaming platform Roblox on their brother, the Washington Post reports.

 One more thing

Why it’s so hard to stop tech-facilitated abuse

After Gioia had her first child with her then husband, he installed baby monitors throughout their home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior. 

One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.

Gioia is far from alone. In fact, tech-facilitated abuse now occurs in most cases of intimate partner violence—and we’re doing shockingly little to prevent it. Read the full story

—Jessica Klein

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The New Yorker has picked its best TV shows of 2025. Let the debate commence!
+ Check out the winners of this year’s Drone Photo Awards.
+ I’m sorry to report you aren’t half as intuitive as you think you are when it comes to deciphering your dog’s emotions.
+ Germany’s “home of Christmas” sure looks magical.

The Download: LLM confessions, and tapping into geothermal hot spots

4 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI has trained its LLM to confess to bad behavior

What’s new: OpenAI is testing a new way to expose the complicated processes at work inside large language models. Researchers at the company can make an LLM produce what they call a confession, in which the model explains how it carried out a task and (most of the time) owns up to any bad behavior.

Why it matters: Figuring out why large language models do what they do—and in particular why they sometimes appear to lie, cheat, and deceive—is one of the hottest topics in AI right now. If this multitrillion-dollar technology is to be deployed as widely as its makers hope it will be, it must be made more trustworthy. OpenAI sees confessions as one step toward that goal. Read the full story.

—Will Douglas Heaven

How AI is uncovering hidden geothermal energy resources

Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on Earth’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.

A startup called Zanskar announced today that it has used AI and other advanced computational methods to uncover a blind geothermal system—meaning there aren’t signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system that’s been identified and confirmed to be a commercial prospect in over 30 years. Read the full story.

—Casey Crownhart

Why the grid relies on nuclear reactors in the winter

In the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.

This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has scrapped Biden’s fuel efficiency requirements
It’s a major blow for green automobile initiatives. (NYT $)
+ Trump maintains that getting rid of the rules will drive down the price of cars. (Politico)

2 RFK Jr’s vaccine advisers may delay hepatitis B vaccines for babies
The shots play a key part in combating acute cases of the infection. (The Guardian)
+ Former FDA commissioners are worried by its current chief’s vaccine views. (Ars Technica)
+ Meanwhile, a fentanyl vaccine is being trialed in the Netherlands. (Wired $)

3 Amazon is exploring building its own US delivery network
Which could mean axing its long-standing partnership with the US Postal Service. (WP $)

4 Republicans are defying Trump’s orders to block states from passing AI laws

They’re pushing back against plans to sneak the rule into an annual defense bill. (The Hill)
+ Trump has been pressuring them to fall in line for months. (Ars Technica)
+ Congress killed an attempt to stop states regulating AI back in July. (CNN)

5 Wikipedia is exploring AI licensing deals
It’s a bid to monetize AI firms’ heavy reliance on its web pages. (Reuters)
+ How AI and Wikipedia have sent vulnerable languages into a doom spiral. (MIT Technology Review)

6 OpenAI is looking to the stars—and beyond
Sam Altman is reportedly interested in acquiring or partnering with a rocket company. (WSJ $)

7 What we can learn from wildfires

This year’s Dragon Bravo fire defied predictive modelling. But why? (New Yorker $)
+ How AI can help spot wildfires. (MIT Technology Review)

8 What’s behind America’s falling birth rates?
It’s remarkably hard to say. (Undark)

9 Researchers are studying whether brain rot is actually real 🧠
Including whether its effects could be permanent. (NBC News)

10 YouTuber Mr Beast is planning to launch a mobile phone service
Beast Mobile, anyone? (Insider $)
+ The New York Stock Exchange could be next in his sights. (TechCrunch)

Quote of the day

“I think there are some players who are YOLO-ing.”

—Anthropic CEO Dario Amodei suggests some rival AI companies are veering into risky spending territory, Bloomberg reports.

One more thing

The quest to show that biological sex matters in the immune system

For years, microbiologist Sabra Klein has painstakingly made the case that sex—defined by biological attributes such as our sex chromosomes, sex hormones, and reproductive tissues—can influence immune responses.

Klein and others have shown how and why male and female immune systems respond differently to the flu virus, HIV, and certain cancer therapies, and why most women receive greater protection from vaccines but are also more likely to get severe asthma and autoimmune disorders.

Klein has helped spearhead a shift in immunology, a field that long thought sex differences didn’t matter—and she’s set her sights on pushing the field of sex differences even further. Read the full story.

—Sandeep Ravindran

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Digital artist Beeple’s latest Art Basel show features robotic dogs of Elon Musk, Jeff Bezos, and Mark Zuckerberg pooping out NFTs 💩
+ If you’ve always dreamed of seeing the Northern Lights, here’s your best bet at doing so.
+ Check out this fun timeline of fashion’s hottest venues.
+ Why monkeys in ancient Roman times had pet piglets 🐖🐒

The Download: AI and coding, and Waymo’s aggressive driverless cars

3 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Everything you need to know about AI and coding

AI has already transformed how code is written, but a new wave of autonomous systems promises to make the process even smoother and less prone to mistakes.

Amazon Web Services has just revealed three new “frontier” AI agents, its term for a more sophisticated class of autonomous agents capable of working for days at a time without human intervention. One of them, called Kiro, is designed to work independently without the need for a human to constantly point it in the right direction. Another, AWS Security Agent, scans a project for common vulnerabilities: an interesting development given that many AI-enabled coding assistants can end up introducing errors.

To learn more about the exciting direction AI-enhanced coding is heading in, check out our team’s reporting: 

+ A string of startups are racing to build models that can produce better and better software. Read the full story.

+ We’re starting to give AI agents real autonomy. Are we ready for what could happen next?

+ What is vibe coding, exactly?

+ Anthropic’s cofounder and chief scientist Jared Kaplan on 4 ways agents will improve. Read the full story.

+ How AI assistants are already changing the way code gets made. Read the full story

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon’s new agents can reportedly code for days at a time 
They remember previous sessions and continuously learn from a company’s codebase. (VentureBeat)
+ AWS says it’s aware of the pitfalls of handing over control to AI. (The Register)
+ The company faces the challenge of building enough infrastructure to support its AI services. (WSJ $)

2 Waymo’s driverless cars are getting surprisingly aggressive
The company’s goal to make the vehicles “confidently assertive” is prompting them to bend the rules. (WSJ $)
+ That said, their cars still have a far lower crash rate than human drivers. (NYT $)

3 The FDA’s top drug regulator has stepped down
After only three weeks in the role. (Ars Technica)
+ A leaked vaccine memo from the agency doesn’t inspire confidence. (Bloomberg $)

4 Maybe DOGE isn’t entirely dead after all

Many of its former workers are embedded in various federal agencies. (Wired $)

5 A Chinese startup’s reusable rocket crash-landed after launch

It suffered what it called an “abnormal burn,” scuppering hopes of a soft landing. (Bloomberg $)

6  Startups are building digital clones of major sites to train AI agents

From Amazon to Gmail, they’re creating virtual agent playgrounds. (NYT $)

7 Half of US states now require visitors to porn sites to upload their ID
Missouri has become the 25th state to enact age verification laws. (404 Media)

8 AGI truthers are trying to influence the Pope
They’re desperate for him to take their concerns seriously. (The Verge)
+ How AGI became the most consequential conspiracy theory of our time. (MIT Technology Review)

9 Marketers are leaning into ragebait ads
But does making customers annoyed really translate into sales? (WP $)

10 The surprising role plant pores could play in fighting drought
At night as well as during the day. (Knowable Magazine)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

Quote of the day

“Everyone is begging for supply.”

—An anonymous source tells Reuters about the desperate measures Chinese AI companies take to secure scarce chips.

One more thing

The case against humans in space

Elon Musk and Jeff Bezos are bitter rivals in the commercial space race, but they agree on one thing: Settling space is an existential imperative. Space is the place. The final frontier. It is our human destiny to transcend our home world and expand our civilization to extraterrestrial vistas.

This belief has been mainstream for decades, but its rise has been positively meteoric in this new gilded age of astropreneurs.

But as visions of giant orbital stations and Martian cities dance in our heads, a case against human space colonization has found its footing in a number of recent books, from doubts about the practical feasibility of off-Earth communities, to realism about the harsh environment of space and the enormous tax it would exact on the human body. Read the full story.

—Becky Ferreira

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This compilation of 21st century floor fillers is guaranteed to make you feel old.
+ A fire-loving amoeba has been found chilling out in volcanic hot springs.
+ This old-school Terminator 2 game is pixel perfection.
+ How truthful an adaptation is your favorite based-on-a-true-story movie? Let’s take a look at the data.

The Download: AI’s impact on the economy, and DeepSeek strikes again

2 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: Welcome to the economic singularity

—David Rotman and Richard Waters

Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.

At one extreme, AI coding assistants have revolutionized the work of software developers. At the other extreme, most companies are seeing little if any benefit from their initial investments. 

That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business. To students of tech history, though, the lack of immediate impact is normal. Read the full story.

If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside our editor in chief, Mat Honan, for an exclusive conversation digging into what’s happening across different markets live on Tuesday, December 9 at 1pm ET.  Register here

The State of AI is our subscriber-only collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Sign up to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DeepSeek has unveiled two new experimental AI models 
DeepSeek-V3.2 is designed to match OpenAI’s GPT-5’s reasoning capabilities. (Bloomberg $)
+ Here’s how DeepSeek slashes its models’ computational burden. (VentureBeat)
+ It’s achieved these results despite its limited access to powerful chips. (SCMP $)

2 OpenAI has issued a “code red” warning to its employees
It’s a call to arms to improve ChatGPT, or risk being overtaken. (The Information $)
+ Both Google and Anthropic are snapping at OpenAI’s heels. (FT $)
+ Advertising and other initiatives will be pushed back to accommodate the new focus. (WSJ $)

3 How to know when the AI bubble has burst
These are the signs to look out for. (Economist $)
+ Things could get a whole lot worse for the economy if and when it pops. (Axios)
+ We don’t really know how the AI investment surge is being financed. (The Guardian)

4 Some US states are making it illegal for AI to discriminate against you
California is the latest to give workers more power to fight algorithms. (WP $)

5 This AI startup is working on a post-transformer future
Transformer architecture underpins the current AI boom—but Pathway is developing something new. (WSJ $)
+ What the next frontier of AI could look like. (IEEE Spectrum)

6 India is demanding smartphone makers install a government app
Which privacy advocates say is unacceptable snooping. (FT $)
+ India’s tech talent is looking for opportunities outside the US. (Rest of World)

7 College students are desperate to sign up for AI majors
AI is now the second-largest major at MIT behind computer science. (NYT $)
+ AI’s giants want to take over the classroom. (MIT Technology Review)

8 America’s musical heritage is at serious risk
Much of it is stored on studio tapes, which are deteriorating over time. (NYT $)
+ The race to save our online lives from a digital dark age. (MIT Technology Review)

9 Celebrities are increasingly turning on AI
That doesn’t stop fans from casting them in slop videos anyway. (The Verge)

10 Samsung has revealed its first tri-folding phone
But will people actually want to buy it? (Bloomberg $)
+ It’ll cost more than $2,000 when it goes on sale in South Korea. (Reuters)

Quote of the day

“The Chinese will not pause. They will take over.”

—Michael Lohscheller, chief executive of Swedish electric car maker Polestar, tells the Guardian why Europe should stick to its plan to ban the production of new petrol and diesel cars by 2035. 

One more thing

Inside Amsterdam’s high-stakes experiment to create fair welfare AI

Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still not fair and effective. Why?

Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.

—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Hear me out: a truly great festive film doesn’t need to be about Christmas at all.
+ Maybe we should judge a book by its cover after all.
+ Happy birthday to Ms Britney Spears, still the princess of pop at 44!
+ The fascinating psychology behind why we love travelling so much.

The Download: spotting crimes in prisoners’ phone calls, and nominate an Innovator Under 35

1 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An AI model trained on prison phone calls now looks for planned crimes in those calls

A US telecom company trained an AI model on years of inmates’ phone and video calls and is now piloting that model to scan their calls, texts, and emails in the hope of predicting and preventing crimes.

Securus Technologies president Kevin Elder told MIT Technology Review that the company began building its AI tools in 2023, using its massive database of recorded calls to train AI models to detect criminal activity. It created one model, for example, using seven years of calls made by inmates in the Texas prison system, and it has been working on models for other states and counties.

However, prisoner rights advocates say that the new AI tools enable invasive surveillance, and courts have specified few limits on this power. Read the full story.

—James O’Donnell

Nominations are now open for our global 2026 Innovators Under 35 competition

We have some exciting news: Nominations are now open for MIT Technology Review’s 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world’s best young scientists and inventors, and our newsroom has produced it for more than two decades. 

It’s free to nominate yourself or someone you know, and it only takes a few moments. Here’s how to submit your nomination.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 New York is cracking down on personalized pricing algorithms
A new law forces retailers to declare if their pricing is informed by users’ data. (NYT $)
+ The US National Retail Federation tried to block it from passing. (TechCrunch)

2 The White House has launched a media bias tracker
Complete with a “media offender of the week” section and a Hall of Shame. (WP $)
+ The Washington Post is currently listed as the site’s top offender. (The Guardian)
+ Donald Trump has lashed out at several reporters in the past few weeks. (The Hill)

3 American startups are hooked on open-source Chinese AI models
They’re cheap and customizable—what’s not to like? (NBC News)
+ Americans also love China’s cheap goods, regardless of tariffs. (WP $)
+ The State of AI: Is China about to win the race? (MIT Technology Review)

4 How police body cam footage became viral YouTube content
Recent arrestees live in fear of ending up on popular channels. (Vox)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

5 Construction workers are cashing in on the data center boom
Might as well enjoy it while it lasts. (WSJ $)
+ The data center boom in the desert. (MIT Technology Review)

6 China isn’t convinced by crypto
Even though bitcoin mining is quietly making a (banned) comeback. (Reuters)
+ The country’s central bank is no fan of stablecoins. (CoinDesk)

7 A startup is treating its AI companions like characters in a novel
Could that approach make for better AI companions? (Fast Company $)
+ Gemini is the most empathetic model, apparently. (Semafor)
+ The looming crackdown on AI companionship. (MIT Technology Review)

8 Ozempic is so yesterday 💉
New weight-loss drugs are tailored to individual patients. (The Atlantic $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

9 AI is upending how consultants work
For the third year in a row, big firms are freezing junior workers’ salaries. (FT $)

10 Behind the scenes of Disney’s AI animation accelerator
What took five months to create has been whittled down to under five weeks. (CNET)
+ Director supremo James Cameron appears to have changed his mind about AI. (TechCrunch)
+ Why are people scrolling through weirdly-formatted TV clips? (WP $)

Quote of the day

“[I hope AI] comes to a point where it becomes sort of mental junk food and we feel sick and we don’t know why.”

—Actor Jenna Ortega outlines her hopes for AI’s future role in filmmaking, Variety reports.

One more thing

The weeds are winning

Since the 1980s, more and more plants have evolved to become immune to the biochemical mechanisms that herbicides use to kill them. This herbicide resistance threatens to decrease yields—out-of-control weeds can reduce them by 50% or more, and extreme cases can wipe out whole fields.

At worst, it can even drive farmers out of business. It’s the agricultural equivalent of antibiotic resistance, and it keeps getting worse. Weeds have evolved resistance to 168 different herbicides and 21 of the 31 known “modes of action,” that is, the specific biochemical targets or pathways that herbicides are designed to disrupt.

Agriculture needs to embrace a diversity of weed control practices. But that’s much easier said than done. Read the full story.

—Douglas Main

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Now we’re finally in December, don’t let Iceland’s gigantic child-eating Yule Cat give you nightmares 😺
+ These breathtaking sculpture parks are serious must-sees ($)
+ 1985 sure was a vintage year for films.
+ Is nothing sacred?! Now Ozempic has come for our Christmas trees!

The Download: the mysteries surrounding weight-loss drugs, and the economic effects of AI

28 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What we still don’t know about weight-loss drugs

Weight-loss drugs have been back in the news this week. First, we heard that Eli Lilly, the company behind Mounjaro and Zepbound, became the first healthcare company in the world to achieve a trillion-dollar valuation.

But we also learned that, disappointingly, GLP-1 drugs don’t seem to help people with Alzheimer’s disease. And that people who stop taking the drugs when they become pregnant can experience potentially dangerous levels of weight gain. On top of that, some researchers worry that people are using the drugs postpartum to lose pregnancy weight without understanding potential risks.

All of this news should serve as a reminder that there’s a lot we still don’t know about these drugs. So let’s look at the enduring questions surrounding GLP-1 agonist drugs.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

If you’re interested in weight-loss drugs and how they affect us, take a look at:

+ GLP-1 agonists like Wegovy, Ozempic, and Mounjaro might benefit heart and brain health—but research suggests they might also cause pregnancy complications and harm some users. Read the full story.

+ We’ve never understood how hunger works. That might be about to change. Read the full story.

+ Weight-loss injections have taken over the internet. But what does this mean for people IRL?

+ This vibrating weight-loss pill seems to work—in pigs. Read the full story.

What we know about how AI is affecting the economy

There’s a lot at stake when it comes to understanding how AI is changing the economy right now. Should we be pessimistic? Optimistic? Or is the situation too nuanced for that?

Hopefully, we can point you towards some answers. Mat Honan, our editor in chief, will hold a special subscriber-only Roundtables conversation with our editor at large David Rotman, and Richard Waters, Financial Times columnist, exploring what’s happening across different markets. Register here to join us at 1pm ET on Tuesday December 9.

The event is part of the Financial Times and MIT Technology Review “The State of AI” partnership, exploring the global impact of artificial intelligence. Over the past month, we’ve been running discussions between our journalists—sign up here to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Tech billionaires are gearing up to fight AI regulation 
By amassing multi-million dollar war chests ahead of the 2026 US midterm elections. (WSJ $)
+ Donald Trump’s “Manhattan Project” for AI is certainly ambitious. (The Information $)

2 The EU wants to hold social media platforms liable for financial scams
New rules will force tech firms to compensate banks if they fail to remove reported scams. (Politico)

3 China is worried about a humanoid robot bubble
Because more than 150 companies there are building very similar machines. (Bloomberg $)
+ It could learn some lessons from the current AI bubble. (CNN)
+ Why the humanoid workforce is running late. (MIT Technology Review)

4 A Myanmar scam compound was blown up
But its residents will simply find new bases for their operations. (NYT $)
+ Experts suspect the destruction may have been for show. (Wired $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

5 Navies across the world are investing in submarine drones 
They cost a fraction of what it takes to run a traditional manned sub. (The Guardian)
+ How underwater drones could shape a potential Taiwan-China conflict. (MIT Technology Review)

6 What to expect from China’s seemingly unstoppable innovation drive
Its extremely permissive regulators play a big role. (Economist $)
+ Is China about to win the AI race? (MIT Technology Review)

7 The UK is waging a war on VPNs
Good luck trying to persuade people to stop using them. (The Verge)

8 We’re learning more about Jeff Bezos’ mysterious clock project
He’s backed the Clock of the Long Now for years—and construction is ramping up. (FT $)
+ How aging clocks can help us understand why we age—and if we can reverse it. (MIT Technology Review)

9 Have we finally seen the first hints of dark matter?
These researchers seem to think so. (New Scientist $)

10 A helpful robot is helping archaeologists reconstruct Pompeii
Reassembling ancient frescos is fiddly and time-consuming, but less so if you’re a dextrous machine. (Reuters)

Quote of the day

“We do fail… a lot.”

—Defense company Anduril explains its move-fast-and-break-things ethos to the Wall Street Journal in response to reports its systems have been marred by issues in Ukraine.

One more thing

How to build a better AI benchmark

It’s not easy being one of Silicon Valley’s favorite benchmarks.

SWE-Bench (pronounced “swee bench”) launched in November 2024 as a way to evaluate an AI model’s coding skill. It has since quickly become one of the most popular tests in AI. A SWE-Bench score has become a mainstay of major model releases from OpenAI, Anthropic, and Google—and outside of foundation models, the fine-tuners at AI firms are in constant competition to see who can rise above the pack.

Despite all the fervor, this isn’t exactly a truthful assessment of which model is “better.” Entrants have begun to game the system—which is pushing many others to wonder whether there’s a better way to actually measure AI achievement. Read the full story.

—Russell Brandom

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Aww, these sharks appear to be playing with pool toys.
+ Strange things are happening over on Easter Island (even weirder than you can imagine) 🗿
+ Very cool—archaeologists have uncovered a Roman tomb that’s been sealed shut for 1,700 years.
+ This Japanese mass media collage is making my eyes swim, in a good way.

The Download: the fossil fuel elephant in the room, and better tests for endometriosis

27 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This year’s UN climate talks avoided fossil fuels, again

Over the past few weeks in Belem, Brazil, attendees of this year’s UN climate talks dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.

While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”

As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

New noninvasive endometriosis tests are on the rise

Endometriosis inflicts debilitating pain and heavy bleeding on more than 11% of reproductive-age women in the United States. Diagnosis takes nearly 10 years on average, partly because half the cases don’t show up on scans, and surgery is required to obtain tissue samples.

But a new generation of noninvasive tests are emerging that could help accelerate diagnosis and improve management of this poorly understood condition. Read the full story.

—Colleen de Bellefonds

This story is from the last print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI claims a teenager circumvented its safety features before ending his life
It says ChatGPT directed Adam Raine to seek help more than 100 times. (TechCrunch)
+ OpenAI is strongly refuting the idea it’s liable for the 16-year-old’s death. (NBC News)
+ The looming crackdown on AI companionship. (MIT Technology Review)

2 The CDC’s new deputy director prefers natural immunity to vaccines
And he wasn’t even the worst choice among those considered for the role. (Ars Technica)
+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review)

3 An MIT study says AI could already replace 12% of the US workforce
Researchers drew that conclusion after simulating a digital twin of the US labor market. (CNBC)
+ Separate research suggests it could replace 3 million jobs in the UK, too. (The Guardian)
+ AI usage looks unlikely to keep climbing. (Economist $)

4 An Italian defense group has created an AI-powered air shield system
It claims the system allows defenders to generate dome-style missile shields. (FT $)
+ Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies. (MIT Technology Review)

5 The EU is considering a ban on social media for under-16s
Following in the footsteps of Australia, whose own ban comes into force next month. (Politico)
+ The European Parliament wants parents to decide on access. (The Guardian)

6 Why do so many astronauts keep getting stuck in space?
America, Russia and now China have had to contend with this situation. (WP $)
+ A rescue craft for three stranded Chinese astronauts has successfully reached them. (The Register)

7 Uploading pictures of your hotel room could help trafficking victims
A new app uses computer vision to determine where pictures of generic-looking rooms were taken. (IEEE Spectrum)

8 This browser tool turns back the clock to a pre-AI slop web
Back to the golden age before November 30, 2022. (404 Media)
+ The White House’s slop posts are shockingly bad. (NY Mag $)
+ Animated neo-Nazi propaganda is freely available on X. (The Atlantic $)

9 Grok’s “epic roasts” are as tragic as you’d expect
Test it out at parties at your own peril. (Wired $)

10 Startup founders dread explaining their jobs at Thanksgiving 🍗
Yes Grandma, I work with computers. (Insider $)

Quote of the day

“AI cannot ever replace the unique gift that you are to the world.”

—Pope Leo XIV warns students about the dangers of over-relying on AI, New York Magazine reports.

One more thing

Why we should thank pigeons for our AI breakthroughs

People looking for precursors to artificial intelligence often point to science fiction or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is American psychologist B.F. Skinner’s research with pigeons in the middle of the 20th century.

Skinner believed that association—learning, through trial and error, to link an action with a punishment or reward—was the building block of every behavior, not just in pigeons but in all living organisms, including human beings.

His “behaviorist” theories fell out of favor in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the leading AI tools. Read the full story.

—Ben Crair

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I hope you had a happy, err, Green Wednesday if you partook this year.
+ Here’s how to help an endangered species from the comfort of your own home.
+ Polly wants to FaceTime—now! 📱🦜(thanks Alice!)
+ I need Macaulay Culkin’s idea for another Home Alone sequel to get greenlit, stat.

The Download: AI and the economy, and slop for the masses

26 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI is changing the economy

There’s a lot at stake when it comes to understanding how AI is changing the economy right now. Should we be pessimistic? Optimistic? Or is the situation too nuanced for that?

Hopefully, we can point you towards some answers. Mat Honan, our editor in chief, will hold a special subscriber-only Roundtables conversation with our editor at large David Rotman, and Richard Waters, Financial Times columnist, exploring what’s happening across different markets. Register here to join us at 1pm ET on Tuesday December 9.

The event is part of the Financial Times and MIT Technology Review “The State of AI” partnership, exploring the global impact of artificial intelligence. Over the past month, we’ve been running discussions between our journalists—sign up here to receive future editions every Monday.

If you’re interested in how AI is affecting the economy, take a look at: 

+ People are worried that AI will take everyone’s jobs. We’ve been here before.

+  What will AI mean for economic inequality? If we’re not careful, we could see widening gaps within countries and between them. Read the full story.

+ Artificial intelligence could put us on the path to a booming economic future, but getting there will take some serious course corrections. Here’s how to fine-tune AI for prosperity.

The AI Hype Index: The people can’t get enough of AI slop

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here, featuring everything from replacing animal testing with AI to our story on why AGI should be viewed as a conspiracy theory.

MIT Technology Review Narrated: How to fix the internet

We all know the internet (well, social media) is broken. But it has also provided a haven for marginalized groups and a place for support. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh.

That makes it worth fighting for. And yet, fixing online discourse is the definition of a hard problem.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How much AI investment is too much AI investment?
Tech companies hope to learn from beleaguered Intel. (WSJ $)
+ HP is pivoting to AI in the hopes of saving $1 billion a year. (The Guardian)
+ The European Central Bank has accused tech investors of FOMO. (FT $)

2 ICE is outsourcing immigrant surveillance to private firms
It’s incentivizing contractors with multi-million dollar rewards. (Wired $)
+ Californian residents have been traumatized by recent raids. (The Guardian)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

3 Poland plans to use drones to defend its rail network from attack
It’s blaming Russia for a recent line explosion. (FT $)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 ChatGPT could eventually have as many subscribers as Spotify
According to, erm, OpenAI. (The Information $)

5 Here’s how your phone-checking habits could shape your daily life
You’re probably underestimating just how often you pick it up. (WP $)
+ How to log off. (MIT Technology Review)

6 Chinese drugs are coming
China’s drugmakers are on the verge of making more money overseas than at home. (Economist $)

7 Uber is deploying fully driverless robotaxis on an Abu Dhabi island
Roaming 12 square miles of the popular tourist destination. (The Verge)
+ Tesla is hoping to double its robotaxi fleet in Austin next month. (Reuters)

8 Apple is set to become the world’s largest smartphone maker
After more than a decade in Samsung’s shadow. (Bloomberg $)

9 An AI teddy bear that discussed sexual topics is back on sale
But the Teddy Kumma toy is now powered by a different chatbot. (Bloomberg $)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

10 How Stranger Things became the ultimate algorithmic TV show
Its creators mashed a load of pop culture references together and created a streaming phenomenon. (NYT $)

Quote of the day

“AI is a very powerful tool—it’s a hammer and that doesn’t mean everything is a nail.”

—Marketing consultant Ryan Bearden explains to the Wall Street Journal why it pays to be discerning when using AI.

One more thing

Are we ready to hand AI agents the keys?

In recent months, a new class of agents has arrived on the scene: ones built using large language models. Any action that can be captured by text—from playing a video game using written commands to running a social media account—is potentially within the purview of this type of system.

LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy—and soon. Despite that, like chatbot LLMs, agents can be chaotic and unpredictable. Here’s what could happen as we try to integrate them into everything.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The entries for this year’s Nature inFocus Photography Awards are fantastic.
+ There’s nothing like a good karaoke sesh.
+ Happy heavenly birthday Tina Turner, who would have turned 86 years old today.
+ Stop the presses—the hotly-contested list of the world’s top 50 vineyards has officially been announced 🍇

The Download: the future of AlphaFold, and chatbot privacy concerns

25 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate

In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from game-playing AI to a secret project to predict the structures of proteins. He applied for a job.

Just three years later, Jumper and CEO Demis Hassabis had led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching lab-level accuracy, and doing it many times faster—returning results in hours instead of months.

Last year, Jumper and Hassabis shared a Nobel Prize in chemistry. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out. Read the full story.

—Will Douglas Heaven

The State of AI: Chatbot companions and the future of our privacy

—Eileen Guo & Melissa Heikkilä

Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.

Some state governments are taking notice and starting to regulate companion AI. But tellingly, one area the laws fail to address is user privacy. Read the full story.

This is the fourth edition of The State of AI, our subscriber-only collaboration between the Financial Times and MIT Technology Review. Sign up here to receive future editions every Monday.

While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the MIT Technology Review are able to read the whole thing on our site.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has signed an executive order to boost AI innovation 
The “Genesis Mission” will try to speed up the rate of scientific breakthroughs. (Politico)
+ The order directs government science agencies to aggressively embrace AI. (Axios)
+ It’s also being touted as a way to lower energy prices. (CNN)

2 Anthropic’s new AI model is designed to be better at coding
We’ll discover just how much better once Claude Opus 4.5 has been properly put through its paces. (Bloomberg $)
+ It reportedly outscored human candidates in an internal engineering test. (VentureBeat)
+ What is vibe coding, exactly? (MIT Technology Review)

3 The AI boom is keeping India hooked on coal
Leaving little chance of cleaning up Mumbai’s famously deadly pollution. (The Guardian)
+ It’s lethal smog season in New Delhi right now. (CNN)
+ The data center boom in the desert. (MIT Technology Review)

4 Teenagers are losing access to their AI companions
Character.AI is limiting the amount of time underage users can spend interacting with its chatbots. (WSJ $)
+ The majority of the company’s users are young and female. (CNBC)
+ One of OpenAI’s key safety leaders is leaving the company. (Wired $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

5 Weight-loss drugs may be riskier during pregnancy
Recipients are more likely to deliver babies prematurely. (WP $)
+ The pill version of Ozempic failed to halt Alzheimer’s progression in a trial. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

6 OpenAI is launching a new “shopping research” tool
All the better to track your consumer spending with. (CNBC)
+ It’s designed for price comparisons and compiling buyer’s guides. (The Information $)
+ The company is clearly aiming for a share of Amazon’s e-commerce pie. (Semafor)

7 LA residents displaced by wildfires are moving into prefab housing 🏠
Their new homes are cheap to build and simple to install. (Fast Company $)
+ How AI can help spot wildfires. (MIT Technology Review)

8 Why former Uber drivers are undertaking the world’s toughest driving test
They’re taking the Knowledge—London’s gruelling street test that bypasses GPS. (NYT $)

9 How to spot a fake battery
Great, one more thing to worry about. (IEEE Spectrum)

10 Where is the Trump Mobile?
Almost six months after it was announced, there’s no sign of it. (CNBC)

Quote of the day

“AI is a tsunami that is gonna wipe out everyone. So I’m handing out surfboards.”

—Filmmaker PJ Accetturo tells Ars Technica why he’s writing a newsletter advising fellow creatives how to pivot to AI tools.

One more thing

The second wave of AI coding is here

Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding.

Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. This next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it.

But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re planning a visit to Istanbul here’s hoping you like cats—the city can’t get enough of them.
+ Rest in power reggae icon Jimmy Cliff.
+ Did you know the ancient Egyptians had a pretty accurate way of testing for pregnancy?
+ As our readers in the US start prepping for Thanksgiving, spare a thought for Astoria the lovelorn turkey 🦃

The Download: how to fix a tractor, and living among conspiracy theorists

24 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the man building a starter kit for civilization

You live in a house you designed and built yourself. You rely on the sun for power, heat your home with a woodstove, and farm your own fish and vegetables. The year is 2025.

This is the life of Marcin Jakubowski, the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS).

It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit. It’s all part of his ethos that life-changing technology should be available to all, not controlled by a select few. Read the full story.

—Tiffany Ng

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.

What it’s like to find yourself in the middle of a conspiracy theory

Last week, we held a subscribers-only Roundtables discussion exploring how to cope in this new age of conspiracy theories. Our features editor Amanda Silverman and executive editor Niall Firth were joined by conspiracy expert Mike Rothschild, who explained exactly what it’s like to find yourself at the center of a conspiracy you can’t control. Watch the conversation back here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DOGE has been disbanded
Even though it’s got eight months left before its officially scheduled end. (Reuters)
+ It leaves a legacy of chaos and few measurable savings. (Politico)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

2 How OpenAI’s tweaks to ChatGPT sent some users into delusional spirals
It essentially turned a dial that increased both usage of the chatbot and the risks it poses to a subset of people. (NYT $)
+ AI workers are warning loved ones to stay away from the technology. (The Guardian)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

3 A three-year old has received the world’s first gene therapy for Hunter syndrome
Oliver Chu appears to be developing normally one year after starting therapy. (BBC)

4 Why we may—or may not—be in an AI bubble 🫧
It’s time to follow the data. (WP $)
+ Even tech leaders don’t appear to be entirely sure. (Insider $)
+ How far can the ‘fake it til you make it’ strategy take us? (WSJ $)
+ Nvidia is still riding the wave with abandon. (NY Mag $)

5 Many MAGA influencers are based in Russia, India and Nigeria
X’s new account provenance feature is revealing some interesting truths. (The Daily Beast)

6 The FBI wants to equip drones with facial recognition tech
Civil libertarians claim the plans equate to airborne surveillance. (The Intercept)
+ This giant microwave may change the future of war. (MIT Technology Review)

7 Snapchat is alerting users ahead of Australia’s under-16s social media ban
The platform will analyze an account’s “behavioral signals” to estimate a user’s age. (The Guardian)
+ An AI nudification site has been fined for skipping age checks. (The Register)
+ Millennial parents are fetishizing the notion of an offline childhood. (The Observer)

8 Activists are roleplaying ICE raids in Fortnite and Grand Theft Auto
It’s in a bid to prepare players to exercise their rights in the real world. (Wired $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

9 The JWST may have uncovered colossal stars ⭐
In fact, they may be as much as 10,000 times more massive than the sun. (New Scientist $)
+ Inside the hunt for the most dangerous asteroid ever. (MIT Technology Review)

10 Social media users are lying about brands ghosting them
Completely normal behavior. (WSJ $)
+ This would never have happened on Vine, I’ll tell you now. (The Verge)

Quote of the day

“I can’t believe we have to say this, but this account has only ever been run and operated from the United States.” 

—The US Department of Homeland Security’s X account attempts to end speculation surrounding its social media origins, the New York Times reports.

One more thing

This company is planning a lithium empire from the shores of the Great Salt Lake

On a bright afternoon in August, the shore of Utah’s Great Salt Lake looks like something out of a science fiction film set in a scorching alien world.

This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will shake up the United States’ efforts to pry control over the global supply of lithium, the so-called “white gold” needed for electric vehicles and batteries, away from China.

The startup is in a race to commercialize a new, less environmentally-damaging way to extract lithium from rocks. If everything pans out, it could significantly increase domestic supply at a crucial moment for the nation’s lithium extraction industry. Read the full story.

—Alexander C. Kaufman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love the thought of clever crows putting their smarts to use picking up cigarette butts (thanks Alice!)
+ Talking of brains, sea urchins have a whole lot more than we originally suspected.
+ Wow—a Ukrainian refugee has won an elite-level sumo competition in Japan.
+ How to make any day feel a little bit brighter.

The Download: the secrets of vitamin D, and an AI party in Africa

21 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

We’re learning more about what vitamin D does to our bodies

At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me.

But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

If you’re interested in other stories from our biotech writers, check out some of their most recent work:

+ Advances in organs on chips, digital twins, and AI are ushering in a new era of research and drug development that could help put a stop to animal testing. Read the full story.

+ Here’s the latest company planning for gene-edited babies.

+ Preventing the common cold is extremely tricky—but not impossible. Here’s why we don’t have a cold vaccine. Yet.

+ Scientists are creating the beginnings of bodies without sperm or eggs. How far should they be allowed to go? Read the full story.

+ This retina implant lets people with vision loss do a crossword puzzle. Read the full story.

Partying at one of Africa’s largest AI gatherings

It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. Deep Learning Indaba is an annual AI conference where Africans present their research and technologies they’ve built, mingling with friends as a giant screen blinks with videos created with generative AI.

The main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. But the organizers hope to see more homegrown ventures create opportunities within Africa. Read the full story.

—Abdullahi Tsanni

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google’s new Nano Banana Pro generates convincing propaganda
The company’s latest image-generating AI model seems to have few guardrails. (The Verge)
+ Google wants its creations to be slicker than ever. (Wired $)
+ Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent. (MIT Technology Review)

2 Taiwan says the US won’t punish it with high chip tariffs
In fact, official Wu Cheng-wen says Taiwan will help support the US chip industry in exchange for tariff relief. (FT $)

3 Mental health support is one of the most dangerous uses for chatbots
They fail to recognize psychiatric conditions and can miss critical warning signs. (WP $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 It costs an average of $17,121 to deport one person from the US
But in some cases it can cost much, much more. (Bloomberg $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

5 Grok is telling users that Elon Musk is the world’s greatest lover
What’s it basing that on, exactly? (Rolling Stone $)
+ It also claims he’s fitter than basketball legend LeBron James. Sure. (The Guardian)

6 Who’s really in charge of US health policy?
RFK Jr. and FDA commissioner Marty Makary are reportedly at odds behind the scenes. (Vox)
+ Republicans are lightly pushing back on the CDC’s new stance on vaccines. (Politico)
+ Why anti-vaxxers are seeking to discredit Danish studies. (Bloomberg $)
+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review)

7 Inequality is worsening in San Francisco
As billionaires thrive, hundreds of thousands of others are struggling to get by. (WP $)
+ A massive airship has been spotted floating over the city. (SF Gate)

8 Donald Trump is thrusting obscure meme-makers into the mainstream
He’s been reposting flattering AI-generated memes by the dozen. (NYT $)
+ MAGA YouTube stars are pushing a boom in politically charged ads. (Bloomberg $)

9 Moss spores survived nine months in space
And they could remain reproductively viable for another 15 years. (New Scientist $)
+ It suggests that some life on Earth has evolved to endure space conditions. (NBC News)
+ The quest to figure out farming on Mars. (MIT Technology Review)

10 Does AI really need a physical shape?
It doesn’t really matter—companies are rushing to give it one anyway. (The Atlantic $)

Quote of the day

“At some point you’ve got to wonder whether the bug is a feature.”

—Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, ponders xAI and Grok’s proclivity for surfacing Elon Musk-friendly and/or far-right sources, the Washington Post reports.

One more thing

The AI lab waging a guerrilla war over exploitative AI

Back in 2022, the tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI’s DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.

But artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work.

Ben Zhao, a computer security researcher at the University of Chicago, was listening. He and his colleagues have built arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping: two tools called Glaze and Nightshade that add barely perceptible perturbations to an image’s pixels so that machine-learning models cannot read them properly.

But Zhao sees the tools as part of a battle to slowly tilt the balance of power from large corporations back to individual creators. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re ever tempted to try and recreate a Jackson Pollock painting, maybe you’d be best leaving it to the kids.
+ Scientists have discovered that lions have not one, but two distinct types of roars 🦁
+ The relentless rise of the quarter-zip must be stopped!
+ Pucker up: here’s a brief history of kissing 💋

The Download: what’s next for electricity, and living in the conspiracy age

20 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Three things to know about the future of electricity

The International Energy Agency recently released the latest version of the World Energy Outlook, the annual report that takes stock of the current state of global energy and looks toward the future.

It contains some interesting insights and a few surprising figures about electricity, grids, and the state of climate change. Let’s dig into some numbers.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

How to survive in the new age of conspiracies

Everything is a conspiracy theory now. Our latest series “The New Conspiracy Age” delves into how conspiracies have gripped the White House, turning fringe ideas into dangerous policy, and how generative AI is altering the fabric of truth.

If you’re interested in hearing more about how to survive in this strange new age, join our features editor Amanda Silverman and executive editor Niall Firth today at 1pm ET for a subscriber-exclusive Roundtables conversation. They’ll be joined by conspiracy expert Mike Rothschild, who’s written a fascinating piece for us about what it’s like to find yourself at the heart of a conspiracy theory. Register now to join us!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump is poised to ban state AI laws
The US President is considering signing an order to give the federal government unilateral power over regulating AI. (The Verge)
+ It would give the Justice Department power to sue dissenting states. (WP $)
+ Critics claim the draft undermines trust in the US’s ability to make AI safe. (Wired $)
+ It’s not just America—the EU fumbled its attempts to rein in AI, too. (FT $)

2 The CDC is making false claims about a link between vaccines and autism
Despite previously spending decades fighting misinformation connecting them. (WP $)
+ The National Institutes of Health is parroting RFK Jr’s messaging, too. (The Atlantic $)

3 China is going all-in on autonomous vehicles
Which is bad news for its millions of delivery drivers. (FT $)
+ It’s also throwing its full weight behind its native EV industry. (Rest of World)

4 Major music labels have inked a deal with an AI streaming service
Klay users will be able to remodel songs from the likes of Universal using AI. (Bloomberg $)
+ What happens next is anyone’s guess. (Billboard $)
+ AI is coming for music, too. (MIT Technology Review)

5 How quantum sensors could overhaul GPS navigation
Current GPS is vulnerable to spoofing and jamming. But what comes next? (WSJ $)
+ Inside the race to find GPS alternatives. (MIT Technology Review)

6 There’s a divide inside the community of people in relationships with chatbots 
Some users assert their love interests are real—to the concern of others. (NY Mag $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

7 There’s still hope for a functional cure to HIV
Even in the face of crippling funding cuts. (Knowable Magazine)
+ Breakthrough drug lenacapavir is being rolled out in parts of Africa. (NPR)
+ This annual shot might protect against HIV infections. (MIT Technology Review)

8 Is it possible to reverse years of AI brainrot?
A new wave of memes is fighting the good fight. (Wired $)
+ How to fix the internet. (MIT Technology Review)

9 Tourists fell for an AI-generated Christmas market outside Buckingham Palace 🎄
If it looks too good to be true, it probably is. (The Guardian)
+ It’s unclear who is behind the pictures, which spread on Instagram. (BBC)

10 Here’s what people return to Amazon
A whole lot of polyester clothing, by the sounds of it. (NYT $)

Quote of the day

“I think we’re in an LLM bubble, and I think the LLM bubble might be bursting next year.”

—Hugging Face co-founder and CEO Clem Delangue has a slightly different take on the reports we’re in an AI bubble, TechCrunch reports.

One more thing

Inside a new quest to save the “doomsday glacier”

The Thwaites glacier is a fortress larger than Florida, a wall of ice that reaches nearly 4,000 feet above the bedrock of West Antarctica, guarding the low-lying ice sheet behind it.

But a strong, warm ocean current is weakening its foundations and accelerating its slide into the sea. Scientists fear the waters could topple the walls in the coming decades, kick-starting a runaway process that would crack up the West Antarctic Ice Sheet, marking the start of a global climate disaster. As a result, they are eager to understand just how likely such a collapse is, when it could happen, and if we have the power to stop it. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ As Christmas approaches, micro-gifting might be a fun new tradition to try out.
+ I’ve said it before and I’ll say it again—movies are too long these days.
+ If you’re feeling a bit existential this morning, these books are a great starting point for finding a sense of purpose.
+ This is a fun list of the internet’s weird and wonderful obsessive lists.

The Download: de-censoring DeepSeek, and Gemini 3

19 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Quantum physicists have shrunk and “de-censored” DeepSeek R1

The news: A group of quantum physicists at Spanish firm Multiverse Computing claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. 

Why it matters: In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.

How they did it: Multiverse Computing specializes in quantum-inspired AI techniques, which it used to create DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original. The approach also allowed the researchers to identify and remove Chinese censorship, so that the model answered sensitive questions in much the same way as Western models do. Read the full story.

—Caiwei Chen

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent.

Gemini Agent is an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Read the full story.

—Caiwei Chen

MIT Technology Review Narrated: Why climate researchers are taking the temperature of mountain snow

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada.

The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods.

A new generation of tools, techniques, and models promises to improve water forecasts, and help California and other states manage in the face of increasingly severe droughts and flooding. However, observers fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Yesterday’s Cloudflare outage was not triggered by a hack
An error in its bot management system was to blame. (The Verge)
+ ChatGPT, X and Uber were among the services that dropped. (WP $)
+ It’s another example of the dangers of having a handful of infrastructure providers. (WSJ $)
+ Today’s web is incredibly fragile. (Bloomberg $)

2 Donald Trump has called for a federal AI regulatory standard
Instead of allowing each state to make its own laws. (Axios)
+ He claims the current approach risks slowing down AI progress. (Bloomberg $)

3 Meta has won the antitrust case that threatened to spin off Instagram
It’s one of the most high-profile cases in recent years. (FT $)
+ A judge ruled that Meta doesn’t hold a social media monopoly. (BBC)

4 The Three Mile Island nuclear plant is making a comeback
It’s the lucky recipient of a $1 billion federal loan to kickstart the facility. (WP $)
+ Why Microsoft made a deal to help restart Three Mile Island. (MIT Technology Review)

5 Roblox will block children from speaking to adult strangers
The gaming platform is facing fresh lawsuits alleging it is failing to protect young users from online predators. (The Guardian)
+ But we don’t know much about how accurate its age verification is. (CNN)
+ All users will have to submit a selfie or an ID to use chat features. (Engadget)

6 Boston Dynamics’ robot dog is becoming a widespread policing tool
It’s deployed by dozens of US and Canadian bomb squads and SWAT teams. (Bloomberg $)

7 A tribally-owned network of EV chargers is nearing completion
It’s part of Standing Rock reservation’s big push for clean energy. (NYT $)

8 Resist the temptation to use AI to cheat at conversations
It makes it much more difficult to forge a connection. (The Atlantic $)

9 Amazon wants San Francisco residents to ride its robotaxis for free
It’s squaring up against Alphabet’s Waymo in the city for the first time. (CNBC)
+ But its cars look very different to traditional vehicles. (LA Times $)
+ Zoox is operating around 50 robotaxis across SF and Las Vegas. (The Verge)

10 TikTok’s new setting allows you to filter out AI-generated clips
Farewell, sweet slop. (TechCrunch)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day

“The rapids of social media rush along so fast that the Court has never even stepped into the same case twice.”

—Judge James Boasberg, who rejected the Federal Trade Commission’s claim that Meta had created an illegal social media monopoly, acknowledges the law’s failure to keep up with technology, Politico reports.

One more thing

Namibia wants to build the world’s first hydrogen economy

Factories have used fossil fuels to process iron ore for three centuries, and the climate has paid a heavy price: According to the International Energy Agency, the steel industry today accounts for 8% of carbon dioxide emissions.

But it turns out there is a less carbon-­intensive alternative: using hydrogen. Unlike coal or natural gas, which release carbon dioxide as a by-product, this process releases water. And if the hydrogen itself is “green,” the climate impact of the entire process will be minimal.
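
(In rough chemical terms, and as a simplified textbook sketch rather than a description of HyIron’s exact process: conventional ironmaking reduces the ore with carbon monoxide from coal, Fe2O3 + 3CO → 2Fe + 3CO2, while direct reduction with hydrogen yields water instead, Fe2O3 + 3H2 → 2Fe + 3H2O.)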

HyIron, which has a site in the Namib desert, is one of a handful of companies around the world that are betting green hydrogen can help the $1.8 trillion steel industry clean up its act. The question now is whether Namibia’s government, its trading partners, and hydrogen innovators can work together to build the industry in a way that satisfies the world’s appetite for cleaner fuels—and also helps improve lives at home. Read the full story.

—Jonathan W. Rosen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This art installation in Paris revolves around porcelain bowls clanging against each other in a pool of water—it’s oddly hypnotic.
+ Feeling burnt out? Get down to your local sauna for a quick reset.
+ New York’s subway system is something else.
+ Your dog has ancient origins. No, really!

The Download: AI-powered warfare, and how embryo care is changing

18 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: How war will be changed forever

—Helen Warrell & James O’Donnell

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. 

But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Read the full story.

This is the third edition of The State of AI, our subscriber-only collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power.

Every Monday, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the MIT Technology Review are able to read the whole thing.
Sign up here to receive future editions every Monday.

Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up.

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. Read the full story.

—Amanda Smith

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech’s job cuts are a warning sign
They’re a canary down the mine for other industries. (WP $)
+ Americans appear to feel increasingly unsettled by AI. (WSJ $)
+ Global fund managers worry companies are overinvesting in the technology. (FT $)

2 Iran is attempting to stimulate rain to end its deadly drought
But critics warn that cloud seeding is a challenging process. (New Scientist $)
+ Parts of western Iran are now experiencing flooding. (Reuters)
+ Why it’s so hard to bust the weather control conspiracy theory. (MIT Technology Review)

3 Air taxi startups may produce new aircraft for war zones
The US Army has announced its intentions to acquire most of its weapons from startups, not major contractors. (The Information $)
+ US firm Joby Aviation is launching flying taxis in Dubai. (NBC News)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 Weight-loss drug maker Eli Lilly is likely to cross a trillion-dollar valuation
As it prepares to launch a pill alternative to its injections. (WSJ $)
+ Arch rival Novo Nordisk A/S is undercutting the company to compete. (Bloomberg $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

5 What’s going on with the US TikTok ban?
Even the lawmakers in charge don’t seem to know. (The Verge)

6 It’s getting harder to grow cocoa
Mass tree felling and lower rainfall in the Congo Basin are to blame. (FT $)
+ Industrial agriculture activists are everywhere at COP30. (The Guardian)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

7 Russia is cracking down on its critical military bloggers
Armchair critics are facing jail time if they refuse to apologize. (Economist $)

8 Why the auto industry is so obsessed with humanoid robots
It’s not just Tesla—plenty of others want to get in on the act. (The Atlantic $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

9 Indian startups are challenging ChatGPT’s AI dominance
They support a far wider range of languages than the large AI firms’ models. (Rest of World)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

10 These tiny sensors track butterflies on their journey to Mexico 🦋
Scientists hope it’ll shed some light on their mysterious life cycles. (NYT $)

Quote of the day

“I think no company is going to be immune, including us.” 

—Sundar Pichai, CEO of Google, warns the BBC about the precarious nature of the AI bubble.

One more thing

How a 1980s toy robot arm inspired modern robotics

—Jon Keegan

As a child of an electronic engineer, I spent a lot of time in our local Radio Shack as a kid. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was a legit robotic arm. And the bold look and function of Armatron made quite an impression on many young kids who would one day have a career in robotics. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The US Library of Congress has attained some handwritten drafts of iconic songs from The Wizard of Oz.
+ This interesting dashboard tracks the top 500 musical artists in the world right now—some of the listings may surprise you (or just make you feel really old).
+ Cult author Chris Kraus shares what’s floating her boat right now.
+ The first images of the forthcoming Legend of Zelda film are here!

The Download: the risk of falling space debris, and how to debunk a conspiracy theory

17 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What is the chance your plane will be hit by space debris?

The risk of flights being hit by space junk is still small, but it’s growing.

About three pieces of old space equipment—used rockets and defunct satellites—fall into Earth’s atmosphere every day, according to estimates by the European Space Agency. By the mid-2030s, there may be dozens thanks to the rise of megaconstellations in orbit.

So far, space debris hasn’t injured anybody—in the air or on the ground. But multiple close calls have been reported in recent years.

Some estimates put the risk of a single human death or injury caused by a space debris strike on the ground at around 10% per year by 2035. That would mean a better-than-even chance that someone on Earth will be hit by space junk about every decade. Find out more.

—Tereza Pultarova

This story is part of MIT Technology Review Explains: our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read the rest of the series here.
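
(A quick back-of-envelope check on that decade figure, assuming the roughly 10%-per-year estimate holds and that years are independent: the chance of at least one such casualty over ten years is 1 − 0.9^10 ≈ 65%, which is indeed better than even.)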

Chatbots are surprisingly effective at debunking conspiracy theories

—Thomas Costello, Gordon Pennycook & David Rand

Many people believe that you can’t talk conspiracists out of their beliefs. 

But that’s not necessarily true. Our research shows that many conspiracy believers do respond to evidence and arguments—information that is now easy to deliver in the form of a tailored conversation with an AI chatbot.

This is good news, given the outsize role that unfounded conspiracy theories play in today’s political landscape. So while there are widespread and legitimate concerns that generative AI is a potent tool for spreading disinformation, our work shows that it can also be part of the solution. Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China is quietly expanding its remote nuclear test site
In the wake of Donald Trump announcing America’s intentions to revive similar tests. (WP $)
+ A White House memo has accused Alibaba of supporting Chinese operations. (FT $)

2 Jeff Bezos is becoming co-CEO of a new AI startup
Project Prometheus will focus on AI for building computers, aerospace and vehicles. (NYT $)

3 AI-powered toys are holding inappropriate conversations with children 
Including how to find dangerous objects such as pills and knives. (The Register)
+ Chatbots are unreliable and unpredictable, whether embedded in toys or not. (Futurism)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

4 Big Tech is warming to the idea of data centers in space
They come with a lot less red tape than their Earth-bound counterparts. (WSJ $)
+ There are a huge number of data centers mired in the planning stage. (WSJ $)
+ Should we be moving data centers to space? (MIT Technology Review)

5 The mafia is recruiting via TikTok
Some bosses are even using the platform to control gangs from behind bars. (Economist $)

6 How to resist AI in your workplace
Like most things in life, there’s power in numbers. (Vox)

7 How China’s EV fleet could become a giant battery network
If economic troubles don’t get in the way, that is. (Rest of World)
+ EV sales are on the rise in South America. (Reuters)
+ China’s energy dominance in three charts. (MIT Technology Review)

8 Inside the unstoppable rise of the domestic internet
Control-hungry nations are following China’s lead in building closed platforms. (NY Mag $)
+ Can we repair the internet? (MIT Technology Review)

9 Search traffic? What search traffic?
These media startups have found a way to thrive without Google. (Insider $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

10 Paul McCartney has released a silent track to protest AI’s creep into music
That’ll show them! (The Guardian)
+ AI is coming for music, too. (MIT Technology Review)

Quote of the day

“All the parental controls in the world will not protect your kids from themselves.”

—Samantha Broxton, a parenting coach and consultant, tells the Washington Post why educating children around the risks of using technology is the best way to help them protect themselves.

One more thing

Inside the controversial tree farms powering Apple’s carbon neutral goal

Apple (and its peers) are planting vast forests of eucalyptus trees in Brazil to try to offset their climate emissions, striking some of the largest-ever deals for carbon credits in the process.

The tech behemoth is betting that planting millions of eucalyptus trees in Brazil will be the path to a greener future. Some ecologists and local residents are far less sure.

The big question is: Can Latin America’s eucalyptus be a scalable climate solution? Read the full story.

—Gregory Barber

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Shepard Fairey’s retrospective show in LA looks very cool.
+ Check out these fascinating scientific breakthroughs that have been making waves over the past 25 years.
+ Good news—sweet little puffins are making a comeback in Ireland.
+ Maybe we should all be getting into Nordic walking.

The Download: how AI really works, and phasing out animal testing

14 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI’s new LLM exposes the secrets of how AI really works

The news: ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models.

Why it matters: It’s a big deal, because today’s LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks. Read the full story.

—Will Douglas Heaven

Google DeepMind is using Gemini to train agents inside Goat Simulator 3

Google DeepMind has built a new video-game-playing agent called SIMA 2 that can navigate and solve problems in 3D virtual worlds. The company claims it’s a big step toward more general-purpose agents and better real-world robots.   

The company first demoed SIMA (which stands for “scalable instructable multiworld agent”) last year. But this new version has been built on top of Gemini, the firm’s flagship large language model, which gives the agent a huge boost in capability. Read the full story.

—Will Douglas Heaven

These technologies could help put a stop to animal testing

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030.

It’s good news for activists and scientists who don’t want to test on animals. And it’s timely too: In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on animals. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Chinese hackers used Anthropic’s AI to conduct an espionage campaign   
It automated a number of attacks on corporations and governments in September. (WSJ $)
+ The AI was able to handle the majority of the hacking workload itself. (NYT $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

2 Blue Origin successfully launched and landed its New Glenn rocket
It managed to deploy two NASA satellites into space without a hitch. (CNN)
+ The New Glenn is the company’s largest reusable rocket. (FT $)
+ The launch had been delayed twice before. (WP $)

3 Brace yourself for flu season
It started five weeks earlier than usual in the UK, and the US is next. (Ars Technica)
+ Here’s why we don’t have a cold vaccine. Yet. (MIT Technology Review)

4 Google is hosting a Border Protection facial recognition app    
The app tells officials whether to contact ICE about identified immigrants. (404 Media)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

5 OpenAI is trialling group chats in ChatGPT
It’d essentially make AI a participant in a conversation of up to 20 people. (Engadget)

6 A TikTok stunt sparked debate over how charitable America’s churches really are
Content creator Nikalie Monroe asked churches for help feeding her baby. Very few stepped up. (WP $)

7 Indian startups are attempting to tackle air pollution
But their solutions are far beyond the means of the average Indian household. (NYT $)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

8 An AI tool could help reduce wasted efforts to transplant organs
It predicts how likely the would-be recipient is to die during the brief transplantation window. (The Guardian)
+ Putin says organ transplants could grant immortality. Not quite. (MIT Technology Review)

9 3D-printing isn’t making prosthetics more affordable
It turns out that plastic prostheses are often really uncomfortable. (IEEE Spectrum)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)

10 What happens when relationships with AI fall apart
Can you really file for divorce from an LLM? (Wired $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

Quote of the day

“It’s a funky time.”

—Aileen Lee, founder and managing partner of Cowboy Ventures, tells TechCrunch the AI boom has torn up the traditional investment rulebook.

One more thing

Restoring an ancient lake from the rubble of an unfinished airport in Mexico City

Weeks after Mexican President Andrés Manuel López Obrador took office in 2018, he controversially canceled ambitious plans to build an airport on the deserted site of the former Lake Texcoco—despite the fact it was already around a third complete.

Instead, he tasked Iñaki Echeverria, a Mexican architect and landscape designer, with turning it into a vast urban park, an artificial wetland that aims to transform the future of the entire Valley region.

But as López Obrador’s presidential term nears its end, the plans for Lake Texcoco’s rebirth could yet vanish. Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Maybe Gen Z is onto something when it comes to vibe dating.
+ Trust AC/DC to give the fans what they want, performing Jailbreak for the first time since 1991.
+ Nieves González, the artist behind Lily Allen’s new album cover, has an eye for detail.
+ Here’s what AI determines is a catchy tune.

The Download: AI to measure pain, and how to deal with conspiracy theorists

13 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI is changing how we quantify pain

Researchers around the world are racing to turn pain—medicine’s most subjective vital sign—into something a camera or sensor can score as reliably as blood pressure.

The push has already produced PainChek—a smartphone app that scans people’s faces for tiny muscle movements and uses artificial intelligence to output a pain score—which has been cleared by regulators on three continents and has logged more than 10 million pain assessments. Other startups are beginning to make similar inroads.

The way we assess pain may finally be shifting, but when algorithms measure our suffering, does that change the way we treat it? Read the full story.

—Deena Mousa

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about our bodies. If you haven’t already, subscribe now to receive future issues once they land.

How to help friends and family dig out of a conspiracy theory black hole

—Niall Firth 

Someone I know became a conspiracy theorist seemingly overnight.

It was during the pandemic. They suddenly started posting daily on Facebook about the dangers of covid vaccines and masks, warning of an attempt to control us.

As a science and technology journalist, I felt that my duty was to respond. I tried, but all I got was derision. Even now I still wonder: Are there things I could have done differently to talk them back down and help them see sense? 

I gave Sander van der Linden, professor of social psychology in society at the University of Cambridge, a call to ask: What would he advise if family members or friends show signs of having fallen down the rabbit hole? Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here. It’s also part of our How To series, giving you practical advice to help you get things done. 

If you’re interested in hearing more about how to survive in the age of conspiracies, join our features editor Amanda Silverman and executive editor Niall Firth for a subscriber-exclusive Roundtable conversation with conspiracy expert Mike Rothschild. It’s at 1pm ET on Thursday November 20—register now to join us!

Google is still aiming for its “moonshot” 2030 energy goals

—Casey Crownhart

Last week, we hosted EmTech MIT, MIT Technology Review’s annual flagship conference in Cambridge, Massachusetts. As you might imagine, some of this climate reporter’s favorite moments came in the climate sessions. I was listening especially closely to my colleague James Temple’s discussion with Lucia Tian, head of advanced energy technologies at Google.

They spoke about the tech giant’s growing energy demand and what sorts of technologies the company is looking at to help meet it. In case you weren’t able to join us, let’s dig into that session and consider how the company is thinking about energy in the face of AI’s rapid rise. Read the full story.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ChatGPT is now “warmer and more conversational”
But it’s also slightly more willing to discuss sexual and violent content. (The Register)
+ ChatGPT has a very specific writing style. (WP $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

2 The US could deny visas to visitors with obesity, cancer or diabetes
As part of its ongoing efforts to stem the flow of people trying to enter the country. (WP $)

3 Microsoft is planning to create its own AI chip
And it’s going to use OpenAI’s internal chip-building plans to do it. (Bloomberg $)
+ The company is working on a colossal data center in Atlanta. (WSJ $)

4 Early AI agent adopters are convinced they’ll see a return on their investment soon 
Mind you, they would say that. (WSJ $)
+ An AI adoption riddle. (MIT Technology Review)

5 Waymo’s robotaxis are hitting American highways
Until now, they’ve typically gone out of their way to avoid them. (The Verge)
+ Its vehicles will now reach speeds of up to 65 miles per hour. (FT $)
+ Waymo is proving long-time detractor Elon Musk wrong. (Insider $)

6 A new Russian unit is hunting down Ukraine’s drone operators
It’s tasked with killing the pilots behind Ukraine’s successful attacks. (FT $)
+ US startup Anduril wants to build drones in the UAE. (Bloomberg $)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

7 Anthropic’s Claude successfully controlled a robot dog
It’s important to know what AI models may do when given access to physical systems. (Wired $)

8 Grok briefly claimed Donald Trump won the 2020 US election
As reliable as ever, I see. (The Guardian)

9 The Northern Lights are playing havoc with satellites
Solar storms may look spectacular, but they make it harder to keep tabs on space. (NYT $)
+ Seriously though, they look amazing. (The Atlantic $)
+ NASA’s new AI model can predict when a solar storm may strike. (MIT Technology Review)

10 Apple users can now use digital versions of their passports
But it’s strictly for internal flights within the US only. (TechCrunch)

Quote of the day

“I hope this mistake will turn into an experience.”

—Vladimir Vitukhin, chief executive of the company behind Russia’s first anthropomorphic robot AIDOL, offers a philosophical response to the machine falling flat on its face during a reveal event, the New York Times reports.

One more thing

Welcome to the oldest part of the metaverse

Headlines treat the metaverse as a hazy dream yet to be built. But if it’s defined as a network of virtual worlds we can inhabit, its oldest corner has already been running for 25 years.

It’s a medieval fantasy kingdom created for the online role-playing game Ultima Online. It was the first to simulate an entire world: a vast, dynamic realm where players could interact with almost anything, from fruit on trees to books on shelves.

Ultima Online has already endured a quarter-century of market competition, economic turmoil, and political strife. So what can this game and its players tell us about creating the virtual worlds of the future? Read the full story.

—John-Clark Levin

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Unlikely duo Sting and Shaggy are starring together in a New York musical.
+ Barry Manilow was almost in Airplane!? That would be an entirely different kind of flying, altogether ✈
+ What makes someone sexy? Well, that depends.
+ Keep an eye on those pink dolphins, they’re notorious thieves.

The Download: how to survive a conspiracy theory, and moldy cities

12 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What it’s like to be in the middle of a conspiracy theory (according to a conspiracy theory expert)

—Mike Rothschild is a journalist and an expert on the growth and impact of conspiracy theories and disinformation.

It’s something of a familiar cycle by now: Tragedy hits; rampant misinformation and conspiracy theories follow. It’s often even more acute in the case of a natural disaster, when conspiracy theories about what “really” caused the calamity run right into culture-war-driven climate change denialism. Put together, these theories obscure real causes while elevating fake ones.

I’ve studied these ideas extensively, having spent the last 10 years writing about conspiracy theories and disinformation as a journalist and researcher. I’ve covered everything from the rise of QAnon to whether Donald Trump faked his assassination attempt. I’ve written three books, testified to Congress, and even written a report for the January 6th Committee. 

Still, I’d never lived it. Not until my house in Altadena, California, burned down. Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here. It’s also featured in this week’s MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts.

If you’d like to hear more from Mike, he’ll be joining our features editor Amanda Silverman and executive editor Niall Firth for a subscriber-exclusive Roundtable conversation exploring how we can survive in the age of conspiracies. It’s at 1pm ET on Thursday November 20—register now to join us!

This startup thinks slime mold can help us design better cities

It is a yellow blob with no brain, yet some researchers believe a curious organism known as slime mold could help us build more resilient cities.

Humans have been building cities for 6,000 years, but slime mold has been around for 600 million. The team behind a new startup called Mireta wants to translate the organism’s biological superpowers into algorithms that might help improve transit times, alleviate congestion, and minimize climate-related disruptions in cities worldwide. Read the full story.

—Elissaveta M. Brandon

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about our bodies. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US government officials are skipping COP30
And American corporate executives are following their lead. (NYT $)
+ Protestors stormed the climate talks in Brazil. (The Guardian)
+ Gavin Newsom took aim at Donald Trump’s climate policies onstage. (FT $)

2 The UK may assess AI models for their ability to generate CSAM
Its government has suggested amending a legal bill to enable the tests. (BBC)
+ US investigators are using AI to detect child abuse images made by AI. (MIT Technology Review)

3 Google is suing a group of Chinese hackers
It claims they’re selling software to enable criminal scams. (FT $)
+ The group allegedly sends colossal text message phishing attacks. (CBS News)

4 A major ‘cryptoqueen’ criminal has been jailed
Qian Zhimin used money stolen from Chinese pensioners to buy cryptocurrency now worth billions. (BBC)
+ She defrauded her victims through an elaborate Ponzi scheme. (CNN)

5 Carbon capture’s creators fear it’s being misused
Overreliance on the method could breed overconfidence and cause countries to delay reducing emissions. (Bloomberg $)
+ Big Tech’s big bet on a controversial carbon removal tactic. (MIT Technology Review)

6 The UK will use AI to phase out animal testing
3D bioprinted human tissues could also help to speed up the process. (The Guardian)
+ But the AI boom is looking increasingly precarious. (WSJ $)

7 Louisiana is dealing with a whooping cough outbreak
Two infants have died to date from the wholly preventable disease. (Undark)

8 Here’s how ordinary people use ChatGPT
Emotional support and discussions crop up regularly. (WP $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

9 Inside the search for lost continents
A newly discovered mechanism is shedding light on why they may have vanished. (404 Media)
+ How environmental DNA is giving scientists a new way to understand our world. (MIT Technology Review)

10 AI is taking Gen Z’s entry-level jobs
Especially in traditionally graduate-friendly consultancies. (NY Mag $)
+ What the Industrial Revolution can teach us about how to handle AI. (Knowable Magazine)
+ America’s corporate boards are stumbling in the dark. (WSJ $)

Quote of the day

“We can’t eat money.”

—Nato, an Indigenous leader from the Tupinamba community, tells Reuters why they are protesting at the COP30 climate summit in Brazil against any potential sale of their land.

One more thing

How K-pop fans are shaping elections around the globe

Back in the early ’90s, Korean pop music, known as K-pop, was largely confined to its native South Korea. It’s since exploded around the globe into an international phenomenon, emphasizing choreography and elaborate performance.

It’s made bands like Girls Generation, EXO, BTS, and Blackpink into household names, and inspired a special brand of particularly fierce devotion in their fans.

Now, those same fandoms have learned how to use their digital skills to advocate for social change and pursue political goals—organizing acts of civil resistance, donating generously to charity, and even foiling white supremacist attempts to spread hate speech. Read the full story.

—Soo Youn

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These sucker fish are having the time of their lives hitching a ride on a whale.
+ Next time you fly, ditch the WiFi. I know I will.
+ I love this colossal interactive gif.
+ The hottest scent in perfumery right now? Smelling like a robot, apparently.

The Download: surviving extreme temperatures, and the big whale-wind turbine conspiracy

11 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The quest to find out how our bodies react to extreme temperatures

Climate change is subjecting vulnerable people to temperatures that push their limits. In 2023, about 47,000 heat-related deaths are believed to have occurred in Europe. Researchers estimate that climate change could add an extra 2.3 million European heat deaths this century. That’s heightened the stakes for solving the mystery of just what happens to bodies in extreme conditions.

While we broadly know how people thermoregulate, the science of keeping warm or cool is mottled with blind spots. Researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. Read the full story.

—Max G. Levy

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

Whales are dying. Don’t blame wind turbines.

Whale deaths have become a political flashpoint. There are currently three active mortality events for whales in the Atlantic, meaning clusters of deaths that experts consider unusual. And Republican lawmakers, conservative think tanks, and—most notably—President Donald Trump (a longtime enemy of wind power) are making dubious claims that offshore wind farms are responsible.

But any finger-pointing at wind turbines for whale deaths ignores the fact that whales have been washing up on beaches since long before the giant machines were rooted in the ocean floor. This is something that has always happened. And the scientific consensus is clear: There’s no evidence that wind farms are the cause of recent increases in whale deaths. Read the full story.

—Casey Crownhart

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.

The State of AI: Energy is king, and the US is falling behind

In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying in the US, where massive data centers are waiting to come online. It doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.

It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace.

If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China. Read the full story.

—Casey Crownhart & Pilita Clark

This is from The State of AI, our subscriber-only collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power.

Every Monday for the next four weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the magazine are able to read the whole thing.
Sign up here to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How China narrowed its AI divide with the US
America still has a clear lead—but for how long? (WSJ $)
+ The AI boom won’t offset tariffs and America’s immigration crackdown forever. (FT $)
+ How quickly is AI really likely to progress? (Economist $)
+ Is China about to win the AI race? (MIT Technology Review)

2 Anthropic is due to turn a profit much faster than OpenAI
The two companies are taking very different approaches to making money. (WSJ $)
+ OpenAI has lured Intel’s AI chief away. (Bloomberg $)

3 The EU is setting up a new intelligence sharing unit
It’s a bid to shore up intel in the wake of Donald Trump’s plans to reduce security support for Europe. (FT $)

4 Trump officials are poised to suggest oil drilling off the coast of California
That’s likely to rile the state’s politicians and leaders. (WP $)
+ What role should oil and gas companies play in climate tech? (MIT Technology Review)

5 America’s cyber defenses are poor
Repeated cuts and mass layoffs are making it harder to protect the nation. (The Verge)

6 China is on track to hit its peak CO2 emissions target early
Although it’s likely to miss its goal for cutting carbon intensity. (The Guardian)
+ World leaders are heading to COP30 in Brazil this week. (New Yorker $)

7 OpenAI cannot use song lyrics without a license
That’s what a German court has decided, after siding with a music rights society. (Reuters)
+ OpenAI is no stranger to legal proceedings. (The Atlantic $)
+ AI is coming for music. (MIT Technology Review)

8 A small Michigan town is fighting a proposed AI data center
The planned center is part of a collaboration between the University of Michigan and nuclear weapons scientists. (404 Media)
+ Here’s where America’s data centers should be built instead. (Wired $)
+ Communities in Latin America are pushing back, too. (The Guardian)
+ Should we be moving data centers to space? (MIT Technology Review)

9 AI models can’t tell the time ⏰
Analog clocks leave them completely stumped. (IEEE Spectrum)

10 ChatGPT is giving daters the ick
These refuseniks don’t want anything to do with AI, or love interests who use it. (The Guardian)

Quote of the day

“I never imagined that making a cup of tea or obtaining water, antibiotics, or painkillers would require such tremendous effort.”

—An anonymous member of startup accelerator Gaza Sky Geeks tells Rest of World about the impact the war has had on them.

One more thing

How Rust went from a side project to the world’s most-loved programming language

Many software projects emerge because—somewhere out there—a programmer had a personal problem to solve.

That’s more or less what happened to Graydon Hoare. In 2006, Hoare was a 29-year-old computer programmer working for Mozilla. After a software crash broke the elevator in his building, he set about designing a new computer language: one that he hoped would make it possible to write small, fast code without memory bugs.

That language developed into Rust, one of the hottest new languages on the planet. But while it isn’t unusual for someone to make a new computer language, it’s incredibly rare for one to take hold and become part of the programming pantheon. How did Rust do it? Read the full story.

—Clive Thompson

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Having a bit of a rubbish day so far? Here’s how to make it better.
+ A Hungarian man played Dance Dance Revolution for 144 hours non-stop, because he knows how to have a seriously good time.
+ A new book is celebrating cats, as it should (thanks Jess!)
+ How a poem from a medieval trickster sowed the seed for hundreds of years of bubonic plague misinformation 🐀

The Download: busting weather myths, and AI heart attack prediction

10 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why it’s so hard to bust the weather control conspiracy theory

It was October 2024, and Hurricane Helene had just devastated the US Southeast. Representative Marjorie Taylor Greene of Georgia found an abstract target on which to pin the blame: “Yes they can control the weather,” she posted on X. “It’s ridiculous for anyone to lie and say it can’t be done.”

She was repeating what’s by now a pretty familiar and popular conspiracy theory: that shadowy forces are out there, wielding technology to control the weather and wreak havoc on their enemies. This preposterous claim has grown louder and more common in recent years, especially after extreme weather strikes.

But here’s the thing: While Greene and other believers are not correct, this conspiracy theory—like so many others—holds a kernel of much more modest truth. Read the full story.

—Dave Levitan

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology. Check out the rest of the series here.

AI could predict who will have a heart attack 

For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups are applying AI algorithms to screen millions of CT scans for early signs of heart disease.

This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale, while raising thorny questions about implementation and even how we define disease. Read the full story.

—Vishal Khetpal

This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Spending on AI may be to blame for all those tech layoffs
AI isn’t necessarily replacing jobs, but spending on it is gobbling up budgets. (Fast Company $)
+ Junior roles are likely to be the first on the chopping block. (FT $)
+ Are the crazy sums that businesses are sinking into AI sustainable? (WP $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

2 Anti-vaccine activists gathered in Austin over the weekend
They celebrated RFK Jr’s rise and outlined their goals—including eliminating school vaccine mandates. (WP $)
+ We’re on the verge of stopping the next pandemic. But will we? (Vox)
+ How conspiracy theories infiltrated the doctor’s office. (MIT Technology Review)

3 People who’ve experienced AI-induced delusions are forming a movement
They’re pushing for legal action against chatbot makers. (Bloomberg $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

4 AI-generated clips of women being strangled are flooding social media
Many of them appear to have been created using OpenAI’s Sora 2. (404 Media)

5 Tech leaders are obsessed with bioengineering babies
They’re not allowed to, but they’re not letting a little thing like ethics get in the way. (WSJ $)
+ The race to make the perfect baby is creating an ethical mess. (MIT Technology Review)

6 Apple has removed two popular gay dating apps in China
The country ordered it to take down Blued and Finka from its app store. (Wired $)

7 The UK government is worried China could turn off its buses remotely
It fears hundreds of Chinese-made electric buses on British roads could be at risk. (FT $)

8 How AI is changing the world’s newsrooms 📰
It’s brilliant at analyzing large data sets—but shouldn’t be used to write stories. (NYT $)

9 How to contain an invasive species
Experts argue that too much red tape is getting in the way. (Undark)
+ The weeds are winning. (MIT Technology Review)

10 The world’s largest electric ship is charging up 🚢
Once it’s ready to go, it’ll serve as a ferry in 90-minute bursts. (IEEE Spectrum)

Quote of the day

“We would move heaven and Earth, pun intended, to try to get to the Moon sooner.” 

—Dave Limp, CEO of Blue Origin, says the company is raring to work with NASA to get humans back on the Moon, Ars Technica reports.

One more thing

Design thinking was supposed to fix the world. Where did it go wrong?

In the 1990s, a six-step methodology for innovation called design thinking started to grow in popularity. Key to its spread was its replicable aesthetic, represented by the Post-it note: a humble square that anyone can use in infinite ways.

But in recent years, for a number of reasons, the shine of design thinking has been wearing off. Critics have argued that its short-term focus on novel and naive ideas results in unrealistic and ungrounded recommendations.

Today, some groups are working to reform both design thinking’s principles and its methodologies. These new efforts seek a set of design tools capable of equitably serving diverse communities and solving diverse problems well into the future. It’s a much more daunting—and crucial—task than design thinking’s original remit. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These tree-dwelling toads give birth to live young—who knew?!
+ Now’s the time to practice your baking skills ahead of Thanksgiving.
+ Younguk Yi’s glitching paintings are a lot of fun.
+ Place your bets! This fun game follows three balls in a race to the bottom, but who will win?

The Download: a new home under the sea, and cloning pets

7 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The first new subsea habitat in 40 years is about to launch

Vanguard feels and smells like a new RV. It has long, gray banquettes that convert into bunks, a microwave cleverly hidden under a counter, a functional steel sink with a French press and crockery above. A weird little toilet hides behind a curtain.

But you can’t just fire up Vanguard’s engine and roll off the lot. Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades.

Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers. Read our story about some of their potential missions.

—Mark Harris

Cloning isn’t just for celebrity pets like Tom Brady’s dog

This week, we heard that Tom Brady had his dog cloned. The former quarterback revealed that his Junie is actually a clone of Lua, a pit bull mix that died in 2023.

Brady’s announcement follows those of celebrities like Paris Hilton and Barbra Streisand, who also famously cloned their pet dogs. But some believe there are better ways to make use of cloning technologies, such as diversifying the genetic pools of inbred species, or potentially bringing other animals back from the brink of extinction. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is facing a wave of new lawsuits 
The cases include wrongful death complaints and claims that ChatGPT caused mental breakdowns. (NYT $)
+ One family claims ChatGPT “goaded” their son into taking his own life. (CNN)
+ The looming crackdown on AI companionship. (MIT Technology Review)

2 Tesla shareholders approved Elon Musk’s $1 trillion pay package
More than 75% of voters backed it. (WSJ $)
+ Musk had hinted he’d leave Tesla if the deal wasn’t greenlit. (Axios)
+ Tesla has to hit its ambitious targets before he can get his hands on the cash. (Wired $)

3 The EU is poised to water down the AI Act
After pressure from Big Tech and the US government. (FT $)
+ While the legislation was passed last year, many provisions haven’t kicked in yet. (Reuters)

4 Meta is earning a colossal amount of money from scam ads
They accounted for 10% of its revenue last year. (Reuters)
+ Meta claims it “aggressively” addresses scam ads on its platform. (CNBC)

5 The Chan Zuckerberg Initiative is pivoting to AI
It’s shifting its philanthropic focus from social justice programs to curing disease. (WP $)
+ To achieve its goals, the charity will need extra computing power. (NYT $)

6 Unesco has adopted global standards on neurotechnology
Experts were increasingly concerned that a lack of guardrails could give rise to unethical practices. (The Guardian)
+ Meet the other companies developing brain-computer interfaces. (MIT Technology Review)

7 Benchmarks hugely oversell AI performance
A new study questions their reliability and the validity of their results. (NBC News)
+ How to build a better AI benchmark. (MIT Technology Review)

8 Kim Kardashian blames ChatGPT for failing her law exams
It’s almost like she shouldn’t have been consulting it for legal expertise in the first place. (Hollywood Reporter)
+ AI and social media are worsening brain rot. (NYT $)
+ How AI is introducing errors into courtrooms. (MIT Technology Review)

9 Hyundai is using robot dogs to inspect its EV production line
And they may soon be joined by a bipedal master. (IEEE Spectrum)

10 Grand Theft Auto VI has been delayed yet again
The highly anticipated video game has big, big shoes to fill. (Bloomberg $)
+ It’ll land a full 13 years after its previous incarnation—or will it? (BBC)

Quote of the day

“This is what oligarchy looks like.”

—Senator Bernie Sanders reacts to Tesla shareholders’ decision to award Elon Musk a $1 trillion pay package in a post on X.

One more thing

Finding forgotten Indigenous landscapes with electromagnetic technology

The fertile river valleys of the American Midwest hide tens of thousands of Indigenous earthworks, according to experts: geometric structures consisting of walls, mounds, ditches, and berms, some dating back nearly 3,000 years.

Archaeologists now believe that the earthworks functioned as religious gathering places, tombs for culturally important clans, and annual calendars, perhaps all at the same time. They can take the form of giant circles and squares, cloverleafs and octagons, complex S-curves and simple mounds.

Until recently, it seemed as if much of the continent’s pre-European archaeological heritage had been carelessly wiped out, uprooted, and lost for good. But traces remain: electromagnetic remnants in the soil that can be detected using specialty surveying equipment. And archaeologists and tribal historians are working together to uncover them. Read the full story.

—Geoff Manaugh

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re a wildlife fan, take a look at this compilation of the best places to catch a glimpse of unusual animals.
+ El Salvador’s annual fireball festival is a completely unhinged celebration of all things volcanic.
+ The most influential Bostonians of 2025 have been announced.
+ Get me in a potato bed, stat.

The Download: how doctors fight conspiracy theories, and your AI footprint

6 November 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How conspiracy theories infiltrated the doctor’s office

As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger.

We spoke to a number of health-care professionals who told us how this modern impulse to “do your own research” is changing their profession. Read the full story.

—Rhiannon Williams

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.

Stop worrying about your AI footprint. Look at the big picture instead.

—Casey Crownhart

As a climate technology reporter, I’m often asked by people whether they should be using AI, given how awful it is for the environment. Generally, I tell them not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want.

That response might surprise some. I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. But I feel strongly about not putting the onus on individuals. Here’s why.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

A new ion-based quantum computer makes error correction simpler

A company called Quantinuum has just unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability.

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling.

But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s. Read the full story.

—Sophia Chen

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A new California law could change how all Americans browse online 
It gives web users the chance to opt out of having their personal information sold or shared. (The Markup)

2 The FDA has fast-tracked a pill to treat pancreatic cancer
The experimental drug appears promising, but experts worry corners may be cut. (WP $)
+ Demand for AstraZeneca’s cancer and diabetes drugs is pushing profits up. (Bloomberg $)
+ A new cancer treatment kills cells using localized heat. (Wired $)

3 AI pioneers claim it is already superior to humans in many tasks
But not all tasks are created equal. (FT $)
+ Are we all wandering into an AGI trap? (Vox)
+ How AGI became the most consequential conspiracy theory of our time. (MIT Technology Review)

4 IBM is planning on cutting thousands of jobs
It’s shifting its focus to software and AI consulting, apparently. (Bloomberg $)
+ It’s keen to grow the number of its customers seeking AI advice. (NYT $)

5 Big Tech’s data centers aren’t the job-generators we were promised
The jobs they do create are largely in security and cleaning. (Rest of World)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

6 Microsoft let AI shopping agents loose in a fake marketplace 
They were easily manipulated into buying goods, it found. (TechCrunch)
+ When AIs bargain, a less advanced agent could cost you. (MIT Technology Review)

7 Sony has compiled a dataset to test the fairness of computer vision models
And it’s confident it’s been compiled in a fair and ethical way. (The Register)
+ These new tools could make AI vision systems less biased. (MIT Technology Review)

8 The social network is no more
We’re living in an age of anti-social media. (The Atlantic $)
+ Scam ads are rife across platforms, but these former Meta workers have a plan. (Wired $)
+ The ultimate online flex? Having no followers. (New Yorker $)

9 Vibe coding is Collins dictionary’s word of 2025 📖
Beating stiff competition from “clanker.” (The Guardian)
+ What is vibe coding, exactly? (MIT Technology Review)

10 These people found romance with their chatbot companions
The AI may not be real, but the humans’ feelings certainly are. (NYT $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

Quote of the day

“The opportunistic side of me is realizing that your average accountant won’t be doing this.”

—Sal Abdulla, founder of accounting-software startup NixSheets, tells the Wall Street Journal he’s using AI tools to gain an edge on his competitors.

One more thing

Ethically sourced “spare” human bodies could revolutionize medicine

Many challenges in medicine stem, in large part, from a common root cause: a severe shortage of ethically sourced human bodies.

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain.

Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman. Read the full story.

—Carsten T. Charlesworth, Henry T. Greely & Hiromitsu Nakauchi

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Make sure to look up so you don’t miss November’s supermoon.
+ If you keep finding yourself mindlessly scrolling (and who doesn’t?), maybe this whopping six-pound phone case could solve your addiction.
+ Life lessons from a 101-year-old who has no plans to retire.
+ Are you a fan of movement snacking?
