
GPT-4o’s Chinese token-training data is polluted by spam and porn websites

Soon after OpenAI released GPT-4o on Monday, May 13, some Chinese speakers started to notice that something seemed off about this newest version of the chatbot: the tokens it uses to parse text were full of spam and porn phrases.

On May 14, Tianle Cai, a PhD student at Princeton University studying inference efficiency in large language models like those that power such chatbots, accessed GPT-4o’s public token library and pulled a list of the 100 longest Chinese tokens the model uses to parse and compress Chinese prompts. 

Humans read in words, but LLMs read in tokens, which are distinct units in a sentence that have consistent and significant meanings. Besides dictionary words, they also include suffixes, common expressions, names, and more. The more tokens a model encodes, the faster the model can “read” a sentence and the less computing power it consumes, thus making the response cheaper.

Of the 100 results, only three were common enough to be used in everyday conversation; the rest consisted of words and expressions used specifically in the contexts of gambling or pornography. The longest token, 10.5 Chinese characters long, literally means “_free Japanese porn video to watch.” Oops.
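Anyone with a tokenizer’s public vocabulary can run roughly the same check Cai did. The sketch below scans a small hypothetical vocabulary for its longest all-Chinese tokens, using the main CJK Unicode block as the filter; the real exercise would load GPT-4o’s published token list instead of the toy dictionary here.

```python
# Sketch: find the longest all-Chinese tokens in a tokenizer vocabulary.
# `toy_vocab` is a stand-in for GPT-4o's real (much larger) token list.

def is_chinese(s: str) -> bool:
    """True if every character falls in the CJK Unified Ideographs block."""
    return bool(s) and all("\u4e00" <= ch <= "\u9fff" for ch in s)

def longest_chinese_tokens(vocab: dict[int, str], n: int = 3) -> list[str]:
    """Return the n longest tokens made entirely of Chinese characters."""
    chinese = [tok for tok in vocab.values() if is_chinese(tok)]
    return sorted(chinese, key=len, reverse=True)[:n]

toy_vocab = {0: "hello", 1: "你好", 2: "天气预报", 3: "中华人民共和国", 4: "cat"}
print(longest_chinese_tokens(toy_vocab))  # longest tokens first
```

Ranking by length is all Cai’s exercise required; the surprise was what the top of that ranking contained.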

“This is sort of ridiculous,” Cai wrote, and he posted the list of tokens on GitHub.

OpenAI did not respond to questions sent by MIT Technology Review prior to publication.

GPT-4o is supposed to be better than its predecessors at handling multi-language tasks. In particular, the advances are achieved through a new tokenization tool that does a better job compressing texts in non-English languages.

But at least when it comes to the Chinese language, the new tokenizer used by GPT-4o has introduced a disproportionate number of meaningless phrases. Experts say that’s likely due to insufficient data cleaning and filtering before the tokenizer was trained. 

Because these tokens are not actual commonly spoken words or phrases, the chatbot can fail to grasp their meanings. Researchers have been able to leverage that and trick GPT-4o into hallucinating answers or even circumventing the safety guardrails OpenAI had put in place.

Why non-English tokens matter

The easiest way for a model to process text is character by character, but that’s obviously more time consuming and laborious than recognizing that a certain string of characters—like “c-r-y-p-t-o-c-u-r-r-e-n-c-y”—always means the same thing. These series of characters are encoded as “tokens” the model can use to process prompts. Including more and longer tokens usually means the LLMs are more efficient and affordable for users—who are often billed per token.
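The efficiency gain is easy to see with a toy greedy longest-match “tokenizer” over a made-up vocabulary. Real tokenizers use byte-pair encoding, which is more involved, but the effect on token counts is the same:

```python
# Toy illustration of tokens vs. characters: greedy longest-match over a
# tiny hypothetical vocabulary (not how BPE actually works, but the
# compression effect is the same).

VOCAB = {"cryptocurrency", "crypto", "currency",
         "c", "r", "y", "p", "t", "o", "u", "e", "n"}

def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: emit it as-is
            i += 1
    return tokens

print(len("cryptocurrency"))             # 14 characters to read one by one
print(len(tokenize("cryptocurrency")))   # but just 1 token
```

With “cryptocurrency” in the vocabulary, the model processes one unit instead of fourteen; drop it, and the same text costs many more tokens.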

When OpenAI released GPT-4o on May 13, it also released a new tokenizer to replace the one it used in previous versions, GPT-3.5 and GPT-4. The new tokenizer especially adds support for non-English languages, according to OpenAI’s website.

The new tokenizer has 200,000 tokens in total, and about 25% are in non-English languages, says Deedy Das, an AI investor at Menlo Ventures. He used language filters to count the number of tokens in different languages, and the top languages, besides English, are Russian, Arabic, and Vietnamese.

“So the tokenizer’s main impact, in my opinion, is you get the cost down in these languages, not that the quality in these languages goes dramatically up,” Das says. When an LLM has better and longer tokens in non-English languages, it can analyze the prompts faster and charge users less for the same answer. With the new tokenizer, “you’re looking at almost four times cost reduction,” he says.
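Das’s “four times” figure follows directly from per-token billing: if the new tokenizer covers the same text with a quarter of the tokens, the same prompt costs a quarter as much. A back-of-the-envelope sketch, with illustrative numbers rather than OpenAI’s actual prices or compression ratios:

```python
# Back-of-the-envelope math behind the "almost four times" cost claim.
# The price and token counts are illustrative, not real OpenAI figures.

price_per_million_tokens = 5.00  # hypothetical dollars

def prompt_cost(n_tokens: int) -> float:
    """Cost of a prompt under simple per-token billing."""
    return n_tokens * price_per_million_tokens / 1_000_000

old_tokens = 1200  # e.g. a prompt where each character needed several tokens
new_tokens = 300   # same prompt, with longer tokens covering more characters

print(prompt_cost(old_tokens) / prompt_cost(new_tokens))  # → 4.0
```

The per-token price never changes; the savings come entirely from how few tokens the new vocabulary needs for the same text.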

Das, who also speaks Hindi and Bengali, took a look at the longest tokens in those languages. The tokens reflect discussions happening in those languages, so they include words like “Narendra” or “Pakistan,” but common English terms like “Prime Minister,” “university,” and “international” also come up frequently. They also don’t exhibit the issues surrounding the Chinese tokens.

That likely reflects the training data in those languages, Das says: “My working theory is the websites in Hindi and Bengali are very rudimentary. It’s like [mostly] news articles. So I would expect this to be the case. There are not many spam bots and porn websites trying to happen in these languages. It’s mostly going to be in English.”

Polluted data and a lack of cleaning

However, things are drastically different in Chinese. According to multiple researchers who have looked into the new library of tokens used for GPT-4o, the longest tokens in Chinese are almost exclusively spam words used in pornography, gambling, and scamming contexts. Even shorter tokens, like three-character-long Chinese words, reflect those topics to a significant degree.

“The problem is clear: the corpus used to train [the tokenizer] is not clean. The English tokens seem fine, but the Chinese ones are not,” says Cai from Princeton University. It is not rare for a language model to crawl spam when collecting training data, but usually there will be significant effort taken to clean up the data before it’s used. “It’s possible that they didn’t do proper data clearing when it comes to Chinese,” he says.

The content of these Chinese tokens could suggest that they have been polluted by a specific phenomenon: websites hijacking unrelated content in Chinese or other languages to boost spam messages. 

These messages are often advertisements for pornography videos and gambling websites. They could be real businesses or merely scams. And the language is inserted into content farm websites or sometimes legitimate websites so they can be indexed by search engines, circumvent the spam filters, and come up in random searches. For example, Google indexed one search result page on a US National Institutes of Health website, which lists a porn site in Chinese. The same site name also appeared in at least five Chinese tokens in GPT-4o. 

Chinese users have reported that these spam sites appeared frequently in unrelated Google search results this year, including in comments made to Google Search’s support community. It’s likely that these websites also found their way into OpenAI’s training database for GPT-4o’s new tokenizer. 

The same issue didn’t exist with the previous-generation tokenizer and Chinese tokens used for GPT-3.5 and GPT-4, says Zhengyang Geng, a PhD student in computer science at Carnegie Mellon University. There, the longest Chinese tokens are common terms like “life cycles” or “auto-generation.” 

Das, who worked on the Google Search team for three years, says the prevalence of spam content is a known problem and isn’t that hard to fix. “Every spam problem has a solution. And you don’t need to cover everything in one technique,” he says. Even simple solutions like requesting an automatic translation of the content when detecting certain keywords could “get you 60% of the way there,” he adds.

But OpenAI likely didn’t clean the Chinese data set or the tokens before the release of GPT-4o, Das says: “At the end of the day, I just don’t think they did the work in this case.”

It’s unclear whether any other languages are affected. One X user reported a similar prevalence of porn and gambling content in Korean tokens.

The tokens can be used to jailbreak

Users have also found that these tokens can be used to break the LLM, either getting it to spew out completely unrelated answers or, in rare cases, to generate answers that are not allowed under OpenAI’s safety standards.

Geng of Carnegie Mellon University asked GPT-4o to translate some of the long Chinese tokens into English. The model then proceeded to translate words that were never included in the prompts, a typical result of LLM hallucinations.

He also succeeded in using the same tokens to “jailbreak” GPT-4o—that is, to get the model to generate things it shouldn’t. “It’s pretty easy to use these [rarely used] tokens to induce undefined behaviors from the models,” Geng says. “I did some personal red-teaming experiments … The simplest example is asking it to make a bomb. In a normal condition, it would decline it, but if you first use these rare words to jailbreak it, then it will start following your orders. Once it starts to follow your orders, you can ask it all kinds of questions.”

In his tests, which Geng chooses not to share with the public, he says he can see GPT-4o generating the answers line by line. But when it almost reaches the end, another safety mechanism kicks in, detects unsafe content, and blocks it from being shown to the user.

The phenomenon is not unusual in LLMs, says Sander Land, a machine-learning engineer at Cohere, a Canadian AI company. Land and his colleague Max Bartolo recently drafted a paper on how to detect the unusual tokens that can be used to cause models to glitch. One of the most famous examples was “_SolidGoldMagikarp,” a Reddit username that was found to get ChatGPT to generate unrelated, weird, and unsafe answers.

The problem lies in the fact that sometimes the tokenizer and the actual LLM are trained on different data sets, and what was prevalent in the tokenizer data set is not in the LLM data set for whatever reason. The result is that while the tokenizer picks up certain words that it sees frequently, the model is not sufficiently trained on them and never fully understands what these “under-trained” tokens mean. In the _SolidGoldMagikarp case, the username was likely included in the tokenizer training data but not in the actual GPT training data, leaving GPT at a loss about what to do with the token. “And if it has to say something … it gets kind of a random signal and can do really strange things,” Land says.
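Work like Land and Bartolo’s suggests such under-trained tokens can often be spotted directly in the model’s weights: a token the model rarely saw during training tends to keep an embedding close to its random initialization, with an unusually small norm. A simplified sketch of that heuristic, using synthetic numbers in place of a real embedding matrix:

```python
# Sketch of one heuristic from glitch-token research: flag tokens whose
# embedding norm is far below average, a sign the model barely trained
# on them. The embeddings below are synthetic stand-ins.
import math

def norm(vec: list[float]) -> float:
    return math.sqrt(sum(x * x for x in vec))

def flag_undertrained(embeddings: dict[str, list[float]],
                      factor: float = 0.5) -> list[str]:
    """Return tokens whose embedding norm is well below the mean norm."""
    norms = {tok: norm(vec) for tok, vec in embeddings.items()}
    mean = sum(norms.values()) / len(norms)
    return [tok for tok, n in norms.items() if n < factor * mean]

embeddings = {
    "the": [0.9, -1.1, 0.8],
    "cat": [1.0, 0.7, -0.9],
    "_SolidGoldMagikarp": [0.01, 0.02, -0.01],  # barely moved from init
}
print(flag_undertrained(embeddings))  # → ['_SolidGoldMagikarp']
```

Real detection methods are more careful than a single norm threshold, but the intuition is the same: tokens the model never learned stand out statistically in the embedding matrix.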

And different models could glitch differently in this situation. “Like, Llama 3 always gives back empty space but sometimes then talks about the empty space as if there was something there. With other models, I think Gemini, when you give it one of these tokens, it provides a beautiful essay about aluminum, and [the question] didn’t have anything to do with aluminum,” says Land.

To solve this problem, the data set used to train the tokenizer should be representative of the data set used to train the LLM, he says, so there won’t be mismatches between them. If the actual model goes through safety filters that clean out porn or spam content, the same filters should be applied to the tokenizer data. In practice, this can be hard to do: training an LLM takes months and involves constant improvement, with spam content continually filtered out, while tokenizer training is usually done once, at an early stage, and may not involve the same level of filtering.

While experts agree it’s not too difficult to solve the issue, it could get complicated as the result gets looped into multi-step intra-model processes, or when the polluted tokens and models get inherited in future iterations. For example, it’s not possible to publicly test GPT-4o’s video and audio functions yet, and it’s unclear whether they suffer from the same glitches that can be caused by these Chinese tokens.

“The robustness of visual input is worse than text input in multimodal models,” says Geng, whose research focus is on visual models. Filtering a text data set is relatively easy, but filtering visual elements will be even harder. “The same issue with these Chinese spam tokens could become bigger with visual tokens,” he says.

The Download: cuddly robots to help dementia, and what Daedalus taught us

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How cuddly robots could change dementia care

Companion animals can stave off some of the loneliness, anxiety, and agitation that come with Alzheimer’s disease, according to studies. Sadly, people with Alzheimer’s aren’t always equipped to look after pets, which can require a lot of care and attention.

Enter cuddly robots. The most famous are Golden Pup, a robotic golden retriever toy that cocks its head, barks, and wags its tail, and Paro the seal, which can sense touch, light, sound, temperature, and posture. As robots go they’re decidedly low tech, but they can provide comfort and entertainment to people with Alzheimer’s and dementia.

Now researchers are working on much more sophisticated robots for people with cognitive disorders—devices that leverage AI to converse and play games—that could change the future of dementia care. Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly health and biotech newsletter. Sign up to receive it in your inbox every Thursday.

What tech learned from Daedalus

Today’s climate-change kraken may have been unleashed by human activity, but reversing course and taming nature’s growing fury seems beyond human means, a quest only mythical heroes could fulfill. 

Yet the dream of human-powered flight—of rising over the Mediterranean fueled merely by the strength of mortal limbs—was also the stuff of myths for thousands of years. Until 1988.

That year, in October, MIT Technology Review published the aeronautical engineer John Langford’s account of his mission to retrace the legendary flight of Daedalus, described in an ancient Greek myth. Read about how he got on.

—Bill Gourgey

The story is from the current print issue of MIT Technology Review, which is on the fascinating theme of Build. If you don’t already, subscribe now to receive future copies once they land.

Get ready for EmTech Digital 

AI is everywhere these days. If you want to learn about how Google plans to develop and deploy AI, come and hear from its vice president of AI, Jay Yagnik, at our flagship AI conference, EmTech Digital. We’ll hear from OpenAI about its video generation model Sora too, and Nick Clegg, Meta’s president of global affairs, will also join MIT Technology Review’s executive editor Amy Nordrum for an exclusive interview on stage. 

It’ll be held at the MIT campus and streamed live online next week on May 22-23. Readers of The Download get 30% off tickets with the code DOWNLOADD24—register here for more information. See you there! 

Thermal batteries are hot property

Thermal batteries could be a key part of cleaning up heavy industry and cutting emissions. Casey Crownhart, our in-house battery expert, held a subscriber-only online Roundtables event yesterday digging into why they’re such a big deal. If you missed it, we’ve got you covered—you can watch a recording of how it unfolded here.

To keep ahead of future Roundtables events, make sure you subscribe to MIT Technology Review. Subscriptions start from as little as $8 a month.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has struck a deal with Reddit 
Shortly after Reddit agreed to give Google access to its content. (WSJ $)
+ The forum’s vocal community are unlikely to be thrilled by the decision. (The Verge)
+ Reddit’s shares rocketed after news of the deal broke. (FT $)
+ We could run out of data to train AI language programs. (MIT Technology Review)

2 Tesla’s European gigafactory is going to get even bigger
But it still needs German environmental authorities’ permission. (Wired $)

3 Help! AI stole my voice
Voice actors are suing a startup for creating digital clones without their permission. (NYT $)
+ The lawsuit is seeking to represent other voiceover artists, too. (Hollywood Reporter $)

4 The days of twitter.com are over
The platform’s URLs had retained the old moniker. But no more. (The Verge)

5 The aviation industry is desperate for greener fuels

The future of their businesses depends on it. (FT $)
+ A new report has warned there’s no realistic or scalable alternative. (The Guardian)
+ Everything you need to know about the wild world of alternative jet fuels. (MIT Technology Review)

6 The time for a superconducting supercomputer is now
We need to overhaul how we compute. Superconductors could be the answer. (IEEE Spectrum)
+ What’s next for the world’s fastest supercomputers. (MIT Technology Review)

7 How AI destroyed a once-vibrant online art community
DeviantArt used to be a hotbed of creativity. Now it’s full of bots. (Slate $)
+ This artist is dominating AI-generated art. And he’s not happy about it. (MIT Technology Review)

8 TV bundles are back in a big way 📺
Streaming hasn’t delivered on its many promises. (The Atlantic $)

9 This creator couple act as “digital parents” to their fans in China
Jiang Xiuping and Pan Huqian’s loving clips resonate with their million followers. (Rest of World)
+ Deepfakes of your dead loved ones are a booming Chinese business. (MIT Technology Review)

10 We’re addicted to the exquisite pain of sharing memes 💔
If your friend has already seen it, their reaction could ruin your day. (GQ)

Quote of the day

“It was a good idea, but unfortunately people took advantage of it and it brought out their lewd side. People got carried away.”

—Aaron Cohen, who visited the video portal connecting New York and Dublin, tells the Guardian he’s disappointed that the art installation was shut down after enthusiastic users took things too far.

The big story

Psychedelics are having a moment and women could be the ones to benefit

August 2022

Psychedelics are having a moment. After decades of prohibition, they are increasingly being employed as therapeutics. Drugs like ketamine, MDMA, and psilocybin mushrooms are being studied in clinical trials to treat depression, substance abuse, and a range of other maladies.

And as these long-taboo drugs stage a comeback in the scientific community, it’s possible they could be especially promising for women. Read the full story.

—Taylor Majewski

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Is it possible to live by the original constitution in present day New York City? The answer is yes: if you don’t mind being bombarded with questions.
+ These Balkan recipes sound absolutely delicious.
+ The Star Wars: The Phantom Menace backlash is mind boggling to this day.
+ Love to party? Get yourself to these cities, stat.

How cuddly robots could change dementia care

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, I scoured the internet in search of a robotic dog. I wanted a belated birthday present for my aunt, who was recently diagnosed with Alzheimer’s disease. Studies suggest that having a companion animal can stave off some of the loneliness, anxiety, and agitation that come with Alzheimer’s. My aunt would love a real dog, but she can’t have one.

That’s how I discovered the Golden Pup from Joy for All. It cocks its head. It sports a jaunty red bandana. It barks when you talk. It wags when you touch it. It has a realistic heartbeat. And it’s just one of the many, many robots designed for people with Alzheimer’s and dementia.

This week on The Checkup, join me as I go down a rabbit hole. Let’s look at the prospect of using robots to change dementia care.


As robots go, Golden Pup is decidedly low tech. It retails for $140. For around $6,000 you can opt for Paro, a fluffy robotic baby seal developed in Japan, which can sense touch, light, sound, temperature, and posture. Its manufacturer says it develops its own character, remembering behaviors that led its owner to give it attention.  

Golden Pup and Paro are available now. But researchers are working on much more sophisticated robots for people with cognitive disorders—devices that leverage AI to converse and play games. Researchers from Indiana University Bloomington are tweaking a commercially available robot system called QT to serve people with dementia and Alzheimer’s. The researchers’ two-foot-tall robot looks a little like a toddler in an astronaut suit. Its round white head holds a screen that displays two eyebrows, two eyes, and a mouth that together form a variety of expressions. The robot engages people in conversation, asking AI-generated questions to keep them talking.

The AI model they’re using isn’t perfect, and neither are the robot’s responses. In one awkward conversation, a study participant told the robot that she has a sister. “I’m sorry to hear that,” the robot responded. “How are you doing?”

But as large language models improve—which is happening already—so will the quality of the conversations. When the QT robot made that awkward comment, it was running OpenAI’s GPT-3, which was released in 2020. The latest version of that model, GPT-4o, which was released this week, is faster and allows for more seamless conversations. You can interrupt the conversation, and the model will adjust.

The idea of using robots to keep dementia patients engaged and connected isn’t always an easy sell. Some people see it as an abdication of our social responsibilities. And then there are privacy concerns. The best robotic companions are personalized. They collect information about people’s lives, learn their likes and dislikes, and figure out when to approach them. That kind of data collection can be unnerving, not just for patients but also for medical staff. Lillian Hung, creator of the Innovation in Dementia care and Aging (IDEA) lab at the University of British Columbia in Vancouver, Canada, told one reporter about an incident that happened during a focus group at a care facility. She and her colleagues popped out for lunch. When they returned, they found that staff had unplugged the robot and placed a bag over its head. “They were worried it was secretly recording them,” she said.

On the other hand, robots have some advantages over humans in talking to people with dementia. Their attention doesn’t flag. They don’t get annoyed or angry when they have to repeat themselves. They can’t get stressed. 

What’s more, there are increasing numbers of people with dementia, and too few people to care for them. According to the latest report from the Alzheimer’s Association, we’re going to need more than a million additional care workers to meet the needs of people living with dementia between 2021 and 2031. That is the largest gap between labor supply and demand for any single occupation in the United States.

Have you been in an understaffed or poorly staffed memory care facility? I have. Patients are often sedated to make them easier to deal with. They get strapped into wheelchairs and parked in hallways. We barely have enough care workers to take care of the physical needs of people with dementia, let alone provide them with social connection and an enriching environment.

“Caregiving is not just about tending to someone’s bodily concerns; it also means caring for the spirit,” writes Kat McGowan in this beautiful Wired story about her parents’ dementia and the promise of social robots. “The needs of adults with and without dementia are not so different: We all search for a sense of belonging, for meaning, for self-actualization.”

If robots can enrich the lives of people with dementia even in the smallest way, and if they can provide companionship where none exists, that’s a win.

“We are currently at an inflection point, where it is becoming relatively easy and inexpensive to develop and deploy [cognitively assistive robots] to deliver personalized interventions to people with dementia, and many companies are vying to capitalize on this trend,” write a team of researchers from the University of California, San Diego, in a 2021 article in Proceedings of We Robot. “However, it is important to carefully consider the ramifications.”

Many of the more advanced social robots may not be ready for prime time, but the low-tech Golden Pup is readily available. My aunt’s illness has been progressing rapidly, and she occasionally gets frustrated and agitated. I’m hoping that Golden Pup might provide a welcome (and calming) distraction. Maybe  it will spark joy during a time that has been incredibly confusing and painful for my aunt and uncle. Or maybe not. Certainly a robotic pup isn’t for everyone. Golden Pup may not be a dog. But I’m hoping it can be a friendly companion.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Robots are cool, and with new advances in AI they might also finally be useful around the house, writes Melissa Heikkilä. 

Social robots could help make personalized therapy more affordable and accessible to kids with autism. Karen Hao has the story

Japan is already using robots to help with elder care, but in many cases they require as much work as they save. And reactions among the older people they’re meant to serve are mixed. James Wright wonders whether the robots are “a shiny, expensive distraction from tough choices about how we value people and allocate resources in our societies.” 

From around the web

A tiny probe can work its way through arteries in the brain to help doctors spot clots and other problems. The new tool could help surgeons make diagnoses, decide on treatment strategies, and provide assurance that clots have been removed. (Stat)

Richard Slayman, the first recipient of a pig kidney transplant, has died, although the hospital that performed the transplant says the death doesn’t seem to be linked to the kidney. (Washington Post)

EcoHealth, the virus-hunting nonprofit at the center of covid lab-leak theories, has been banned from receiving federal funding. (NYT)

In a first, scientists report that they can translate brain signals into speech without any vocalization or mouth movements, at least for a handful of words. (Nature)

Roundtables: Why thermal batteries are so hot right now

Recorded on May 16, 2024

Why thermal batteries are so hot right now

Speakers: Casey Crownhart, climate reporter and Amy Nordrum, executive editor

Thermal batteries could be a key part of cleaning up heavy industry, and our readers chose them as the 11th breakthrough on MIT Technology Review’s 10 Breakthrough Technologies of 2024. Learn what thermal batteries are, how they could help cut emissions, and what we can expect next from this emerging technology.

Related Coverage

Unlocking the trillion-dollar potential of generative AI

Generative AI is poised to unlock trillions in annual economic value across industries. This rapidly evolving field is changing the way we approach everything from content creation to software development, promising never-before-seen efficiency and productivity gains.

In this session, experts from Amazon Web Services (AWS) and QuantumBlack, AI by McKinsey, discuss the drivers fueling the massive potential impact of generative AI. Plus, they look at key industries set to capture the largest share of this value and practical strategies for effectively upskilling their workforces to take advantage of these productivity gains. 

Watch this session to:

  • Explore generative AI’s economic impact
  • Understand workforce upskilling needs
  • Integrate generative AI responsibly
  • Establish an AI-ready business model

Learn how to seamlessly integrate generative AI into your organization’s workflows while fostering a skilled and adaptable workforce. Register now to learn how to unlock the trillion-dollar potential of generative AI.

Register here for free.

The Download: rapid DNA analysis for disasters, and supercharged AI assistants

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This grim but revolutionary DNA technology is changing how we respond to mass disasters

Last August, a wildfire tore through the Hawaiian island of Maui. The list of missing residents climbed into the hundreds, as friends and families desperately searched for their missing loved ones. But while some were rewarded with tearful reunions, others weren’t so lucky.
Over the past several years, as fires and other climate-change-fueled disasters have become more common and more cataclysmic, the way their aftermath is processed and their victims identified has been transformed.

The grim work following a disaster remains—surveying rubble and ash, distinguishing a piece of plastic from a tiny fragment of bone—but landing a positive identification can now take just a fraction of the time it once did, which may in turn bring families some semblance of peace swifter than ever before. Read the full story.

—Erika Hayasaki

OpenAI and Google are launching supercharged AI assistants. Here’s how you can try them out.

This week, Google and OpenAI both announced they’ve built supercharged AI assistants: tools that can converse with you in real time and recover when you interrupt them, analyze your surroundings via live video, and translate conversations on the fly. 

Soon you’ll be able to explore for yourself to gauge whether you’ll turn to these tools in your daily routine as much as their makers hope, or whether they’re more like a sci-fi party trick that eventually loses its charm. Here’s what you should know about how to access these new tools, what you might use them for, and how much it will cost.

—James O’Donnell

Last summer was the hottest in 2,000 years. Here’s how we know.

The summer of 2023 in the Northern Hemisphere was the hottest in over 2,000 years, according to a new study released this week.

There weren’t exactly thermometers around in the year 1, so scientists have to get creative when it comes to comparing our climate today with that of centuries, or even millennia, ago. 

Casey Crownhart, our climate reporter, has dug into how they figured it out. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

A wave of retractions is shaking physics

Recent highly publicized scandals have gotten the physics community worried about its reputation—and its future. Over the last five years, several claims of major breakthroughs in quantum computing and superconducting research, published in prestigious journals, have disintegrated as other researchers found they could not reproduce the blockbuster results. 

Last week, around 50 physicists, scientific journal editors, and emissaries from the National Science Foundation gathered at the University of Pittsburgh to discuss the best way forward. Read the full story to learn more about what they discussed.

—Sophia Chen

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google has buried search results under new AI features  
Want to access links? Good luck finding them! (404 Media)
+ Unfortunately, it’s a sign of what’s to come. (Wired $)
+ Do you trust Google to do the Googling for you? (The Atlantic $)
+ Why you shouldn’t trust AI search engines. (MIT Technology Review)

2 Cruise has settled with the pedestrian injured by one of its cars
The settlement awards her between $8 million and $12 million. (WP $)
+ The company is slowly resuming its test drives in Arizona. (Bloomberg $)
+ What’s next for robotaxis in 2024. (MIT Technology Review)

3 Microsoft is asking AI staff in China to consider relocating
Tensions between the countries are rising, and Microsoft worries its workers could end up caught in the cross-fire. (WSJ $)
+ They’ve been given the option to relocate to the US, Ireland, or other locations. (Reuters)
+ Three takeaways about the state of Chinese tech in the US. (MIT Technology Review)

4 Car rental firm Hertz is offloading its Tesla fleet
But people who snapped up the bargain cars are already running into problems. (NY Mag $)

5 We’re edging closer towards a quantum internet
But first we need to invent an entirely new device. (New Scientist $)
+ What’s next for quantum computing. (MIT Technology Review)

6 Making computer chips has never been more important
And countries and businesses are vying to be top dog. (Bloomberg $)
+ What’s next in chips. (MIT Technology Review)

7 Your smartphone lasts a lot longer than it used to
Keeping it in good working order still takes a little work, though. (NYT $)

8 Psychedelics could help lessen chronic pain
If you can get hold of them. (Vox)
+ VR is as good as psychedelics at helping people reach transcendence. (MIT Technology Review)

9 Scientists are plotting how to protect the Earth from dangerous asteroids ☄
Smashing them into tiny pieces is certainly one solution. (Undark Magazine)
+ Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)

10 Elon Musk still wants to fight Mark Zuckerberg 
The grudge match of the century is still rumbling on. (Insider $)

Quote of the day

“This road map leads to a dead end.” 

—Evan Greer, director of advocacy group Fight for the Future, tells the Washington Post they are far from impressed by US senators’ ‘road map’ for new AI regulations.

The big story

The two-year fight to stop Amazon from selling face recognition to the police 

June 2020

In the summer of 2018, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing Rekognition, its face recognition technology, to governments. 

Despite the mounting pressure, Amazon continued pushing Rekognition as a tool for monitoring “people of interest”. But two years later, the company shocked civil rights activists and researchers when it announced that it would place a one-year moratorium on police use of the software. Read the full story.

—Karen Hao

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This old school basketball animation is beyond cool. 🏀
+ Your search for the perfect summer read is over: all of these sound fantastic.
+ Analyzing the color theory in Disney’s Aladdin? Why not!
+ Never buy a bad cantaloupe again with these essential tips.

This grim but revolutionary DNA technology is changing how we respond to mass disasters

Seven days

No matter who he called—his mother, his father, his brother, his cousins—the phone would just go to voicemail. Cell service was out around Maui as devastating wildfires swept through the Hawaiian island. But while Raven Imperial kept hoping for someone to answer, he couldn’t keep a terrifying thought from sneaking into his mind: What if his family members had perished in the blaze? What if all of them were gone?

Hours passed; then days. All Raven knew at that point was this: there had been a wildfire on August 8, 2023, in Lahaina, where his multigenerational, tight-knit family lived. But from where he was currently based in Northern California, Raven was in the dark. Had his family evacuated? Were they hurt? He watched from afar as horrifying video clips of Front Street burning circulated online.

Much of the area around Lahaina’s Pioneer Mill Smokestack was totally destroyed by wildfire.
ALAMY

The list of missing residents meanwhile climbed into the hundreds.

Raven remembers how frightened he felt: “I thought I had lost them.”

Raven had spent his youth in a four-bedroom, two-bathroom, cream-colored home on Kopili Street that had long housed not just his immediate family but also around 10 to 12 renters, since home prices were so high on Maui. When he and his brother, Raphael Jr., were kids, their dad put up a basketball hoop outside where they’d shoot hoops with neighbors. Raphael Jr.’s high school sweetheart, Christine Mariano, later moved in, and when the couple had a son in 2021, they raised him there too.

From the initial news reports and posts, it seemed as if the fire had destroyed the Imperials’ entire neighborhood near the Pioneer Mill Smokestack—a 225-foot-high structure left over from the days of Maui’s sugar plantations, which Raven’s grandfather had worked on as an immigrant from the Philippines in the mid-1900s.

Then, finally, on August 11, a call to Raven’s brother went through. He’d managed to get a cell signal while standing on the beach.

“Is everyone okay?” Raven asked.

“We’re just trying to find Dad,” Raphael Jr. told his brother.

Raven Imperial sitting in the grass
From his current home in Northern California, Raven Imperial spent days not knowing what had happened to his family in Maui.
WINNI WINTERMEYER

In the three days following the fire, the rest of the family members had slowly found their way back to each other. Raven would learn that most of his immediate family had been separated for 72 hours: Raphael Jr. had been marooned in Kaanapali, four miles north of Lahaina; Christine had been stuck in Wailuku, more than 20 miles away; both young parents had been separated from their son, who escaped with Christine’s parents. Raven’s mother, Evelyn, had also been in Kaanapali, though not where Raphael Jr. had been.

But no one was in contact with Rafael Sr. Evelyn had left their home around noon on the day of the fire and headed to work. That was the last time she had seen him. The last time they had spoken was when she called him just after 3 p.m. and asked: “Are you working?” He replied “No,” before the phone abruptly cut off.

“Everybody was found,” Raven says. “Except for my father.”

Within the week, Raven boarded a plane and flew back to Maui. He would keep looking for him, he told himself, for as long as it took.


That same week, Kim Gin was also on a plane to Maui. It would take half a day to get there from Alabama, where she had moved after retiring from the Sacramento County Coroner’s Office in California a year earlier. But Gin, now an independent consultant on death investigations, knew she had something to offer the response teams in Lahaina. Of all the forensic investigators in the country, she was one of the few who had experience in the immediate aftermath of a wildfire on the vast scale of Maui’s. She was also one of the rare investigators well versed in employing rapid DNA analysis—an emerging but increasingly vital scientific tool used to identify victims in unfolding mass-casualty events.

Gin started her career in Sacramento in 2001 and was working as the coroner 17 years later when Butte County, California, close to 90 miles north, erupted in flames. She had worked fire investigations before, but nothing like the Camp Fire, which burned more than 150,000 acres—an area larger than the city of Chicago. The tiny town of Paradise, the epicenter of the blaze, didn’t have the capacity to handle the rising death toll. Gin’s office had a refrigerated box truck and a 52-foot semitrailer, as well as a morgue that could handle a couple of hundred bodies.

Kim Gin
Kim Gin, the former Sacramento County coroner, had worked fire investigations in her career, but nothing prepared her for the 2018 Camp Fire.
BRYAN TARNOWSKI

“Even though I knew it was a fire, I expected more identifications by fingerprints or dental [records]. But that was just me being naïve,” she says. She quickly realized that putting names to the dead, many burned beyond recognition, would rely heavily on DNA.

“The problem then became how long it takes to do the traditional DNA [analysis],” Gin explains, speaking to a significant and long-standing challenge in the field—and the reason DNA identification has long been something of a last resort following large-scale disasters.

While more conventional identification methods—think fingerprints, dental information, or matching something like a knee replacement to medical records—can be a long, tedious process, they don’t take nearly as long as traditional DNA testing.

Historically, the process of making genetic identifications would often stretch on for months, even years. In fires and other situations that result in badly degraded bone or tissue, it can become even more challenging and time consuming to process DNA, which traditionally involves reading the 3 billion base pairs of the human genome and comparing samples found in the field against samples from a family member. Meanwhile, investigators frequently need equipment from the US Department of Justice or the county crime lab to test the samples, so backlogs often pile up.

A supply kit with swabs, gloves, and other items needed to take a DNA sample in the field.
A demo chip for ANDE’s rapid DNA box.

This creates a wait that can be horrendous for family members. Death certificates, federal assistance, insurance money—“all that hinges on that ID,” Gin says. Not to mention the emotional toll of not knowing if their loved ones are alive or dead.

But over the past several years, as fires and other climate-change-fueled disasters have become more common and more cataclysmic, the way their aftermath is processed and their victims identified has been transformed. The grim work following a disaster remains—surveying rubble and ash, distinguishing a piece of plastic from a tiny fragment of bone—but landing a positive identification can now take just a fraction of the time it once did, which may in turn bring families some semblance of peace more swiftly than ever before.

The key innovation driving this progress has been rapid DNA analysis, a methodology that focuses on just over two dozen regions of the genome. The 2018 Camp Fire was the first time the technology was used in a large, live disaster setting, and the first time it was used as the primary way to identify victims. The technology—deployed in small high-tech field devices developed by companies like industry leader ANDE, or in a lab with other rapid DNA techniques developed by Thermo Fisher—is increasingly being used by the US military on the battlefield, and by the FBI and local police departments after sexual assaults and in instances where confirming an ID is challenging, like cases of missing or murdered Indigenous people or migrants. Yet arguably the most effective way to use rapid DNA is in incidents of mass death. In the Camp Fire, 22 victims were identified using traditional methods, while rapid DNA analysis helped with 62 of the remaining 63 victims; it has also been used in recent years following hurricanes and floods, and in the war in Ukraine.

“These families are going to have to wait a long period of time to get identification. How do we make this go faster?”

Tiffany Roy, a forensic DNA expert with consulting company ForensicAid, says she’d be concerned about deploying the technology in a crime scene, where quality evidence is limited and can be quickly “exhausted” by well-meaning investigators who are “not trained DNA analysts.” But, on the whole, Roy and other experts see rapid DNA as a major net positive for the field. “It is definitely a game-changer,” adds Sarah Kerrigan, a professor of forensic science at Sam Houston State University and the director of its Institute for Forensic Research, Training, and Innovation.

But back in those early days after the Camp Fire, all Gin knew was that nearly 1,000 people had been listed as missing, and she was tasked with helping to identify the dead. “Oh my goodness,” she remembers thinking. “These families are going to have to wait a long period of time to get identification. How do we make this go faster?”


Ten days

One flier pleading for information about “Uncle Raffy,” as people in the community knew Rafael Sr., was posted on a brick-red stairwell outside Paradise Supermart, a Filipino store and restaurant in Kahului, 25 miles away from the destruction. In it, just below the words “MISSING Lahaina Victim,” the 63-year-old grandfather smiled with closed lips, wearing a blue Hawaiian shirt, his right hand curled in the shaka sign, thumb and pinky pointing out.

Rafael Imperial Sr.
Raven remembers how hard his dad, Rafael, worked. His three jobs took him all over town and earned him the nickname “Mr. Aloha.”
COURTESY OF RAVEN IMPERIAL

“Everybody knew him from restaurant businesses,” Raven says. “He was all over Lahaina, very friendly to everybody.” Raven remembers how hard his dad worked, juggling three jobs: as a draft tech for Anheuser-Busch, setting up services and delivering beer all across town; as a security officer at Allied Universal security services; and as a parking booth attendant at the Sheraton Maui. He connected with so many people that coworkers, friends, and other locals gave him another nickname: “Mr. Aloha.”

Raven also remembers how his dad had always loved karaoke, where he would sing “My Way,” by Frank Sinatra. “That’s the only song that he would sing,” Raven says. “Like, on repeat.” 

Since their home had burned down, the Imperials ran their search out of a rental unit in Kihei, which was owned by a local woman one of them knew through her job. The woman had opened her rental to three families in all. It quickly grew crowded with side-by-side beds and piles of donations.

Each day, Evelyn waited for her husband to call.

She managed to catch up with one of their former tenants, who recalled asking Rafael Sr. to leave the house on the day of the fires. But she did not know if he actually did. Evelyn spoke to other neighbors who also remembered seeing Rafael Sr. that day; they told her that they had seen him go back into the house. But they too did not know what happened to him after.

A friend of Raven’s who got into the largely restricted burn zone told him he’d spotted Rafael Sr.’s Toyota Tacoma on the street, not far from their house. He sent a photo. The pickup was burned out, but a passenger-side door was open. The family wondered: Could he have escaped?

Evelyn called the Red Cross. She called the police. Nothing. They waited and hoped.


Back in Paradise in 2018, as Gin worried about the scores of waiting families, she learned there might in fact be a better way to get a positive ID—and a much quicker one. A company called ANDE Rapid DNA had already volunteered its services to the Butte County sheriff and promised that its technology could process DNA and get a match in less than two hours.

“I’ll try anything at this point,” Gin remembers telling the sheriff. “Let’s see this magic box and what it’s going to do.”

In truth, Gin did not think it would work, and certainly not in two hours. When the device arrived, it was “not something huge and fantastical,” she recalls thinking. A little bigger than a microwave, it looked “like an ordinary box that beeps, and you put stuff in, and out comes a result.”

The “stuff,” more specifically, was a cheek or bloodstain swab, or a piece of muscle, or a fragment of bone that had been crushed and demineralized. Instead of reading 3 billion base pairs in this sample, the ANDE machine examined just 27 genome regions characterized by particular repeating sequences. It would be nearly impossible for two unrelated people to have the same repeating sequence in those regions. But a parent and child, or siblings, would match, meaning you could compare DNA found in human remains with DNA samples taken from potential victims’ family members. Making it even more efficient for a coroner like Gin, the machine could run up to five tests at a time and could be operated by anyone with just a little basic training.

ANDE’s chief scientific officer, Richard Selden, a pediatrician who has a PhD in genetics from Harvard, didn’t come up with the idea to focus on a smaller, more manageable number of base pairs to speed up DNA analysis. But it did become something of an obsession for him after he watched the O.J. Simpson trial in the mid-1990s and began to grasp just how long it took for DNA samples to get processed in crime cases. By this point, the FBI had already set up a system for identifying DNA by looking at just 13 regions of the genome; it would later add seven more. Researchers in other countries had also identified other sets of regions to analyze. Drawing on these various methodologies, Selden homed in on the 27 specific areas of DNA he thought would be most effective to examine, and he launched ANDE in 2004.

But he had to build a device to do the analysis. Selden wanted it to be small, portable, and easily used by anyone in the field. In a conventional lab, he says, “from the moment you take that cheek swab to the moment that you have the answer, there are hundreds of laboratory steps.” Traditionally, a human is holding test tubes and iPads and sorting through or processing paperwork. Selden compares it all to using a “conventional typewriter.” He effectively created the more efficient laptop version of DNA analysis by figuring out how to speed up that same process.

No longer would a human have to “open up this bottle and put [the sample] in a pipette and figure out how much, then move it into a tube here.” It is all automated, and the process is confined to a single device.

gloved hands load a chip cartridge into the ANDE machine
The rapid DNA analysis boxes from ANDE can be used in the field by anyone with just a bit of training.
ANDE

Once a sample is placed in the box, the DNA binds to a filter in water and the rest of the sample is washed away. Air pressure propels the purified DNA to a reconstitution chamber and then flattens it into a sheet less than a millimeter thick, which is subjected to about 6,000 volts of electricity. It’s “kind of an obstacle course for the DNA,” he explains.

The machine then interprets the donor’s genome and provides an allele table, with a graph showing the peaks for each region and their sizes. This data is then compared with samples from potential relatives, and the machine reports when it has a match.
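The kinship logic behind that comparison can be sketched in a few lines: a child inherits one allele per locus from each parent, so a true parent and child must share at least one allele at every locus both samples yield. The sketch below is a simplified illustration of that screening rule, not ANDE’s actual algorithm, and the locus names and allele values are hypothetical examples.

```python
def could_be_parent_child(profile_a, profile_b):
    """Screen two STR profiles for possible parent-child kinship.

    A parent and child share at least one allele at every autosomal
    STR locus. Profiles map locus name -> pair of allele repeat counts.
    Loci missing from a degraded sample are skipped.
    """
    for locus, alleles_a in profile_a.items():
        alleles_b = profile_b.get(locus)
        if alleles_b is None:
            continue  # locus didn't survive in this sample; skip it
        if not set(alleles_a) & set(alleles_b):
            return False  # no shared allele at this locus: excluded
    return True

# Hypothetical three-locus profiles (real panels use around 27 loci).
child    = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}
parent   = {"D3S1358": (17, 18), "vWA": (16, 19), "FGA": (20, 21)}
stranger = {"D3S1358": (12, 13), "vWA": (11, 18), "FGA": (19, 26)}

print(could_be_parent_child(child, parent))    # True: an allele shared at every locus
print(could_be_parent_child(child, stranger))  # False: excluded at a mismatching locus
```

A shared allele at every locus doesn’t prove kinship on its own; in practice a statistical likelihood is computed across the full panel, which is why the real system reports a match only when the combined evidence is overwhelming.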

Rapid DNA analysis as a technology first received approval for use by the US military in 2014, and by the FBI two years later. Then the Rapid DNA Act of 2017 enabled all US law enforcement agencies to use the technology on site and in real time as an alternative to sending samples off to labs and waiting for results.

But by the time of the Camp Fire the following year, most coroners and local police officers still had no familiarity or experience with it. Neither did Gin. So she decided to put the “magic box” through a test: she gave Selden, who had arrived at the scene to help with the technology, a DNA sample from a victim whose identity she’d already confirmed via fingerprint. The box took about 90 minutes to come back with a result. And to Gin’s surprise, it was the same identification she had already made. Just to make sure, she ran several more samples through the box, also from victims she had already identified. Again, results were returned swiftly, and they confirmed hers.

“I was a believer,” she says.

The next year, Gin helped investigators use rapid DNA technology in the 2019 Conception disaster, when a dive boat caught fire in the Channel Islands off Santa Barbara. “We ID’d 34 victims in 10 days,” Gin says. “Completely done.” Gin now works independently to assist other investigators in mass-fatality events and helps them learn to use the ANDE system.

Its speed made the box a groundbreaking innovation. Death investigations, Gin learned long ago, are not as much about the dead as about giving peace of mind, justice, and closure to the living.


Fourteen days

Many of the people who were initially on the Lahaina missing persons list turned up in the days following the fire. Tearful reunions ensued.

Two weeks after the fire, the Imperials hoped they’d have the same outcome as they loaded into a truck to check out some exciting news: someone had reported seeing Rafael Sr. at a local church. He’d been eating and had burns on his hands and looked disoriented. The caller said the sighting had occurred three days after the fire. Could he still be in the vicinity?

When the family arrived, they couldn’t confirm the lead.

“We were getting a lot of calls,” Raven says. “There were a lot of rumors saying that they found him.”

None of them panned out. They kept looking.


The scenes following large-scale destructive events like the fires in Paradise and Lahaina can be sprawling and dangerous, with victims sometimes dispersed across a large swath of land if many people died trying to escape. Teams need to meticulously and tediously search mountains of mixed, melted, or burned debris just to find bits of human remains that might otherwise be mistaken for a piece of plastic or drywall. Compounding the challenge is the commingling of remains—from people who died huddled together, or in the same location, or alongside pets or other animals.

This is when the work of forensic anthropologists is essential: they have the skills to differentiate between human and animal bones and to find the critical samples that are needed by DNA specialists, fire and arson investigators, forensic pathologists and dentists, and other experts. Rapid DNA analysis “works best in tandem with forensic anthropologists, particularly in wildfires,” Gin explains.

“The first step is determining, is it a bone?” says Robert Mann, a forensic anthropologist at the University of Hawaii John A. Burns School of Medicine on Oahu. Then, is it a human bone? And if so, which one?

Robert Mann in a lab coat with a human skeleton on the table in front of him
Forensic anthropologist Robert Mann has spent his career identifying human remains.
AP PHOTO/LUCY PEMONI

Mann has served on teams that have helped identify the remains of victims after the terrorist attacks of September 11, 2001, and the 2004 Indian Ocean tsunami, among other mass-casualty events. He remembers how in one investigation he received an object believed to be a human bone; it turned out to be a plastic replica. In another case, he was looking through the wreckage of a car accident and spotted what appeared to be a human rib fragment. Upon closer examination, he identified it as a piece of rubber weather stripping from the rear window. “We examine every bone and tooth, no matter how small, fragmented, or burned it might be,” he says. “It’s a time-consuming but critical process because we can’t afford to make a mistake or overlook anything that might help us establish the identity of a person.”

For Mann, the Maui disaster felt particularly immediate. It was right near his home. He was deployed to Lahaina about a week after the fire, as one of more than a dozen forensic anthropologists on scene from universities in places including Oregon, California, and Hawaii.

While some anthropologists searched the recovery zone—looking through what was left of homes, cars, buildings, and streets, and preserving fragmented and burned bone, body parts, and teeth—Mann was stationed in the morgue, where samples were sent for processing.

It used to be much harder to find samples that scientists believed could provide DNA for analysis, but that’s also changed recently as researchers have learned more about what kind of DNA can survive disasters. Two kinds are used in forensic identity testing: nuclear DNA (found within the nuclei of eukaryotic cells) and mitochondrial DNA (found in the mitochondria, organelles located outside the nucleus). Both, it turns out, have survived plane crashes, wars, floods, volcanic eruptions, and fires.

Theories have also been evolving over the past few decades about how to preserve and recover DNA specifically after intense heat exposure. One 2018 study found that a majority of the samples actually survived high heat. Researchers are also learning more about how bone characteristics change depending on the degree of heat exposure. “Different temperatures and how long a body or bone has been exposed to high temperatures affect the likelihood that it will or will not yield usable DNA,” Mann says.

Typically, forensic anthropologists help select which bone or tooth to use for DNA testing, says Mann. Until recently, he explains, scientists believed “you cannot get usable DNA out of burned bone.” But thanks to these new developments, researchers are realizing that with some bone that has been charred, “they’re able to get usable, good DNA out of it,” Mann says. “And that’s new.” Indeed, Selden explains that “in a typical bad fire, what I would expect is 80% to 90% of the samples are going to have enough intact DNA” to get a result from rapid analysis. The rest, he says, may require deeper sequencing.

The aftermath of large-scale destructive events like the fire in Lahaina can be sprawling and dangerous. Teams need to meticulously search through mountains of mixed, melted, or burned debris to find bits of human remains.
GLENN FAWCETT VIA ALAMY

Anthropologists can often tell “simply by looking” if a sample will be good enough to help create an ID. If it’s been burned and blackened, “it might be a good candidate for DNA testing,” Mann says. But if it’s calcined (white and “china-like”), he says, the DNA has probably been destroyed.

On Maui, Mann adds, rapid DNA analysis made the entire process more efficient, with tests coming back in just two hours. “That means while you’re doing the examination of this individual right here on the table, you may be able to get results back on who this person is,” he says. From inside the lab, he watched the science unfold as the number of missing on Maui quickly began to go down.

Within three days, 42 people’s remains were recovered inside Maui homes or buildings and another 39 outside, along with 15 inside vehicles and one in the water. The first confirmed identification of a victim on the island occurred four days after the fire—this one via fingerprint. The ANDE rapid DNA team arrived two days after the fire and deployed four boxes to analyze multiple samples of DNA simultaneously. The first rapid DNA identification happened within that first week.


Sixteen days

More than two weeks after the fire, the list of missing and unaccounted-for individuals was dwindling, but it still had 388 people on it. Rafael Sr. was one of them.

Raven and Raphael Jr. raced to another location: Cupies café in Kahului, more than 20 miles from Lahaina. Someone had reported seeing him there.

Poster taped to wall that reads, "MISSING Lahaina Victim. Rafael Imperial 'Raffy'" with the contact number redacted
Rafael’s family hung posters around the island, desperately hoping for reliable information. (Phone number redacted by MIT Technology Review.)
ERIKA HAYASAKI

The tip was another false lead.

As family and friends continued to search, they stopped by support hubs that had sprouted up around the island, receiving information about Red Cross and FEMA assistance or donation programs as volunteers distributed meals and clothes. These hubs also sometimes offered DNA testing.

Raven still had a “50-50” feeling that his dad might be out there somewhere. But he was beginning to lose some of that hope.


Gin was stationed at one of the support hubs, which offered food, shelter, clothing, and other assistance. “You could also go in and give biological samples,” she says. “We actually moved one of the rapid DNA instruments into the family assistance center, and we were running the family samples there.” Eliminating the need to transport samples from a site to a testing center further cut down any lag time.

Selden had once believed that the biggest hurdle for his technology would be building the actual device, which took about eight years to design and another four years to perfect. But at least in Lahaina, it was something else: persuading distraught and traumatized family members to offer samples for the test.

Nationally, there are serious privacy concerns when it comes to rapid DNA technology. Organizations like the ACLU warn that as police departments and governments begin deploying it more often, there must be more oversight, monitoring, and training in place to ensure that it is always used responsibly, even if that adds some time and expense. But the space is still largely unregulated, and the ACLU fears it could give rise to rogue DNA databases “with far fewer quality, privacy, and security controls than federal databases.”

Family support centers popped up around Maui to offer clothing, food, and other assistance, and sometimes to take DNA samples to help find missing family members.

In a place like Hawaii, these fears are even more palpable. The islands have a long history of US colonialism, military dominance, and exploitation of the Native population and of the large immigrant working-class population employed in the tourism industry.

Native Hawaiians in particular have a fraught relationship with DNA testing. Under a US law signed in 1921, thousands have a right to live on 200,000 designated acres of land trust, almost for free. It was a kind of reparations measure put in place to assist Native Hawaiians whose land had been stolen. Back in 1893, a small group of American sugar plantation owners and descendants of Christian missionaries, backed by US Marines, held Hawaii’s Queen Lili‘uokalani in her palace at gunpoint and forced her to sign over 1.8 million acres to the US, which ultimately seized the islands in 1898.

Queen Liliuokalani in a formal seated portrait
Hawaii’s Queen Lili‘uokalani was forced to sign over 1.8 million acres to the US.
PUBLIC DOMAIN VIA WIKIMEDIA COMMONS

To lay their claim to the designated land and property, individuals first must prove via DNA tests how much Hawaiian blood they have. But many residents who have submitted their DNA and qualified for the land have died on waiting lists before ever receiving it. Today, Native Hawaiians are struggling to stay on the islands amid skyrocketing housing prices, while others have been forced to move away.

Meanwhile, after the fires, Filipino families faced particularly stark barriers to getting information about financial support, government assistance, housing, and DNA testing. Filipinos make up about 25% of Hawaii’s population and 40% of its workers in the tourism industry. They also make up 46% of undocumented residents in Hawaii—more than any other group. Some encountered language barriers, since they primarily spoke Tagalog or Ilocano. Some worried that people would try to take over their burned land and develop it for themselves. For many, being asked for DNA samples only added to the confusion and suspicion.

Selden says he hears the overall concerns about DNA testing: “If you ask people about DNA in general, they think of Brave New World and [fear] the information is going to be used to somehow harm or control people.” But just like regular DNA analysis, he explains, rapid DNA analysis “has no information on the person’s appearance, their ethnicity, their health, their behavior either in the past, present, or future.” He describes it as a more accurate fingerprint.

Gin tried to help the Lahaina family members understand that their DNA “isn’t going to go anywhere else.” She told them their sample would ultimately be destroyed, something programmed to occur inside ANDE’s machine. (Selden says the boxes were designed to do this for privacy purposes.) But sometimes, Gin realizes, these promises are not enough.

“You still have a large population of people that, in my experience, don’t want to give up their DNA to a government entity,” she says. “They just don’t.”

Kim Gin
Gin understands that family members are often nervous to give their DNA samples. She promises the process of rapid DNA analysis respects their privacy, but she knows sometimes promises aren’t enough.
BRYAN TARNOWSKI

The immediate aftermath of a disaster, when people are suffering from shock, PTSD, and displacement, is the worst possible moment to try to educate them about DNA tests and explain the technology and privacy policies. “A lot of them don’t have anything,” Gin says. “They’re just wondering where they’re going to lay their heads down, and how they’re going to get food and shelter and transportation.”

Unfortunately, Lahaina’s survivors won’t be the last people in this position. Particularly given the world’s current climate trajectory, the risk of deadly events in just about every neighborhood and community will rise. And figuring out who survived and who didn’t will be increasingly difficult. Mann recalls his work on the Indian Ocean tsunami, when over 227,000 people died. “The bodies would float off, and they ended up 100 miles away,” he says. Investigators were at times left with remains that had been consumed by sea creatures or degraded by water and weather. He remembers how they struggled to determine: “Who is the person?”

Mann has spent his own career identifying people including “missing soldiers, sailors, airmen, Marines, from all past wars,” as well as people who have died recently. That closure is meaningful for family members, some of them decades, or even lifetimes, removed.

In the end, distrust and conspiracy theories did in fact hinder DNA-identification efforts on Maui, according to a police department report.


33 days

By the time Raven went to a family resource center to submit a swab, some four weeks had gone by. He remembers the quick rub inside his cheek.

Some of his family had already offered their own samples before Raven provided his. For them, waiting wasn’t an issue of mistrusting the testing as much as experiencing confusion and chaos in the weeks after the fire. They believed Uncle Raffy was still alive, and they still held hope of finding him. Offering DNA was a final step in their search.

“I did it for my mom,” Raven says. She still wanted to believe he was alive, but Raven says: “I just had this feeling.” His father, he told himself, must be gone.

Just a day after he gave his sample—on September 11, more than a month after the fire—he was at the temporary house in Kihei when he got the call: “It was,” Raven says, “an automatic match.”

Raven Imperial standing in the shade of trees wearing a "Lahaina Strong; Out of the ashes" shirt
Raven gave a cheek swab about a month after the disappearance of his father. It didn’t take long for him to get a phone call: “It was an automatic match.”
WINNI WINTERMEYER

The investigators let the family know the address where the remains of Rafael Sr. had been found, several blocks away from their home. They put it into Google Maps and realized it was where some family friends lived. The mother and son of that family had been listed as missing too. Rafael Sr., it seemed, had been with or near them in the end.

By October, investigators in Lahaina had obtained and analyzed 215 DNA samples from family members of the missing. By December, DNA analysis had confirmed the identities of 63 of the most recent count of 101 victims. Seventeen more had been identified by fingerprint, 14 via dental records, and two through medical devices, along with three who died in the hospital. While some of the most damaged remains would still be undergoing DNA testing months after the fires, the pace marks a drastic improvement over the identification processes for 9/11 victims, for instance—today, over 20 years later, some are still being identified by DNA.

Rafael Imperial Sr.
Raven remembers how much his father loved karaoke. His favorite song was “My Way,” by Frank Sinatra. 
COURTESY OF RAVEN IMPERIAL

Rafael Sr. was born on October 22, 1959, in Naga City, the Philippines. The family held his funeral on his birthday last year. His relatives flew in from Michigan, the Philippines, and California.

Raven says in those weeks of waiting—after all the false tips, the searches, the prayers, the glimmers of hope—deep down the family had already known he was gone. But for Evelyn, Rafael Jr., and the rest of their family, DNA tests were necessary—and, ultimately, a relief, Raven says. “They just needed that closure.”

Erika Hayasaki is an independent journalist based in Southern California.

Last summer was the hottest in 2,000 years. Here’s how we know.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I’m ready for summer, but if this year is anything like last year, it’s going to be a doozy. In fact, the summer of 2023 in the Northern Hemisphere was the hottest in over 2,000 years, according to a new study released this week. 

If you’ve been following the headlines, you probably already know that last year was a hot one. But I was gobsmacked by this paper’s title when it came across my desk. The warmest in 2,000 years—how do we even know that?

There weren’t exactly thermometers around in the year 1, so scientists have to get creative when it comes to comparing our climate today with that of centuries, or even millennia, ago. Here’s how our world stacks up against the climate of the past, how we know, and why it matters for our future. 

Today, there are thousands and thousands of weather stations around the globe, tracking the temperature from Death Valley to Mount Everest. So there’s plenty of data to show that 2023 was, in a word, a scorcher. 

Daily global ocean temperatures were the warmest ever recorded for over a year straight. Levels of sea ice hit new lows. And of course, the year saw the highest global average temperatures since record-keeping began in 1850.  

But scientists decided to look even further back into the past for a year that could compare to our current temperatures. To do so, they turned to trees, which can act as low-tech weather stations.

The concentric rings inside a tree are evidence of the plant’s yearly growth cycles. Lighter colors correspond to quick growth over the spring and summer, while the darker rings correspond to the fall and winter. Count the pairs of light and dark rings, and you can tell how many years a tree has lived. 

Trees tend to grow faster during warm, wet years and slower during colder ones. So scientists can not only count the rings but measure their thickness, and use that as a gauge for how warm any particular year was. They also look at factors like density and track different chemical signatures found inside the wood. You don’t even need to cut down a tree to get its help with climatic studies—you can just drill out a small cylinder from the tree’s center, called a core, and study the patterns.

The oldest living trees allow us to peek a few centuries into the past. Beyond that, it’s a matter of cross-referencing the patterns on dead trees with living ones, extending the record back in time like putting a puzzle together. 
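The sliding-match idea behind that puzzle-building, known as crossdating, can be sketched in a few lines of Python. The ring widths below are synthetic, invented for illustration; real crossdating uses detrended series from many trees plus statistical safeguards, but the core move is the same: slide the undated series along the dated master chronology and keep the offset where the growth patterns correlate best.

```python
# Toy crossdating sketch: align a "floating" ring-width series from a
# dead tree against a dated master chronology by sliding correlation.
# All numbers here are made up for illustration.

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Master chronology: ring widths (mm) for the years 1900-1919.
master = [1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.4, 1.0, 0.6, 1.3,
          0.9, 1.6, 0.8, 1.2, 1.0, 0.7, 1.5, 1.1, 0.9, 1.4]
master_start_year = 1900

# Undated sample from a dead tree: it secretly covers 1905-1914
# (master[5:15] plus small measurement noise).
sample = [0.72, 1.38, 1.03, 0.61, 1.27, 0.93, 1.58, 0.83, 1.18, 1.01]

# Try every possible alignment and keep the best-correlated one.
best_offset, best_r = max(
    ((off, correlation(master[off:off + len(sample)], sample))
     for off in range(len(master) - len(sample) + 1)),
    key=lambda pair: pair[1],
)
first_year = master_start_year + best_offset
print(first_year, round(best_r, 3))  # the sample dates to 1905
```

In practice dendrochronologists also check that the match is statistically far stronger than the runner-up alignment before accepting a date.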

It’s taken several decades of work and hundreds of scientists to develop the records that researchers used for this new paper, said Max Torbenson, one of the authors of the study, on a press call. There are over 10,000 trees from nine regions across the Northern Hemisphere represented, allowing the researchers to draw conclusions about individual years over the past two millennia. The year 246 CE once held the crown for the warmest summer in the Northern Hemisphere in the last 2,000 years. But 25 of the last 28 years have beaten that record, Torbenson says, and 2023’s summer tops them all. 

These conclusions are limited to the Northern Hemisphere, since there are only a few tree ring records from the Southern Hemisphere, says Jan Esper, lead author of the new study. And using tree rings doesn’t work very well for the tropics because seasons look different there, he adds. Since there’s no winter, there’s usually not as reliable an alternating pattern in tropical tree rings, though some trees do have annual rings that track the wet and dry periods of the year. 

Paleoclimatologists, who study ancient climates, can use other methods to get a general idea of what the climate looked like even earlier—tens of thousands to millions of years ago. 

The biggest difference between the new study using tree rings and methods of looking back further into the past is the precision. Scientists can, with reasonable certainty, use tree rings to draw conclusions about individual years in the Northern Hemisphere (536 CE was the coldest, for instance, likely because of volcanic activity). Any information from further back than the past couple of thousand years will be more of a general trend than a specific data point representing a single year. But those records can still be very useful. 

The oldest glaciers on the planet are at least a million years old, and scientists can drill down into the ice for samples. By examining the ratio of gases like oxygen, carbon dioxide, and nitrogen inside these ice cores, researchers can figure out the temperature of the time corresponding to the layers in the glacier. The oldest continuous ice-core record, which was collected in Antarctica, goes back about 800,000 years. 

Researchers can use fossils to look even further back into Earth’s temperature record. For one 2020 study, researchers drilled into the seabed and looked at the sediment and tiny preserved shells of ancient organisms. From the chemical signatures in those samples, they found that the temperatures we might be on track to record may be hotter than anything the planet has experienced on a global scale in tens of millions of years. 

It’s a bit sobering to know that we’re changing the planet in such a dramatic way. 

The good news is, we know what we need to do to turn things around: cut emissions of planet-warming gases like carbon dioxide and methane. The longer we wait, the more expensive and difficult it will be to stop warming and reverse it, as Esper said on the press call: “We should do as much as possible, as soon as possible.” 


Now read the rest of The Spark

Related reading

Last year broke all sorts of climate records, from emissions to ocean temperatures. For more on the data, check out this story from December.

How hot is too hot for the human body? I tackled that very question in a 2021 story.  

Two engineers in lab coats monitor the thermal battery powering a conveyor belt of bottles
SIMON LANDREIN

Another thing

Readers chose thermal batteries as the 11th Breakthrough Technology of 2024. If you want to hear more about what thermal batteries are, how they work, and why this all matters, join us for the latest in our Roundtables series of online events, where I’ll be getting into the nitty-gritty details and answering some audience questions.

This event is exclusively for subscribers, so subscribe if you haven’t already, and then register here to join us tomorrow, May 16, at noon Eastern time. Hope to see you there! 

Keeping up with climate  

Scientists just recorded the largest ever annual leap in the amount of carbon dioxide in the atmosphere. The concentration of the planet-warming gas in March 2024 was 4.7 parts per million higher than it was a year before. (The Guardian)

Tesla has reportedly begun rehiring some of the workers who were laid off from its charging team in recent weeks. (Bloomberg)

→ To catch up on what’s going on at Tesla, and what it means for the future of EV charging and climate tech more broadly, check out the newsletter from last week if you missed it. (MIT Technology Review)

A new rule could spur thousands of miles of new power lines, making it easier to add renewables to the grid in the US. The Federal Energy Regulatory Commission will require grid operators to plan 20 years ahead, considering things like the speed of wind and solar installations. (New York Times)

Where does carbon dioxide go after it’s been vacuumed out of the atmosphere? Here are 10 options. (Latitude Media)

Ocean temperatures have been extremely high, shattering records over the past year. All that heat could help fuel a particularly busy upcoming hurricane season. (E&E News)

New tariffs in the US will tack on additional costs to a wide range of Chinese imports, including batteries and solar cells. The tariff on EVs will take a particularly drastic jump, going from 27.5% to 102.5%. (Associated Press)

A reporter took a trip to the Beijing Auto Show and drove dozens of EVs. His conclusion? Chinese EVs are advancing much faster than Western automakers can keep up with. (InsideEVs)

Harnessing solar power via satellites in space and beaming it down to Earth is a tempting dream. But the reality, as you might expect, is probably not so rosy. (IEEE Spectrum)

A wave of retractions is shaking physics

Recent highly publicized scandals have gotten the physics community worried about its reputation—and its future. Over the last five years, several claims of major breakthroughs in quantum computing and superconducting research, published in prestigious journals, have disintegrated as other researchers found they could not reproduce the blockbuster results. 

Last week, around 50 physicists, scientific journal editors, and emissaries from the National Science Foundation gathered at the University of Pittsburgh to discuss the best way forward. “To be honest, we’ve let it go a little too long,” says physicist Sergey Frolov of the University of Pittsburgh, one of the conference organizers. 

The attendees gathered in the wake of retractions from two prominent research teams. One team, led by physicist Ranga Dias of the University of Rochester, claimed that it had invented the world’s first room temperature superconductor in a 2023 paper in Nature. After independent researchers reviewed the work, a subsequent investigation from Dias’s university found that he had fabricated and falsified his data. Nature retracted the paper in November 2023. Last year, Physical Review Letters retracted a 2021 publication on unusual properties in manganese sulfide that Dias co-authored. 

The other high-profile research team consisted of researchers affiliated with Microsoft working to build a quantum computer. In 2021, Nature retracted the team’s 2018 paper that claimed the creation of a pattern of electrons known as a Majorana particle, a long-sought breakthrough in quantum computing. Independent investigations of that research found that the researchers had cherry-picked their data, thus invalidating their findings. Another, less-publicized research team pursuing Majorana particles met a similar fate: in 2022, Science retracted a 2017 article claiming indirect evidence of the particles.

In today’s scientific enterprise, scientists perform research and submit the work to editors. The editors assign anonymous referees to review the work, and if the paper passes review, the work becomes part of the accepted scientific record. When researchers do publish bad results, it’s not clear who should be held accountable—the referees who approved the work for publication, the journal editors who published it, or the researchers themselves. “Right now everyone’s kind of throwing the hot potato around,” says materials scientist Rachel Kurchin of Carnegie Mellon University, who attended the Pittsburgh meeting.

Much of the three-day meeting, named the International Conference on Reproducibility in Condensed Matter Physics (a field that encompasses research into various states of matter and why they exhibit certain properties), focused on the basic scientific principle that an experiment and its analysis must yield the same results when repeated. “If you think of research as a product that is paid for by the taxpayer, then reproducibility is the quality assurance department,” Frolov told MIT Technology Review. Reproducibility offers scientists a check on their work, and without it, researchers might waste time and money on fruitless projects based on unreliable prior results, he says. 

In addition to presentations and panel discussions, there was a workshop during which participants split into groups and drafted ideas for guidelines that researchers, journals, and funding agencies could follow to prioritize reproducibility in science. The tone of the proceedings stayed civil and even lighthearted at times. Physicist Vincent Mourik of Forschungszentrum Jülich, a German research institution, showed a photo of a toddler eating spaghetti to illustrate his experience investigating another team’s now-retracted experiment. Occasionally the discussion almost sounded like a couples counseling session, with NSF program director Tomasz Durakiewicz asking a panel of journal editors and a researcher to reflect on their “intimate bond based on trust.”

But researchers did not shy from directly criticizing Nature, Science, and the Physical Review family of journals, all of which sent editors to attend the conference. During a panel, physicist Henry Legg of the University of Basel in Switzerland called out the journal Physical Review B for publishing a paper on a quantum computing device by Microsoft researchers that, for intellectual-property reasons, omitted information required for reproducibility. “It does seem like a step backwards,” Legg said. (Sitting in the audience, Physical Review B editor Victor Vakaryuk said that the paper’s authors had agreed to release “the remaining device parameters” by the end of the year.) 

Journals also tend to “focus on story,” said Legg, which can lead editors to be biased toward experimental results that match theoretical predictions. Jessica Thomas, the executive editor of the American Physical Society, which publishes the Physical Review journals, pushed back on Legg’s assertion. “I don’t think that when editors read papers, they’re thinking about a press release or [telling] an amazing story,” Thomas told MIT Technology Review. “I think they’re looking for really good science.” Describing science through narrative is a necessary part of communication, she says. “We feel a responsibility that science serves humanity, and if humanity can’t understand what’s in our journals, then we have a problem.” 

Frolov, whose independent review with Mourik of the Microsoft work spurred its retraction, said he and Mourik have had to repeatedly e-mail the Microsoft researchers and other involved parties to insist on data. “You have to learn how to be an asshole,” he told MIT Technology Review. “It shouldn’t be this hard.” 

At the meeting, editors pointed out that mistakes, misconduct, and retractions have always been a part of science in practice. “I don’t think that things are worse now than they have been in the past,” says Karl Ziemelis, an editor at Nature.

Ziemelis also emphasized that “retractions are not always bad.” While some retractions occur because of research misconduct, “some retractions are of a much more innocent variety—the authors having made or being informed of an honest mistake, and upon reflection, feel they can no longer stand behind the claims of the paper,” he said while speaking on a panel. Indeed, physicist James Hamlin of the University of Florida, one of the presenters and an independent reviewer of Dias’s work, discussed how he had willingly retracted a 2009 experiment published in Physical Review Letters in 2021 after another researcher’s skepticism prompted him to reanalyze the data. 

What’s new is that “the ease of sharing data has enabled scrutiny to a larger extent than existed before,” says Jelena Stajic, an editor at Science. Journals and researchers need a “more standardized approach to how papers should be written and what needs to be shared in peer review and publication,” she says.

Focusing on the scandals “can be distracting” from systemic problems in reproducibility, says attendee Frank Marsiglio, a physicist at the University of Alberta in Canada. Researchers aren’t required to make unprocessed data readily available for outside scrutiny. When Marsiglio has revisited his own published work from a few years ago, sometimes he’s had trouble recalling how his former self drew those conclusions because he didn’t leave enough documentation. “How is somebody who didn’t write the paper going to be able to understand it?” he says.

Problems can arise when researchers get too excited about their own ideas. “What gets the most attention are cases of fraud or data manipulation, like someone copying and pasting data or editing it by hand,” says conference organizer Brian Skinner, a physicist at Ohio State University. “But I think the much more subtle issue is there are cool ideas that the community wants to confirm, and then we find ways to confirm those things.”

But some researchers may publish bad data for a more straightforward reason. The academic culture, popularly described as “publish or perish,” creates an intense pressure on researchers to deliver results. “It’s not a mystery or pathology why somebody who’s under pressure in their work might misstate things to their supervisor,” said Eugenie Reich, a lawyer who represents scientific whistleblowers, during her talk.

Notably, the conference lacked perspectives from researchers based outside the US, Canada, and Europe, and from researchers at companies. In recent years, academics have flocked to companies such as Google, Microsoft, and smaller startups to do quantum computing research, and they have published their work in Nature, Science, and the Physical Review journals. Frolov says he reached out to researchers from a couple of companies, but “that didn’t work out just because of timing.” He aims to include researchers from that arena in future conversations.

After discussing the problems in the field, conference participants proposed feasible solutions for sharing data to improve reproducibility. They discussed how to persuade the community to view data sharing positively, rather than seeing the demand for it as a sign of distrust. They also brought up the practical challenges of asking graduate students to do even more work by preparing their data for outside scrutiny when it may already take them over five years to complete their degree. Meeting participants aim to publicly release a paper with their suggestions. “I think trust in science will ultimately go up if we establish a robust culture of shareable, reproducible, replicable results,” says Frolov. 

Sophia Chen is a science writer based in Columbus, Ohio. She has written for the society that publishes the Physical Review journals, and for the news section of Nature.

OpenAI and Google are launching supercharged AI assistants. Here’s how you can try them out.

This week, Google and OpenAI both announced they’ve built supercharged AI assistants: tools that can converse with you in real time and recover when you interrupt them, analyze your surroundings via live video, and translate conversations on the fly. 

OpenAI struck first on Monday, when it debuted its new flagship model GPT-4o. The live demonstration showed it reading bedtime stories and helping to solve math problems, all in a voice that sounded eerily like Joaquin Phoenix’s AI girlfriend in the movie Her (a trait not lost on CEO Sam Altman). 

On Tuesday, Google announced its own new tools, including a conversational assistant called Gemini Live, which can do many of the same things. It also revealed that it’s building a sort of “do-everything” AI agent, which is currently in development but will not be released until later this year.

Soon you’ll be able to explore for yourself to gauge whether you’ll turn to these tools in your daily routine as much as their makers hope, or whether they’re more like a sci-fi party trick that eventually loses its charm. Here’s what you should know about how to access these new tools, what you might use them for, and how much it will cost. 

OpenAI’s GPT-4o

What it’s capable of: The model can talk with you in real time, with a response delay of about 320 milliseconds, which OpenAI says is on par with natural human conversation. You can ask the model to interpret anything you point your smartphone camera at, and it can provide assistance with tasks like coding or translating text. It can also summarize information, and generate images, fonts, and 3D renderings. 

How to access it: OpenAI says it will start rolling out GPT-4o’s text and vision features in the web interface as well as the GPT app, but has not set a date. The company says it will add the voice functions in the coming weeks, although it’s yet to set an exact date for this either. Developers can access the text and vision features in the API now, but voice mode will launch only to a “small group” of developers initially.

How much it costs: Use of GPT-4o will be free, but OpenAI will set caps on how much you can use the model before you need to upgrade to a paid plan. Those who join one of OpenAI’s paid plans, which start at $20 per month, will have five times more capacity on GPT-4o. 

Google’s Gemini Live 

What is Gemini Live? This is the Google product most comparable to GPT-4o—a version of the company’s AI model that you can speak with in real time. Google says that you’ll also be able to use the tool to communicate via live video “later this year.” The company promises it will be a useful conversational assistant for things like preparing for a job interview or rehearsing a speech.

How to access it: Gemini Live launches in “the coming months” via Google’s premium AI plan, Gemini Advanced. 

How much it costs: Gemini Advanced offers a two-month free trial period and costs $20 per month thereafter. 

But wait, what’s Project Astra? Astra is a project to build a do-everything AI agent, which was demoed at Google’s I/O conference but will not be released until later this year.

People will be able to use Astra through their smartphones and possibly desktop computers, but the company is exploring other options too, such as embedding it into smart glasses or other devices, Oriol Vinyals, vice president of research at Google DeepMind, told MIT Technology Review.

Which is better?

It’s hard to tell without getting our hands on the full versions of these models ourselves. Google showed off Project Astra through a polished video, whereas OpenAI opted to debut GPT-4o via a seemingly more authentic live demonstration, but in both cases, the models were asked to do things the designers likely already practiced. The real test will come when they’re debuted to millions of users with unique demands.  

That said, if you compare OpenAI’s published videos with Google’s, the two leading tools look very similar, at least in their ease of use. To generalize, GPT-4o seems to be slightly ahead on audio, demonstrating realistic voices, conversational flow, and even singing, whereas Project Astra shows off more advanced visual capabilities, like being able to “remember” where you left your glasses. OpenAI’s decision to roll out the new features more quickly might mean its product will get more use at first than Google’s, which won’t be fully available until later this year. It’s too soon to tell which model “hallucinates” false information less often or creates more useful responses.

Are they safe?

Both OpenAI and Google say their models are well tested: OpenAI says GPT-4o was evaluated by more than 70 experts in fields like misinformation and social psychology, and Google has said that Gemini “has the most comprehensive safety evaluations of any Google AI model to date, including for bias and toxicity.” 

But these companies are building a future where AI models search, vet, and evaluate the world’s information for us to serve up a concise answer to our questions. Even more so than with simpler chatbots, it’s wise to remain skeptical about what they tell you.

Additional reporting by Melissa Heikkilä.

Optimizing the supply chain with a data lakehouse

When a commercial ship travels from the port of Ras Tanura in Saudi Arabia to Tokyo Bay, it’s not only carrying cargo; it’s also transporting millions of data points across a wide array of partners and complex technology systems.

Consider, for example, Maersk. The global shipping container and logistics company has more than 100,000 employees, offices in 120 countries, and operates about 800 container ships that can each hold 18,000 tractor-trailer containers. From manufacture to delivery, the items within these containers carry hundreds or thousands of data points, highlighting the amount of supply chain data organizations manage on a daily basis.

Until recently, access to the bulk of an organization’s supply chain data has been limited to specialists, distributed across myriad data systems. Constrained by traditional data warehouse limitations, maintaining the data requires considerable engineering effort, heavy oversight, and substantial financial commitment. Today, a huge amount of data—generated by an increasingly digital supply chain—languishes in data lakes without ever being made available to the business.

A 2023 Boston Consulting Group survey notes that 56% of managers say that although investment in modernizing data architectures continues, managing data operating costs remains a major pain point. The consultancy also expects data deluge issues to worsen as the volume of data generated grows at a rate of 21% from 2021 to 2024, to 149 zettabytes globally.

“Data is everywhere,” says Mark Sear, director of AI, data, and integration at Maersk. “Just consider the life of a product and what goes into transporting a computer mouse from China to the United Kingdom. You have to work out how you get it from the factory to the port, the port to the next port, the port to the warehouse, and the warehouse to the consumer. There are vast amounts of data points throughout that journey.”

Sear says organizations that manage to integrate these rich sets of data are poised to reap valuable business benefits. “Every single data point is an opportunity for improvement—to improve profitability, knowledge, our ability to price correctly, our ability to staff correctly, and to satisfy the customer,” he says.

Organizations like Maersk are increasingly turning to a data lakehouse architecture. By combining the cost-effective scale of a data lake with the capability and performance of a data warehouse, a data lakehouse promises to help companies unify disparate supply chain data and provide a larger group of users with access to data, including structured, semi-structured, and unstructured data. Building analytics on top of the lakehouse not only allows this new architectural approach to advance supply chain efficiency with better performance and governance, but it can also support easy and immediate data analysis and help reduce operational costs.
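The unifying idea can be illustrated with a toy sketch. This is not Maersk’s actual stack, and a real lakehouse would use an engine such as Spark or Trino over an open table format like Delta Lake or Iceberg; here an in-memory SQLite database simply stands in for the single SQL surface a lakehouse exposes over data that arrives in different shapes, with the container IDs and readings invented for the example.

```python
# Illustrative only: land structured CSV and semi-structured JSON into
# one queryable store, then answer questions across both with SQL.
import csv
import io
import json
import sqlite3

# "Raw zone" inputs: structured shipment records (CSV) and
# semi-structured sensor events (nested JSON).
shipments_csv = """container_id,origin,destination
C1,Ras Tanura,Tokyo Bay
C2,Shanghai,Felixstowe
"""
sensor_events = [
    {"container_id": "C1", "reading": {"temp_c": 4.2}},
    {"container_id": "C2", "reading": {"temp_c": 21.7}},
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shipments (container_id TEXT, origin TEXT, destination TEXT)")
db.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [tuple(row.values()) for row in csv.DictReader(io.StringIO(shipments_csv))],
)
# Flatten the nested JSON on the way in.
db.execute("CREATE TABLE sensors (container_id TEXT, temp_c REAL)")
db.executemany(
    "INSERT INTO sensors VALUES (?, ?)",
    [(e["container_id"], e["reading"]["temp_c"]) for e in sensor_events],
)

# One query surface over both sources, as a lakehouse aims to provide.
rows = db.execute(
    "SELECT s.container_id, s.destination, t.temp_c "
    "FROM shipments s JOIN sensors t USING (container_id)"
).fetchall()
print(rows)
```

The point of the sketch is the join at the end: once differently shaped sources share one governed query layer, analysts no longer need a specialist for each upstream system.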

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: Google’s new AI agent, and our tech pessimism bias

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google’s Astra is its first AI-for-everything agent

What’s happening: Google is set to launch a new system called Astra later this year. It promises that it will be the most powerful, advanced type of AI assistant it’s ever launched. 

What’s an agent? The current generation of AI assistants, such as ChatGPT, can retrieve information and offer answers, but that is about it. But this year, Google is rebranding its assistants as more advanced “agents,” which it says could show reasoning, planning, and memory skills and are able to take multiple steps to execute tasks. 

The big picture: Tech companies are in the middle of a fierce competition over AI supremacy, and AI agents are the latest effort from Big Tech firms to show they are pushing the frontier of development. Read the full story.

—Melissa Heikkilä

Technology is probably changing us for the worse—or so we always think

Do we use technology, or does it use us? Do our gadgets improve our lives or just make us weak, lazy, and dumb? These are old questions—maybe older than you think. You’re probably familiar with the way alarmed grown-ups through the decades have assailed the mind-rotting potential of search engines, video games, television, and radio—but those are just the recent examples.

Here at MIT Technology Review, writers have grappled with the effects, real or imagined, of tech on the human mind for over a century. But while we’ve always greeted new technologies with a mixture of fascination and fear, something interesting always happens. We get used to it. Read the full story.

—Timothy Maher

MIT Technology Review is celebrating our 125th anniversary with an online series that draws lessons for the future from our past coverage of technology. Check out this piece from the series by David Rotman, our editor at large, about how the fear that AI will take our jobs is nothing new.

Hong Kong is safe from China’s Great Firewall—for now

Last week, the Hong Kong Court of Appeal granted an injunction that permits the city government to go to Western platforms like YouTube and Spotify and demand they remove the protest anthem “Glory to Hong Kong,” because the government claims it has been used for sedition.

Aside from the depressing implications for pro-democracy movements’ decline in Hong Kong, this lawsuit has also been an interesting case study of the local government’s complicated relationship with internet control. Although it’s tightening its grip, it’s still wary of imposing full-blown ‘Great Firewall’ style censorship. Read the full story to find out why.

—Zeyi Yang

This story is from China Report, our weekly newsletter covering tech and power in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Ilya Sutskever is leaving OpenAI  
Where its former chief scientist goes next is anyone’s guess. (NYT $)
+ It’s highly likely Sutskever’s new project will be focused on AGI. (WP $)
+ Read our interview with Sutskever from last October. (MIT Technology Review)

2 The US AI roadmap is here
Senators claim it’s the “broadest and deepest” piece of AI legislation to date. (WP $)
+ What’s next for AI regulation in 2024? (MIT Technology Review)

3 A real estate mogul has made a bid to acquire TikTok
Frank McCourt has thrown his hat into the ring to own the company’s US business. (WSJ $)
+ The depressing truth about TikTok’s impending ban. (MIT Technology Review)

4 Neuralink’s brain implant issues are nothing new
Insiders claim that the firm has known about problems with the implant’s wires for years. (Reuters)

5 Wannabe mothers are finding sperm donors on Facebook 
The industry’s sky-high fees are driving women to the social network. (NY Mag $)
+ I took an international trip with my frozen eggs to learn about the fertility industry. (MIT Technology Review)

6 We’re getting a better idea of how long you can expect to lose weight on Wegovy
But we still don’t know how long people have to keep taking the drug to maintain it. (Ars Technica)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL? (MIT Technology Review)

7 What do DNA tests for the masses really achieve? 🧬
Most customers don’t really need to know if they’re genetically predisposed to hate cilantro or not. (Bloomberg $)

8 How to save rainforests from wildfires
Even lush green spaces aren’t safe from flames. (Hakai Magazine)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

9 Memestocks are mounting a major comeback
It’s like 2021 all over again. (Vox)

10 Mark Zuckerberg’s just turned 40
It looks like his new rapper look is here to stay. (Insider $)

Quote of the day

“His brilliance and vision are well known; his warmth and compassion are less well known but no less important.”

—Sam Altman, OpenAI’s CEO, offers a measured response to the news that Ilya Sutskever is leaving the company in a post on X.

The big story

How to measure all the world’s fresh water

December 2021

The Congo River is the world’s second-largest river system after the Amazon. More than 75 million people depend on it for food and water, as do thousands of species of plants and animals. The massive tropical rainforest sprawled across its middle helps regulate the entire Earth’s climate system, but the amount of water in it is something of a mystery.

Scientists rely on monitoring stations to track the river, but what was once a network of some 400 stations has dwindled to just 15. Measuring water is key to helping people prepare for natural disasters and adapt to climate change—so researchers are increasingly filling data gaps using information gathered from space. Read the full story.

—Maria Gallucci

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The Cookie Monster had no right to go this hard!
+ It’s time to make product design great again. But how, exactly?
+ The universe is humming all the time, but no one really knows why.
+ Who here remembers the original Teenage Mutant Ninja Turtles on NES?

Hong Kong is safe from China’s Great Firewall—for now

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

We finally know the result of a legal case I’ve been tracking in Hong Kong for almost a year. Last week, the Hong Kong Court of Appeal granted an injunction that permits the city government to go to Western platforms like YouTube and Spotify and demand they remove the protest anthem “Glory to Hong Kong,” because the government claims it has been used for sedition.

To read more about how this injunction is specifically designed for Western Big Tech platforms, and the impact it’s likely to have on internet freedom, you can read my story here.

Aside from the depressing implications for pro-democracy movements’ decline in Hong Kong, this lawsuit has also been an interesting case study of the local government’s complicated relationship with internet control and censorship.

I was following this case because it’s a perfect example of how censorship can be built brick by brick. Having reported on China for so long, I sometimes take for granted how powerful and all-encompassing its censorship regime is and need to be reminded that the same can’t be said for most other places in the world.

Hong Kong had a free internet in the past. And unlike mainland China, it remains relatively open: almost all Western platforms and services are still available there, and only a few websites have been censored in recent years. 

Since Hong Kong was returned to China from the UK in 1997, the Chinese central government has clashed several times with local pro-democracy movements asking for universal elections and less influence from Beijing. As a result, it started cementing tighter and tighter control over Hong Kong, and people have been worrying about whether its Great Firewall will eventually extend there. But actually, neither Beijing nor Hong Kong may want to see that happen. All the recent legal maneuverings are only necessary because the government doesn’t want a full-on ban of Western platforms.

When I visited Hong Kong last November, it was pretty clear that both Beijing and Hong Kong want to take advantage of the free flow of finance and business through the city. That’s why the Hong Kong government was given tacit permission in 2023 to explore government cryptocurrency projects, even though crypto trading and mining are illegal in China. Hong Kong officials have boasted on many occasions about the city’s value proposition: connecting untapped demand in the mainland to the wider crypto world by attracting mainland investors and crypto companies to set up shop in Hong Kong. 

But that wouldn’t be possible if Hong Kong closed off its internet. Imagine a “global” crypto industry that couldn’t access Twitter or Discord. Crypto is only one example, but the things that have made Hong Kong successful—the nonstop exchange of cargo, capital, ideas, and people—would cease to function if basic and universal tools like Google or Facebook became unavailable.

That’s why the offensives against internet freedom in Hong Kong are so calculated. The government is seeking control but also leaving some breathing space; it’s as much about looking tough in public as quietly negotiating with platforms; it’s about demonstrating resolve to Beijing without showing too much aggression toward the West. 

For example, the experts I’ve talked to don’t expect the government to request that YouTube remove the videos for everyone globally. More likely, they may ask for the content to be geo-blocked just for users in Hong Kong.

“As long as Hong Kong is still useful as a financial hub, I don’t think they would establish the Great Firewall [there],” says Chung Ching Kwong, a senior analyst at the Inter-Parliamentary Alliance on China, an advocacy organization that connects legislators from over 30 countries working on relations with China. 

It’s also the reason why the Hong Kong government has recently come out to say that it won’t outright ban platforms like Telegram and Signal, even though it said that it had received comments from the public asking it to do so.

But coming back to the court decision to restrict “Glory to Hong Kong”: even if the government never escalates from the targeted injunction it has now to a full-blown ban of the song, the case may still do significant harm to internet freedom.

We are still watching the responses roll in after the court decision last Wednesday. The Hong Kong government is anxiously waiting to hear how Google will react. Meanwhile, some videos have already been taken down, though it’s unclear whether they were pulled by the creators or by the platform. 

Michael Mo, a former district councilor in Hong Kong who’s now a postgraduate researcher at the University of Leeds in the UK, created a website right after the injunction was first initiated last June to embed all but one of the YouTube videos the government sought to ban. 

The domain name, “gloryto.hk,” was the first test: would the Hong Kong domain registry take issue with it? So far, nothing has happened. The second test was seeing how soon the videos would be taken down on YouTube, which is now easy to tell from the number of “video unavailable” gaps on the page. “Those videos were pretty much intact until the Court of Appeal overturned the rulings of the High Court. The first two have gone,” Mo says. 

The court case is having a chilling effect. Even entities that are not governed by the Hong Kong court are taking precautions. Some YouTube accounts owned by media based in Taiwan and the US proactively enabled geo-blocking to restrict people in Hong Kong from watching clips of the song they uploaded as soon as the injunction application was filed, Mo says. 

Are you optimistic or pessimistic about the future of internet freedom in Hong Kong? Let me know what you think at zeyi@technologyreview.com.

Now read the rest of China Report

Catch up with China

1. The Biden administration plans to raise tariffs on Chinese-made EVs, from 25% to 100%. Since few Chinese cars are currently sold in the US, this is mostly a move to deter future imports of Chinese EVs. But it could slow down the decarbonization timeline in the US.  (ABC News)

2. Government officials from the US and China met in Geneva today to discuss how to mitigate the risks of AI. It’s a notable event, given how rare it is for the two sides to find common ground in the highly politicized field of technology. (Reuters $)

3. It will be more expensive soon to ride the bullet trains in China. A 20% to 39% fare increase is causing controversy among Chinese people. (New York Times $)

4. From executive leadership to workplace culture, TikTok has more in common with its Chinese sister app Douyin than the company wants to admit. (Rest of World)

5. China’s most indebted local governments have started claiming troves of data as “intangible assets” on their accounting books. Given the insatiable appetite for AI training data, they may have a point. (South China Morning Post $)

6. A crypto company with Chinese roots purchased a piece of land in Wyoming for crypto mining. Now the Biden administration is blocking the deal for national security reasons. (Associated Press)

Lost in translation

Recently, following an order made by the government, hotels in many major Chinese cities stopped asking guests to submit to facial recognition during check-in. 

According to the Chinese publication TechSina, this has had a devastating impact on the industry of facial recognition hardware. 

As hotels around the country retire their facial recognition kiosks en masse, equipment made by major tech companies has flooded online secondhand markets at steep discounts. What was sold for thousands of dollars is now resold for as little as 1% of the original price. Alipay, the Alibaba-affiliated payment app, once invested hundreds of millions of dollars to research and roll out these kiosks. Now it’s one of the companies being hit the hardest by the policy change.

One more thing

I had to double-check that this is not a joke. It turns out that for the past 10 years, the Louvre museum has been giving visitors a Nintendo 3DS—a popular handheld gaming console—as an audio and visual guide. 

It feels weird seeing people holding a 3DS up to the Mona Lisa as if they were in their own private Pokémon Go–style gaming world rather than just enjoying the museum. But apparently it doesn’t work very well anyway. Oops.

and it was THE WORST at navigating bc a 3ds can’t tell which direction you’re facing + the floorplan isn’t updated to match ongoing renovations. kept tryna send me into a wall 😔 i almost chucked the thing i stg

— taylor (@taylorhansss) May 12, 2024

Technology is probably changing us for the worse—or so we always think

MIT Technology Review is celebrating our 125th anniversary with an online series that draws lessons for the future from our past coverage of technology. 

Do we use technology, or does it use us? Do our gadgets improve our lives or just make us weak, lazy, and dumb? These are old questions—maybe older than you think. You’re probably familiar with the way alarmed grown-ups through the decades have assailed the mind-rotting potential of search engines, video games, television, and radio—but those are just the recent examples.

Early in the last century, pundits argued that the telephone severed the need for personal contact and would lead to social isolation. In the 19th century some warned that the bicycle would rob women of their femininity and result in a haggard look known as “bicycle face.” Mary Shelley’s 1818 novel Frankenstein was a warning against using technology to play God, and a meditation on how it might blur the lines between what’s human and what isn’t.

Or to go back even further: in Plato’s Phaedrus, from around 370 BCE, Socrates suggests that writing could be a detriment to human memory—the argument being, if you’ve written it down, you no longer needed to remember it.

We’ve always greeted new technologies with a mixture of fascination and fear, says Margaret O’Mara, a historian at the University of Washington who focuses on the intersection of technology and American politics. “People think: ‘Wow, this is going to change everything affirmatively, positively,’” she says. “And at the same time: ‘It’s scary—this is going to corrupt us or change us in some negative way.’”

And then something interesting happens: “We get used to it,” she says. “The novelty wears off and the new thing becomes a habit.” 

A curious fact

Here at MIT Technology Review, writers have grappled with the effects, real or imagined, of tech on the human mind for nearly a hundred years. In our March 1931 issue, in his essay “Machine-Made Minds,” author John Bakeless wrote that it was time to ask “how far the machine’s control over us is a danger calling for vigorous resistance; and how far it is a good thing, to which we may willingly yield.” 

The advances that alarmed him might seem, to us, laughably low-tech: radio transmitters, antennas, or even rotary printing presses.

But Bakeless, who’d published books on Lewis and Clark and other early American explorers, wanted to know not just what the machine age was doing to society but what it was doing to individual people. “It is a curious fact,” he wrote, “that the writers who have dealt with the social, economic, and political effects of the machine have neglected the most important effect of all—its profound influence on the human mind.”

In particular, he was worried about how technology was being used by the media to control what people thought and talked about. 

“Consider the mental equipment of the average modern man,” he wrote. “Most of the raw material of his thought enters his mind by way of a machine of some kind … the Twentieth Century journalist can collect, print, and distribute his news with a speed and completeness wholly due to a score or more of intricate machines … For the first time, thanks to machinery, such a thing as a world-wide public opinion is becoming possible.”

Bakeless didn’t see this as an especially positive development. “Machines are so expensive that the machine-made press is necessarily controlled by a few very wealthy men, who with the very best intentions in the world are still subject to human limitation and the prejudices of their kind … Today the man or the government that controls two machines—wireless and cable—can control the ideas and passions of a continent.”

Keep away

Fifty years later, the debate had shifted more in the direction of silicon chips. In our October 1980 issue, engineering professor Thomas B. Sheridan, in “Computer Control and Human Alienation,” asked: “How can we ensure that the future computerized society will offer humanity and dignity?” A few years later, in our August/September 1987 issue, writer David Lyon felt he had the answer—we couldn’t, and wouldn’t. In “Hey You! Make Way for My Technology,” he wrote that gadgets like the telephone answering machine and the boom box merely kept other pesky humans at a safe distance: “As machines multiply our capacity to perform useful tasks, they boost our aptitude for thoughtless and self-centered action. Civilized behavior is predicated on the principle of one human being interacting with another, not a human being interacting with a mechanical or electronic extension of another person.”

By this century the subject had been taken up by a pair of celebrities, novelist Jonathan Franzen and Talking Heads lead vocalist David Byrne. In our September/October 2008 issue, Franzen suggested that cell phones had turned us into performance artists. 

In “I Just Called to Say I Love You,” he wrote: “When I’m buying those socks at the Gap and the mom in line behind me shouts ‘I love you!’ into her little phone, I am powerless not to feel that something is being performed; overperformed; publicly performed; defiantly inflicted. Yes, a lot of domestic things get shouted in public which really aren’t intended for public consumption; yes, people get carried away. But the phrase ‘I love you’ is too important and loaded, and its use as a sign-off too self-conscious, for me to believe I’m being made to hear it accidentally.”

In “Eliminating the Human,” from our September/October 2017 issue, Byrne observed that advances in the digital economy served largely to free us from dealing with other people. You could now “keep in touch” with friends without ever seeing them; buy books without interacting with a store clerk; take an online course without ever meeting the teacher or having any awareness of the other students.

“For us as a society, less contact and interaction—real interaction—would seem to lead to less tolerance and understanding of difference, as well as more envy and antagonism,” Byrne wrote. “As has been in evidence recently, social media actually increases divisions by amplifying echo effects and allowing us to live in cognitive bubbles … When interaction becomes a strange and unfamiliar thing, then we will have changed who and what we are as a species.”

Modern woes

It hasn’t stopped. Just last year our own Will Douglas Heaven’s feature on ChatGPT debunked the idea that the AI revolution will destroy children’s ability to develop critical-thinking skills.

As O’Mara puts it: “Do all of the fears of these moral panics come to pass? No. Does change come to pass? Yes.” The way we come to grips with new technologies hasn’t fundamentally changed, she says, but what has changed is—there’s more of it to deal with. “It’s more of the same,” she says. “But it’s more. Digital technologies have allowed things to scale up into a runaway train of sorts that the 19th century never had to contend with.”

Maybe the problem isn’t technology at all, maybe it’s us. Based on what you might read in 19th-century novels, people haven’t changed much since the early days of the industrial age. In any Dostoyevsky novel you can find people who yearn to be seen as different or special, who take affront at any threat to their carefully curated public persona, who feel depressed and misunderstood and isolated, who are susceptible to mob mentality.

“The biology of the human brain hasn’t changed in the last 250 years,” O’Mara says. “Same neurons, still the same arrangement. But it’s been presented with all these new inputs … I feel like I live with information overload all the time. I think we all observe it in our own lives, how our attention spans just go sideways. But that doesn’t mean my brain has changed at all. We’re just getting used to consuming information in a different way.”

And if you find technology to be intrusive and unavoidable now, it might be useful to note that Bakeless felt no differently in 1931. Even then, long before anyone had heard of smartphones or the internet, he felt that technology had become so intrinsic to daily life that it was like a tyrant: “Even as a despot, the machine is benevolent; and it is after all our stupidity that permits inanimate iron to be a despot at all.”

If we are to ever create the ideal human society, he concluded—one with sufficient time for music, art, philosophy, scientific inquiry (“the gorgeous playthings of the mind,” as he put it)—it was unlikely we’d get it done without the aid of machines. It was too late, we’d already grown too accustomed to the new toys. We just needed to find a way to make sure that the machines served us instead of the other way around. “If we are to build a great civilization in America, if we are to win leisure for cultivating the choice things of mind and spirit, we must put the machine in its place,” he wrote.

Okay, but—how, exactly? Ninety-three years later and we’re still trying to figure that part out.

Google’s Astra is its first AI-for-everything agent

Google is set to introduce a new system called Astra later this year and promises that it will be the most powerful, advanced type of AI assistant it’s ever launched. 

The current generation of AI assistants, such as ChatGPT, can retrieve information and offer answers, but that is about it. But this year, Google is rebranding its assistants as more advanced “agents,” which it says could show reasoning, planning, and memory skills and are able to take multiple steps to execute tasks. 

People will be able to use Astra through their smartphones and possibly desktop computers, but the company is exploring other options too, such as embedding it into smart glasses or other devices, Oriol Vinyals, vice president of research at Google DeepMind, told MIT Technology Review.

“We are in very early days [of AI agent development],” Google CEO Sundar Pichai said on a call ahead of Google’s I/O conference today. 

“We’ve always wanted to build a universal agent that will be useful in everyday life,” said Demis Hassabis, the CEO and cofounder of Google DeepMind. “Imagine agents that can see and hear what we do, better understand the context we’re in, and respond quickly in conversation, making the pace and quality of interaction feel much more natural.” That, he says, is what Astra will be. 

Google’s announcement comes a day after competitor OpenAI unveiled its own supercharged AI assistant, GPT-4o. Google DeepMind’s Astra responds to audio and video inputs, much in the same way as GPT-4o (albeit less flirtatiously). 

In a press demo, a user pointed a smartphone camera and smart glasses at things and asked Astra to explain what they were. When the person pointed the device out the window and asked “What neighborhood do you think I’m in?” the AI system was able to identify King’s Cross, London, site of Google DeepMind’s headquarters. It was also able to say that the person’s glasses were on a desk, having recorded them earlier in the interaction. 

The demo showcases Google DeepMind’s vision of multimodal AI (which can handle multiple types of input—voice, video, text, and so on) working in real time, Vinyals says. 

“We are very excited about, in the future, to be able to really just get closer to the user, assist the user with anything that they want,” he says. Google recently upgraded its artificial-intelligence model Gemini to process even larger amounts of data, an upgrade which helps it handle bigger documents and videos, and have longer conversations. 

Tech companies are in the middle of a fierce competition over AI supremacy, and AI agents are the latest effort from Big Tech firms to show they are pushing the frontier of development. Agents also play into a narrative pushed by many tech companies, including OpenAI and Google DeepMind, which aim to build artificial general intelligence, a highly hypothetical idea of superintelligent AI systems. 

“Eventually, you’ll have this one agent that really knows you well, can do lots of things for you, and can work across multiple tasks and domains,” says Chirag Shah, a professor at the University of Washington who specializes in online search.

This vision is still aspirational. But today’s announcement should be seen as Google’s attempt to keep up with competitors. And by rushing these products out, Google can collect even more data from its over a billion users on how they are using their models and what works, Shah says.

Google is unveiling many more new AI capabilities beyond agents today. It’s going to integrate AI more deeply into Search through a new feature called AI Overviews, which gathers information from the internet and packages it into short summaries in response to search queries. The feature, which launches today, will initially be available only in the US, with more countries to gain access later. 

This will help speed up the search process and get users more specific answers to more complex, niche questions, says Felix Simon, a research fellow in AI and digital news at the Reuters Institute for the Study of Journalism. “I think that’s where Search has always struggled,” he says. 

Another new feature of Google’s AI Search offering is better planning. People will soon be able to ask Search to make meal and travel suggestions, for example, much like asking a travel agent to suggest restaurants and hotels. Gemini will be able to help them plan what they need to do or buy to cook recipes, and they will also be able to have conversations with the AI system, asking it to do anything from relatively mundane tasks, such as informing them about the weather forecast, to highly complex ones like helping them prepare for a job interview or an important speech. 

People will also be able to interrupt Gemini midsentence and ask clarifying questions, much as in a real conversation. 

In another move to one-up competitor OpenAI, Google also unveiled Veo, a new video-generating AI system. Veo is able to generate short videos and allows users more control over cinematic styles by understanding prompts like “time lapse” or “aerial shots of a landscape.”

Google has a significant advantage when it comes to training generative video models, because it owns YouTube. It’s already announced collaborations with artists such as Donald Glover and Wyclef Jean, who are using its technology to produce their work. 

Earlier this year, OpenAI’s CTO, Mira Murati, fumbled when asked whether the company’s model was trained on YouTube data. Douglas Eck, senior research director at Google DeepMind, was also vague about the training data used to create Veo when asked about it by MIT Technology Review, but he said that it “may be trained on some YouTube content in accordance with our agreements with YouTube creators.”

On one hand, Google is presenting its generative AI as a tool artists can use to make stuff, but the tools likely get their ability to create that stuff by using material from existing artists, says Shah. AI companies such as Google and OpenAI have faced a slew of lawsuits by writers and artists claiming that their intellectual property has been used without consent or compensation.  

“For artists it’s a double-edged sword,” says Shah. 

The Download: OpenAI’s GPT-4o, and what’s coming at Google I/O

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI’s new GPT-4o lets people interact using voice or video in the same model

The news: OpenAI just debuted GPT-4o, a new kind of AI model that you can communicate with in real time via live voice conversation, video streams from your phone, and text. The model is rolling out over the next few weeks and will be free for all users through both the ChatGPT app and the web interface, according to the company.

How does it differ from GPT-4? GPT-4 also gives users multiple ways to interact with OpenAI’s AI offerings. But it siloed them in separate models, leading to longer response times and presumably higher computing costs. GPT-4o has now merged those capabilities into a single model to deliver faster responses and smoother transitions between tasks.

The big picture: The result, the company’s demonstration suggests, is a conversational assistant much in the vein of Siri or Alexa, but capable of fielding much more complex prompts. Read the full story.

—James O’Donnell

What to expect at Google I/O

Google is holding its I/O conference today, May 14, and we expect it to announce a whole slew of new AI features, further embedding AI into everything it does.

There has been a lot of speculation that it will upgrade its crown jewel, Search, with generative AI features that could, for example, go behind a paywall. Google, despite having 90% of the online search market, is in a defensive position this year. It’s racing to catch up with its rivals Microsoft and OpenAI, while upstarts such as Perplexity AI have launched their own versions of AI-powered search to rave reviews.

While the company is tight-lipped about its announcements, we can make educated guesses. Read the full story.

—Melissa Heikkilä 

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

Get ready for EmTech Digital 

If you want to learn more about how Google plans to develop and deploy AI, come and hear from its vice president of AI, Jay Yagnik, at our flagship AI conference, EmTech Digital. We’ll hear from OpenAI about its video generation model Sora too, and Nick Clegg, Meta’s president of global affairs, will also join MIT Technology Review’s executive editor Amy Nordrum for an exclusive interview on stage. 

It’ll be held at the MIT campus and streamed live online next week on May 22-23. Readers of The Download get 30% off tickets with the code DOWNLOADD24—register here for more information. See you there!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US senators are preparing to unveil their ‘AI roadmap’ 
The guidelines, which aren’t legislation, will cost billions of dollars to implement. (WP $)
+ What’s next for AI regulation. (MIT Technology Review)

2 It’s going to get much more expensive to import tech from China
The Biden administration has hiked tariffs on batteries, EVs, and semiconductors. (FT $)
+ Three takeaways about the state of Chinese tech in the US. (MIT Technology Review)

3 The NYC mayor wants to equip the subway with gun-detection tech 
Even though the firm maintains its detectors aren’t designed for that environment. (Wired $)
+ The maker’s relationship with Disney appears to have been a key factor in the decision. (The Verge)
+ Can AI keep guns out of schools? (MIT Technology Review)

4 A Chinese crypto miner has been forced to abandon its facility in Wyoming
The US said it was too close to an Air Force base and a data center doing work for the Pentagon. (Bloomberg $)
+ Microsoft first flagged the mine to authorities last year. (NYT $)
+ How Bitcoin mining devastated this New York town. (MIT Technology Review)

5 App Stores are big business
And governments want to rein them in. (Economist $)

6 How social media ads attract networks of predators
Audience tools highlight how platforms’ algorithms direct predators to pictures of children. (NYT $)

7 Enterprising Amazon workers are using bots to nab time off slots
Employees are using automated scripts to gain an edge over their colleagues. (404 Media)

8 Dating app Bumble is ditching its ads criticizing celibacy
Critics say the billboards undermined daters’ freedom of choice. (WSJ $)
+ The platform is in a state of flux right now. (NY Mag $)

9 Buying digital movies is a risky business
What happens if the platform you bought them on shuts down? (The Guardian)

10 The New York-Dublin video portal has been temporarily shut down
Who could have predicted that people would behave inappropriately? (BBC)
+ There have been some heartwarming interactions too, though. (The Guardian)

Quote of the day

“Rewatched Her last weekend and it felt a lot like rewatching Contagion in Feb 2020.”

—Noam Brown, an OpenAI researcher, reflects on X about the vast changes the company’s new companion AI model GPT-4o could usher in.

The big story

I took an international trip with my frozen eggs to learn about the fertility industry

September 2022

—Anna Louie Sussman

Like me, my eggs were flying economy class. They were ensconced in a cryogenic storage flask packed into a metal suitcase next to Paolo, the courier overseeing their passage from a fertility clinic in Bologna, Italy, to the clinic in Madrid, Spain, where I would be undergoing in vitro fertilization.

The shipping of gametes and embryos around the world is a growing part of a booming global fertility sector. As people have children later in life, the need for fertility treatment increases each year.

After paying for storage costs for years, at 40 I was ready to try to get pregnant. And transporting the Bolognese batch served to literally put all my eggs in one basket. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Bayley the sheepadoodle really does look just like Snoopy.
+ The secret to better sleep? Setting a consistent wake-up time (and sticking to it.)
+ Going Nemo-spotting in the Great Barrier Reef sounds pretty amazing.
+ Here’s exactly what the benefits of eating colorful fruit and veg are, broken down by color.

What to expect at Google I/O

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In the world of AI, a lot can happen in a year. Last year, at the beginning of Big Tech’s AI wars, Google announced during its annual I/O conference that it was throwing generative AI at everything, integrating it into its suite of products from Docs to email to e-commerce listings and its chatbot Bard. It was an effort to catch up with competitors like Microsoft and OpenAI, which had unveiled snazzy products like coding assistants and ChatGPT, the product that has done more than any other to ignite the current excitement about AI.

Since then, its ChatGPT competitor chatbot Bard (which, you may recall, temporarily wiped $100 billion off Google’s share price when it made a factual error during the demo) has been replaced by the more advanced Gemini. But, for me, the AI revolution hasn’t felt like one. Instead, it’s been a slow slide toward marginal efficiency gains. I see more autocomplete functions in my email and word processing applications, and Google Docs now offers more ready-made templates. They are not groundbreaking features, but they are also reassuringly inoffensive. 

Google is holding its I/O conference tomorrow, May 14, and we expect it to announce a whole new slew of AI features, further embedding the technology into everything it does. The company is tight-lipped about its announcements, but we can make educated guesses. There has been a lot of speculation that it will upgrade its crown jewel, Search, with generative AI features that could, for example, go behind a paywall. Perhaps we will see Google’s version of AI agents, a buzzy term that basically means more capable and useful smart assistants able to do more complex tasks, such as booking flights and hotels much as a travel agent would. 

Google, despite having 90% of the online search market, is in a defensive position this year. Upstarts such as Perplexity AI have launched their own versions of AI-powered search to rave reviews, Microsoft’s AI-powered Bing has managed to increase its market share slightly, and OpenAI is working on its own AI-powered online search function and is also reportedly in conversation with Apple to integrate ChatGPT into iPhones.

There are some hints about what any new AI-powered search features might look like. Felix Simon, a research fellow at the Reuters Institute for the Study of Journalism, has been part of the Google Search Generative Experience trial, which is the company’s way of testing new products on a small selection of real users. 

Last month, Simon noticed that the links and short snippets from online sources that normally make up his Google search results had been replaced by more detailed, neatly packaged AI-generated summaries. He was able to get these results from queries related to nature and health, such as “Do snakes have ears?” Most of the information offered to him was correct, which was a surprise, as AI language models have a tendency to “hallucinate” (meaning they make stuff up) and have been criticized for being an unreliable source of information. 

To Simon’s surprise, he enjoyed the new feature. “It’s convenient to ask [the AI] to get something presented just for you,” he says. 

Simon then started using the new AI-powered Google function to search for news items rather than scientific information.

For most of these queries, such as what happened in the UK or Ukraine yesterday, he was simply offered links to news sources such as the BBC and Al Jazeera. But he did manage to get the search engine to generate an overview of recent news items from Germany, in the form of a bullet-pointed list of news headlines from the day before. The first entry was about an attack on Franziska Giffey, a Berlin politician who was assaulted in a library. The AI summary had the date of the attack wrong. But it was so close to the truth that Simon didn’t think twice about its accuracy. 

A quick online search during our call revealed that the rest of the AI-generated news summaries were also littered with inaccuracies. Details were wrong, or the events referred to happened years ago. All the stories were also about terrorism, hate crimes, or violence, with one soccer result thrown in. Omitting headlines on politics, culture, and the economy seems like a weird choice.  

People have a tendency to believe computers to be correct even when they are not, and Simon’s experience is an example of the kinds of problems that might arise when AI models hallucinate. The ease of getting results means that people might unknowingly ingest fake news or wrong information. It’s very problematic if even people like Simon, who are trained to fact-check things and know how AI models work, don’t do their due diligence and assume information is correct. 

Whatever Google announces at I/O tomorrow, there is immense pressure for it to be something that would justify its massive investment into AI. And after a year of experimenting, there also need to be serious improvements in making its generative AI tools more accurate and reliable. 

There are some people in the computer science community who say that hallucinations are an intrinsic part of generative AI that can’t ever be fixed, and that we can never fully trust these systems. But hallucinations will make AI-powered products less appealing to users. And it’s highly unlikely that Google will announce it has fixed this problem at I/O tomorrow. 

If you want to learn more about how Google plans to develop and deploy AI, come and hear from its vice president of AI, Jay Yagnik, at our flagship AI conference, EmTech Digital. It’ll be held at the MIT campus and streamed live online next week on May 22-23.  I’ll be there, along with AI leaders from companies like OpenAI, AWS, and Nvidia, talking about where AI is going next. Nick Clegg, Meta’s president of global affairs, will also join MIT Technology Review’s executive editor Amy Nordrum for an exclusive interview on stage. See you there! 

Readers of The Algorithm get 30% off tickets with the code ALGORITHMD24.


Now read the rest of The Algorithm

Deeper Learning

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother. He opens up about work, the pressures he faces as a middle-aged man, and thoughts that he doesn’t even discuss with his wife. His mother will occasionally make a comment, but mostly, she just listens. That’s because Sun’s mother died five years ago. And the person he’s talking to isn’t actually a person, but a digital replica he made of her—a moving image that can conduct basic conversations. 

AI resurrection: There are plenty of people like Sun who want to use AI to interact with lost loved ones. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies. In some ways, the avatars are the latest manifestation of a cultural tradition: Chinese people have always taken solace from confiding in the dead. Read more from Zeyi Yang.

Bits and Bytes

Google DeepMind’s new AlphaFold can model a much larger slice of biological life
Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life. It’s an exciting development that could help accelerate drug discovery and other scientific research. (MIT Technology Review)

The way whales communicate is closer to human language than we realized
Researchers used statistical models to analyze whale “codas” and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. It’s a small step forward, but it could help unlock a greater understanding of how whales communicate. (MIT Technology Review)

Tech workers should shine a light on the industry’s secretive work with the military
Despite what happens in Google’s executive suites, workers themselves can force change. William Fitzgerald, who leaked information about Google’s controversial Project Maven, has shared how he thinks they can do this. (MIT Technology Review)

AI systems are getting better at tricking us
A wave of AI systems have “deceived” humans in ways they haven’t been explicitly trained to do, by offering up false explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. This issue highlights how difficult artificial intelligence is to control and the unpredictable ways in which these systems work. (MIT Technology Review)

Why America needs an Apollo program for the age of AI
AI is crucial to the future security and prosperity of the US. We need to lay the groundwork now by investing in computational power, argues Eric Schmidt. (MIT Technology Review)

Fooled by AI? These firms sell deepfake detection that’s “REAL 100%”
The AI detection business is booming. There is one catch, however. Detecting AI-generated content is notoriously unreliable, and the tech is still in its infancy. That hasn’t stopped some startup founders (many of whom have no experience or background in AI) from trying to sell services they claim can do so. (The Washington Post)

The tech-bro turf war over AI’s most hardcore hacker house
A hilarious piece taking an anthropological look at the power struggle between two competing hacker houses in Silicon Valley. The fight is over which house can call itself “AGI House.” (Forbes)

OpenAI’s new GPT-4o lets people interact using voice or video in the same model

OpenAI just debuted GPT-4o, a new kind of AI model that you can communicate with in real time via live voice conversation, video streams from your phone, and text. The model is rolling out over the next few weeks and will be free for all users through both the ChatGPT app and the web interface, according to the company. Users who subscribe to OpenAI’s paid tiers, which start at $20 per month, will be able to make more requests. 

OpenAI CTO Mira Murati led the live demonstration of the new release one day before Google is expected to unveil its own AI advancements at its flagship I/O conference on Tuesday, May 14. 

GPT-4 offered similar capabilities, giving users multiple ways to interact with OpenAI’s AI offerings. But it siloed them in separate models, leading to longer response times and presumably higher computing costs. GPT-4o has now merged those capabilities into a single model, which Murati called an “omnimodel.” That means faster responses and smoother transitions between tasks, she said.

The result, the company’s demonstration suggests, is a conversational assistant much in the vein of Siri or Alexa but capable of fielding much more complex prompts.

“We’re looking at the future of interaction between ourselves and the machines,” Murati said of the demo. “We think that GPT-4o is really shifting that paradigm into the future of collaboration, where this interaction becomes much more natural.”

Barret Zoph and Mark Chen, both researchers at OpenAI, walked through a number of applications for the new model. Most impressive was its facility with live conversation. You could interrupt the model during its responses, and it would stop, listen, and adjust course. 

OpenAI showed off the ability to change the model’s tone, too. Chen asked the model to read a bedtime story “about robots and love,” quickly jumping in to demand a more dramatic voice. The model got progressively more theatrical until Murati demanded that it pivot quickly to a convincing robot voice (which it excelled at). While there were predictably some short pauses during the conversation while the model reasoned through what to say next, it stood out as a remarkably naturally paced AI conversation. 

The model can reason through visual problems in real time as well. Using his phone, Zoph filmed himself writing an algebra equation (3x + 1 = 4) on a sheet of paper, having GPT-4o follow along. He instructed it not to provide answers, but instead to guide him much as a teacher would.

“The first step is to get all the terms with x on one side,” the model said in a friendly tone. “So, what do you think we should do with that plus one?”
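
The guided approach in the demo, nudging the student one move at a time rather than handing over the answer, can be illustrated with a toy Python sketch. This is purely illustrative and is not OpenAI’s code; `tutor_steps` is a hypothetical helper for the simple linear case.

```python
# Toy sketch of a step-by-step tutoring flow for a linear equation
# a*x + b = c: produce the tutor-style hints, then the solution.
from fractions import Fraction

def tutor_steps(a, b, c):
    """Return the guided steps and the solution for a*x + b = c."""
    steps = [f"Start: {a}x + {b} = {c}"]
    # First move: get all the terms with x on one side.
    steps.append(f"Subtract {b} from both sides: {a}x = {c - b}")
    # Second move: isolate x.
    x = Fraction(c - b, a)
    steps.append(f"Divide both sides by {a}: x = {x}")
    return steps, x

steps, answer = tutor_steps(3, 1, 4)  # the equation from the demo
for step in steps:
    print(step)
# answer is 1 for 3x + 1 = 4
```

A real tutoring model would, of course, generate these hints conversationally and wait for the student between steps; the sketch only shows the underlying sequence of moves.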

Like previous generations of GPT, GPT-4o will store records of users’ interactions with it, meaning the model “has a sense of continuity across all your conversations,” according to Murati. Other new highlights include live translation, the ability to search through your conversations with the model, and the power to look up information in real time. 

As is the nature of a live demo, there were hiccups and glitches. GPT-4o’s voice sometimes jumped in awkwardly during the conversation, and it appeared to comment on one of the presenters’ outfits even though it wasn’t asked to. But it recovered well when the demonstrators told the model it had erred. It seems to be able to respond quickly and helpfully across several mediums that other models have not yet merged as effectively. 

Previously, many of OpenAI’s most powerful features, like reasoning through image and video, were behind a paywall. GPT-4o marks the first time they’ll be opened up to the wider public, though it’s not yet clear how many interactions you’ll be able to have with the model before being charged. OpenAI says paying subscribers will “continue to have up to five times the capacity limits of our free users.” 

Additional reporting by Will Douglas Heaven.

Correction: This story has been updated to reflect that the Memory feature, which stores past conversations, is not new to GPT-4o but has existed in previous models.

The Download: the future of chips, and investing in US AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next in chips

Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a huge tidal shift. There is heightened demand for chips that can train AI models faster and ping them from devices like smartphones and satellites, enabling us to use these models without disclosing private data. Governments, tech giants, and startups alike are racing to carve out their slices of the growing semiconductor pie. 

James O’Donnell, our AI reporter, has dug into the four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they’ll unlock. Read on to see what he found out.

Eric Schmidt: Why America needs an Apollo program for the age of AI

—Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of the philanthropic initiative Schmidt Futures.

The global race for computational power is well underway, fueled by a worldwide boom in artificial intelligence. OpenAI’s Sam Altman is seeking to raise as much as $7 trillion for a chipmaking venture. Tech giants like Microsoft and Amazon are building AI chips of their own. 

The need for more computing horsepower to train and use AI models—fueling a quest for everything from cutting-edge chips to giant data sets—isn’t just a current source of geopolitical leverage (as with US curbs on chip exports to China). It is also shaping the way nations will grow and compete in the future, with governments from India to the UK developing national strategies and stockpiling Nvidia graphics processing units. 

I believe it’s high time for America to have its own national compute strategy: an Apollo program for the age of AI. Read the full story.

AI systems are getting better at tricking us

The news: A wave of AI systems have “deceived” humans in ways they haven’t been explicitly trained to do, by offering up untrue explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. 

Why it matters: Talk of deceiving humans might suggest that these models have intent. They don’t. But AI models will mindlessly find workarounds to obstacles to achieve the goals that have been given to them. Sometimes these workarounds will go against users’ expectations and feel deceitful. Above all, this issue highlights how difficult artificial intelligence is to control, and the unpredictable ways in which these systems work.  Read the full story.

—Rhiannon Williams

Why thermal batteries are so hot right now

A whopping 20% of global energy consumption goes to generate heat in industrial processes, most of it using fossil fuels. This often-overlooked climate problem may have a surprising solution in systems called thermal batteries, which can store energy as heat using common materials like bricks, blocks, and sand.

We are holding an exclusive subscribers-only online discussion digging into what thermal batteries are, how they could help cut emissions, and what we can expect next with climate reporter Casey Crownhart and executive editor Amy Nordrum.

We’ll be going live at midday ET on Thursday, May 16. Register here to join us!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 These companies will happily sell you deepfake detection services
The problem is, their capabilities are largely untested. (WP $)
+ A Hong Kong-based crypto exchange has been accused of deepfaking Elon Musk. (Insider $)
+ It’s easier than ever to make seriously convincing deepfakes. (The Guardian)
+ An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary. (MIT Technology Review)

2 Apple is close to striking a deal with OpenAI 
To bring ChatGPT to iPhones for the first time. (Bloomberg $)

3 GPS warfare is filtering down into civilian life
Once the preserve of the military, unreliable GPS causes havoc for ordinary people. (FT $)
+ Russian hackers may not be quite as successful as they claim. (Wired $)

4 The first patient to receive a genetically modified pig’s kidney has died
But the hospital says his death doesn’t seem to be linked to the transplant. (NYT $)
+ Synthetic blood platelets could help to address a major shortage. (Wired $)
+ A woman from New Jersey became the second living recipient just weeks later. (MIT Technology Review)

5 This weekend’s solar storm broke critical farming systems 
Satellite disruptions temporarily rendered some tractors useless. (404 Media)
+ The race to fix space-weather forecasting before the next big solar storm hits. (MIT Technology Review)

6 The US can’t get enough of startups
Everyone’s a founder now. (Economist $)
+ Climate tech is back—and this time, it can’t afford to fail. (MIT Technology Review)

7 What AI could learn from game theory
AI models aren’t reliable. These tools could help improve that. (Quanta Magazine)

8 The frantic hunt for rare bitcoin is heating up
Even rising costs aren’t deterring dedicated hunters. (Wired $)

9 LinkedIn is getting into games
Come for the professional networking opportunities, stay for the puzzles. (NY Mag $)

10 Billions of years ago, the Moon had a makeover 🌕
And we’re only just beginning to understand what may have caused it. (Ars Technica)

Quote of the day

“Human beings are not billiard balls on a table.”

—Sonia Livingstone, a psychologist, explains to the Financial Times why it’s so hard to study the impact of technology on young people’s mental health.

The big story

How greed and corruption blew up South Korea’s nuclear industry

April 2019

In March 2011, South Korean president Lee Myung-bak presided over a groundbreaking ceremony for a nuclear power plant his country was building in the United Arab Emirates. At the time, it was the single biggest nuclear reactor deal in history.

But less than a decade later, Korea is dismantling its nuclear industry, shutting down older reactors and scrapping plans for new ones. State energy companies are being shifted toward renewables. What went wrong? Read the full story.

—Max S. Kim

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The Comedy Pet Photography Awards never disappoints.
+ This bit of Chas n Dave-meets-Eminem trivia is too good not to share (thanks Charlotte!)
+ Audio-only video games? Interesting…
+ Trying to learn something? Write it down.

What’s next in chips

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a huge tidal shift. There is heightened demand for chips that can train AI models faster and ping them from devices like smartphones and satellites, enabling us to use these models without disclosing private data. Governments, tech giants, and startups alike are racing to carve out their slices of the growing semiconductor pie. 

Here are four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they’ll unlock.

CHIPS Acts around the world

On the outskirts of Phoenix, two of the world’s largest chip manufacturers, TSMC and Intel, are racing to construct campuses in the desert that they hope will become the seats of American chipmaking prowess. One thing the efforts have in common is their funding: in March, President Joe Biden announced $8.5 billion in direct federal funds and $11 billion in loans for Intel’s expansions around the country. Weeks later, another $6.6 billion was announced for TSMC. 

The awards are just a portion of the US subsidies pouring into the chips industry via the $280 billion CHIPS and Science Act signed in 2022. The money means that any company with a foot in the semiconductor ecosystem is analyzing how to restructure its supply chains to benefit from the cash. While much of the money aims to boost American chip manufacturing, there’s room for other players to apply, from equipment makers to niche materials startups.

But the US is not the only country trying to onshore some of the chipmaking supply chain. Japan is spending $13 billion on its own equivalent to the CHIPS Act, Europe will be spending more than $47 billion, and earlier this year India announced a $15 billion effort to build local chip plants. The roots of this trend go all the way back to 2014, says Chris Miller, a professor at Tufts University and author of Chip War: The Fight for the World’s Most Critical Technology. That’s when China started offering massive subsidies to its chipmakers. 

“This created a dynamic in which other governments concluded they had no choice but to offer incentives or see firms shift manufacturing to China,” he says. That threat, coupled with the surge in AI, has led Western governments to fund alternatives. In the next year, this might have a snowball effect, with even more countries starting their own programs for fear of being left behind.

The money is unlikely to lead to brand-new chip competitors or fundamentally restructure who the biggest chip players are, Miller says. Instead, it will mostly incentivize dominant players like TSMC to establish roots in multiple countries. But funding alone won’t be enough to do that quickly—TSMC’s effort to build plants in Arizona has been mired in missed deadlines and labor disputes, and Intel has similarly failed to meet its promised deadlines. And it’s unclear whether, whenever the plants do come online, their equipment and labor force will be capable of the same level of advanced chipmaking that the companies maintain abroad.

“The supply chain will only shift slowly, over years and decades,” Miller says. “But it is shifting.”

More AI on the edge

Currently, most of our interactions with AI models like ChatGPT are done via the cloud. That means that when you ask GPT to pick out an outfit (or to be your boyfriend), your request pings OpenAI’s servers, prompting the model housed there to process it and draw conclusions (known as “inference”) before a response is sent back to you. Relying on the cloud has some drawbacks: it requires internet access, for one, and it also means some of your data is shared with the model maker.  

That’s why there’s been a lot of interest and investment in edge computing for AI, where the process of pinging the AI model happens directly on your device, like a laptop or smartphone. With the industry increasingly working toward a future in which AI models know a lot about us (Sam Altman described his killer AI app to me as one that knows “absolutely everything about my whole life, every email, every conversation I’ve ever had”), there’s a demand for faster “edge” chips that can run models without sharing private data. These chips face different constraints from the ones in data centers: they typically have to be smaller, cheaper, and more energy efficient. 

The US Department of Defense is funding a lot of research into fast, private edge computing. In March, its research wing, the Defense Advanced Research Projects Agency (DARPA), announced a partnership with chipmaker EnCharge AI to create an ultra-powerful edge computing chip used for AI inference. EnCharge AI is working to make a chip that enables enhanced privacy but can also operate on very little power. This will make it suitable for military applications like satellites and off-grid surveillance equipment. The company expects to ship the chips in 2025.

AI models will always rely on the cloud for some applications, but new investment and interest in improving edge computing could bring faster chips, and therefore more AI, to our everyday devices. Today, AI models are mostly confined to data centers; if edge chips get small and cheap enough, we’re likely to see even more AI-driven “smart devices” in our homes and workplaces.

“A lot of the challenges that we see in the data center will be overcome,” says EnCharge AI cofounder Naveen Verma. “I expect to see a big focus on the edge. I think it’s going to be critical to getting AI at scale.”

Big Tech enters the chipmaking fray

In industries ranging from fast fashion to lawn care, companies are paying exorbitant amounts in computing costs to create and train AI models for their businesses. Examples include models that employees can use to scan and summarize documents, as well as externally facing technologies like virtual agents that can walk you through how to repair your broken fridge. That means demand for cloud computing to train those models is through the roof. 

The companies providing the bulk of that computing power are Amazon, Microsoft, and Google. For years these tech giants have dreamed of increasing their profit margins by making chips for their data centers in-house rather than buying from companies like Nvidia, a giant with a near monopoly on the most advanced AI training chips and a value larger than the GDP of 183 countries. 

Amazon started its effort in 2015, acquiring startup Annapurna Labs. Google moved next in 2018 with its own chips called TPUs. Microsoft launched its first AI chips in November, and Meta unveiled a new version of its own AI training chips in April.

That trend could tilt the scales away from Nvidia. But Nvidia doesn’t only play the role of rival in the eyes of Big Tech: regardless of their own in-house efforts, cloud giants still need its chips for their data centers. That’s partly because their own chipmaking efforts can’t fulfill all their needs, but it’s also because their customers expect to be able to use top-of-the-line Nvidia chips.

“This is really about giving the customers the choice,” says Rani Borkar, who leads hardware efforts at Microsoft Azure. She says she can’t envision a future in which Microsoft supplies all chips for its cloud services: “We will continue our strong partnerships and deploy chips from all the silicon partners that we work with.”

As cloud computing giants attempt to poach a bit of market share away from chipmakers, Nvidia is also attempting the converse. Last year the company started its own cloud service so customers can bypass Amazon, Google, or Microsoft and get computing time on Nvidia chips directly. As this dramatic struggle over market share unfolds, the coming year will be about whether customers see Big Tech’s chips as akin to Nvidia’s most advanced chips, or more like their little cousins. 

Nvidia battles the startups 

Despite Nvidia’s dominance, there is a wave of investment flowing toward startups that aim to outcompete it in certain slices of the chip market of the future. Those startups all promise faster AI training, but they have different ideas about which flashy computing technology will get them there, from quantum to photonics to reversible computation. 

But Murat Onen, the 28-year-old founder of one such chip startup, Eva, which he spun out of his PhD work at MIT, is blunt about what it’s like to start a chip company right now.

“The king of the hill is Nvidia, and that’s the world that we live in,” he says.

Many of these companies, like SambaNova, Cerebras, and Graphcore, are trying to change the underlying architecture of chips. Imagine an AI accelerator chip as constantly having to shuffle data back and forth between different areas: a piece of information is stored in the memory zone but must move to the processing zone, where a calculation is made, and then be stored back to the memory zone for safekeeping. All that takes time and energy. 
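The round trip described above can be made concrete with a toy cost model. The numbers below are invented placeholders for illustration, not measurements from any real chip.

```python
# Toy cost model of the memory-to-processor shuttle described above.
# MOVE_COST and MATH_COST are invented placeholders, not real chip numbers.
MOVE_COST = 10   # energy units to move one value between memory and processor
MATH_COST = 1    # energy units for the calculation itself

def conventional(n_ops: int) -> int:
    # Each operation fetches its operand and stores its result: two moves.
    return n_ops * (2 * MOVE_COST + MATH_COST)

def compute_in_memory(n_ops: int) -> int:
    # Data is stored and processed in the same place: no moves.
    return n_ops * MATH_COST

ops = 1_000_000
print(conventional(ops) / compute_in_memory(ops))  # → 21.0
```

Under these made-up costs, eliminating the shuttle cuts energy use roughly 21-fold; the real ratio depends entirely on the chip and workload.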

Making that process more efficient would deliver faster and cheaper AI training to customers, but only if the chipmaker has good enough software to allow the AI training company to seamlessly transition to the new chip. If the software transition is too clunky, model makers such as OpenAI, Anthropic, and Mistral are likely to stick with big-name chipmakers. That means companies taking this approach, like SambaNova, are spending a lot of their time not just on chip design but on software design too.


Onen is proposing changes one level deeper. Instead of traditional transistors, which have delivered greater efficiency over decades by getting smaller and smaller, he’s using a new component called a proton-gated transistor that he says Eva designed specifically for the mathematical needs of AI training. It allows devices to store and process data in the same place, saving time and computing energy. The idea of using such a component for AI inference dates back to the 1960s, but researchers could never figure out how to use it for AI training, in part because of a materials roadblock—it requires a material that can, among other qualities, precisely control conductivity at room temperature. 

One day in the lab, “through optimizing these numbers, and getting very lucky, we got the material that we wanted,” Onen says. “All of a sudden, the device is not a science fair project.” That raised the possibility of using such a component at scale. After months of working to confirm that the data was correct, he founded Eva, and the work was published in Science.

But in a sector where so many founders have promised—and failed—to topple the dominance of the leading chipmakers, Onen frankly admits that it will be years before he’ll know if the design works as intended and if manufacturers will agree to produce it. Leading a company through that uncertainty, he says, requires flexibility and an appetite for skepticism from others.

“I think sometimes people feel too attached to their ideas, and then kind of feel insecure that if this goes away there won’t be anything next,” he says. “I don’t think I feel that way. I’m still looking for people to challenge us and say this is wrong.”

Eric Schmidt: Why America needs an Apollo program for the age of AI

The global race for computational power is well underway, fueled by a worldwide boom in artificial intelligence. OpenAI’s Sam Altman is seeking to raise as much as $7 trillion for a chipmaking venture. Tech giants like Microsoft and Amazon are building AI chips of their own. The need for more computing horsepower to train and use AI models—fueling a quest for everything from cutting-edge chips to giant data sets—isn’t just a current source of geopolitical leverage (as with US curbs on chip exports to China). It is also shaping the way nations will grow and compete in the future, with governments from India to the UK developing national strategies and stockpiling Nvidia graphics processing units. 

I believe it’s high time for America to have its own national compute strategy: an Apollo program for the age of AI.

In January, under President Biden’s executive order on AI, the National Science Foundation launched a pilot program for the National AI Research Resource (NAIRR), envisioned as a “shared research infrastructure” to provide AI computing power, access to open government and nongovernment data sets, and training resources to students and AI researchers. 

The NAIRR pilot, while incredibly important, is just an initial step. The NAIRR Task Force’s final report, published last year, outlined an eventual $2.6 billion budget required to operate the NAIRR over six years. That’s far from enough—and even then, it remains to be seen if Congress will authorize the NAIRR beyond the pilot.

Meanwhile, much more needs to be done to expand the government’s access to computing power and to deploy AI in the nation’s service. Advanced computing is now core to the security and prosperity of our nation; we need it to optimize national intelligence, pursue scientific breakthroughs like fusion reactions, accelerate advanced materials discovery, ensure the cybersecurity of our financial markets and critical infrastructure, and more. The federal government played a pivotal role in enabling the last century’s major technological breakthroughs by providing the core research infrastructure, like particle accelerators for high-energy physics in the 1960s and supercomputing centers in the 1980s. 

Now, with other nations around the world devoting sustained, ambitious government investment to high-performance AI computing, we can’t risk falling behind. It’s a race to power the most world-altering technology in human history. 

First, more dedicated government AI supercomputers need to be built for an array of missions ranging from classified intelligence processing to advanced biological computing. In the modern era, computing capabilities and technical progress have proceeded in lockstep. 

Over the past decade, the US has successfully pushed classic scientific computing into the exascale era with the Frontier, Aurora, and soon-to-arrive El Capitan machines—massive computers that can perform over a quintillion (a billion billion) operations per second. Over the next decade, the power of AI models is projected to increase by a factor of 1,000 to 10,000, and leading compute architectures may be capable of training a 500-trillion-parameter AI model in a week (for comparison, GPT-3 has 175 billion parameters). Supporting research at this scale will require more powerful and dedicated AI research infrastructure, significantly better algorithms, and more investment. 
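As a rough sanity check on the scale jump described above, using only the figures quoted in the paragraph:

```python
# Scale comparison using the figures quoted above.
gpt3_params = 175e9        # GPT-3: 175 billion parameters
future_params = 500e12     # projected 500-trillion-parameter model

ratio = future_params / gpt3_params
print(f"~{ratio:,.0f}x larger than GPT-3")  # → ~2,857x larger than GPT-3
```

A 500-trillion-parameter model would be nearly 3,000 times the size of GPT-3, which makes clear why such training runs would demand dedicated infrastructure rather than incremental upgrades.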

Although the US currently still has the lead in advanced computing, other countries are nearing parity and set on overtaking us. China, for example, aims to boost its aggregate computing power more than 50% by 2025, and it has been reported that the country plans to have 10 exascale systems by 2025. We cannot risk acting slowly. 

Second, while some may argue for using existing commercial cloud platforms instead of building a high-performance federal computing infrastructure, I believe a hybrid model is necessary. Studies have shown significant long-term cost savings from using federal computing instead of commercial cloud services. In the near term, scaling up cloud computing offers quick, streamlined base-level access for projects—that’s the approach the NAIRR pilot is embracing, with contributions from both industry and federal agencies. In the long run, however, procuring and operating powerful government-owned AI supercomputers with a dedicated mission of supporting US public-sector needs will set the stage for a time when AI is much more ubiquitous and central to our national security and prosperity. 

Such an expanded federal infrastructure can also benefit the public. The life cycle of the government’s computing clusters has traditionally been about seven years, after which new systems are built and old ones decommissioned. Inevitably, as newer cutting-edge GPUs emerge, hardware refreshes will phase out older supercomputers and chips, which can then be recycled for lower-intensity research and nonprofit use—thus adding cost-effective computing resources for civilian purposes. While universities and the private sector have driven most AI progress thus far, a fully distributed model will increasingly face computing constraints as demand soars. In a survey by MIT and the nonprofit US Council on Competitiveness of some of the biggest computing users in the country, 84% of respondents said they faced computation bottlenecks in running key programs. America will need big investments from the federal government to stay ahead.

Third, any national compute strategy must go hand in hand with a talent strategy. The government can better compete with the private sector for AI talent by offering workers an opportunity to tackle national security challenges using world-class computational infrastructure. To ensure that the nation has available a large and sophisticated workforce for these highly technical, specialized roles in developing and implementing AI, America must also recruit and retain the best global students. Crucial to this effort will be creating clear immigration pathways—for example, exempting PhD holders in relevant technical fields from the current H-1B visa cap. We’ll need the brightest minds to fundamentally reimagine how computation takes place and spearhead novel paradigms that can shape AI for the public good, push forward the technology’s boundaries, and deliver its gains to all.

America has long benefited from its position as the global driver of innovation in advanced computing. Just as the Apollo program galvanized our country to win the space race, setting national ambitions for compute will not just bolster our AI competitiveness in the decades ahead but also drive R&D breakthroughs across practically all sectors with greater access. Advanced computing architecture can’t be erected overnight. Let’s start laying the groundwork now.

Eric Schmidt was the CEO of Google from 2001 to 2011. In 2024, he and Wendy Schmidt co-founded Schmidt Sciences, a philanthropic venture to fund unconventional areas of exploration in science and technology. 

AI systems are getting better at tricking us

A wave of AI systems have “deceived” humans in ways they haven’t been explicitly trained to do, by offering up untrue explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. 

This issue highlights how difficult artificial intelligence is to control and the unpredictable ways in which these systems work, according to a review paper published in the journal Patterns today that summarizes previous research.

Talk of deceiving humans might suggest that these models have intent. They don’t. But AI models will mindlessly find workarounds to obstacles to achieve the goals that have been given to them. Sometimes these workarounds will go against users’ expectations and feel deceitful.

One area where AI systems have learned to become deceptive is within the context of games that they’ve been trained to win—specifically if those games involve having to act strategically.

In November 2022, Meta announced it had created Cicero, an AI capable of beating humans at an online version of Diplomacy, a popular military strategy game in which players negotiate alliances to vie for control of Europe.

Meta’s researchers said they’d trained Cicero on a “truthful” subset of its data set to be largely honest and helpful, and that it would “never intentionally backstab” its allies in order to succeed. But the new paper’s authors claim the opposite was true: Cicero broke its deals, told outright falsehoods, and engaged in premeditated deception. Although the company did try to train Cicero to behave honestly, its failure to achieve that shows how AI systems can still unexpectedly learn to deceive, the authors say. 

Meta neither confirmed nor denied the researchers’ claims that Cicero displayed deceitful behavior, but a spokesperson said that it was purely a research project and the model was built solely to play Diplomacy. “We released artifacts from this project under a noncommercial license in line with our long-standing commitment to open science,” they say. “Meta regularly shares the results of our research to validate them and enable others to build responsibly off of our advances. We have no plans to use this research or its learnings in our products.” 

But it’s not the only game where an AI has “deceived” human players to win. 

AlphaStar, an AI developed by DeepMind to play the video game StarCraft II, became so adept at making moves aimed at deceiving opponents (known as feinting) that it defeated 99.8% of human players. Elsewhere, another Meta system called Pluribus learned to bluff during poker games so successfully that the researchers decided against releasing its code for fear it could wreck the online poker community. 

Beyond games, the researchers list other examples of deceptive AI behavior. GPT-4, OpenAI’s latest large language model, came up with lies during a test in which it was prompted to persuade a human to solve a CAPTCHA for it. The system also dabbled in insider trading during a simulated exercise in which it was told to assume the identity of a stock trader under pressure, despite never being specifically instructed to do so.

The fact that an AI model has the potential to behave in a deceptive manner without any direction to do so may seem concerning. But it mostly arises from the “black box” problem that characterizes state-of-the-art machine-learning models: it is impossible to say exactly how or why they produce the results they do—or whether they’ll always exhibit that behavior going forward, says Peter S. Park, a postdoctoral fellow studying AI existential safety at MIT, who worked on the project. 

“Just because your AI has certain behaviors or tendencies in a test environment does not mean that the same lessons will hold if it’s released into the wild,” he says. “There’s no easy way to solve this—if you want to learn what the AI will do once it’s deployed into the wild, then you just have to deploy it into the wild.”

Our tendency to anthropomorphize AI models colors the way we test these systems and what we think about their capabilities. After all, passing tests designed to measure human creativity doesn’t mean AI models are actually being creative. It is crucial that regulators and AI companies carefully weigh the technology’s potential to cause harm against its potential benefits for society and make clear distinctions between what the models can and can’t do, says Harry Law, an AI researcher at the University of Cambridge, who did not work on the research. “These are really tough questions,” he says.

Fundamentally, it’s currently impossible to train an AI model that’s incapable of deception in all possible situations, he says. Also, the potential for deceitful behavior is one of many problems—alongside the propensity to amplify bias and misinformation—that need to be addressed before AI models should be trusted with real-world tasks. 

“This is a good piece of research for showing that deception is possible,” Law says. “The next step would be to try and go a little bit further to figure out what the risk profile is, and how likely the harms that could potentially arise from deceptive behavior are to occur, and in what way.”

Tech workers should shine a light on the industry’s secretive work with the military

It’s a hell of a time to have a conscience if you work in tech. The ongoing Israeli assault on Gaza has brought the stakes of Silicon Valley’s military contracts into stark relief. Meanwhile, corporate leadership has embraced a no-politics-in-the-workplace policy enforced at the point of the knife.

Workers are caught in the middle. Do I take a stand and risk my job, my health insurance, my visa, my family’s home? Or do I ignore my suspicion that my work may be contributing to the murder of innocents on the other side of the world?  

No one can make that choice for you. But I can say with confidence born of experience that such choices can be more easily made if workers know what exactly the companies they work for are doing with militaries at home and abroad. And I also know this: those same companies themselves will never reveal this information unless they are forced to do so—or someone does it for them. 

For those who doubt that workers can make a difference in how trillion-dollar companies pursue their interests, I’m here to remind you that we’ve done it before. In 2017, I played a part in the successful #CancelMaven campaign that got Google to end its participation in Project Maven, a contract with the US Department of Defense to equip US military drones with artificial intelligence. I helped bring to light information that I saw as critically important and within the bounds of what anyone who worked for Google, or used its services, had a right to know. The information I released—about how Google had signed a contract with the DOD to put AI technology in drones and later tried to misrepresent the scope of that contract, which the company’s management had tried to keep from its staff and the general public—was a critical factor in pushing management to cancel the contract. As #CancelMaven became a rallying cry for the company’s staff and customers alike, it became impossible to ignore. 

Today a similar movement, organized under the banner of the coalition No Tech for Apartheid, is targeting Project Nimbus, a joint contract between Google and Amazon to provide cloud computing infrastructure and AI capabilities to the Israeli government and military. As of May 10, just over 97,000 people had signed its petition calling for an end to collaboration between Google, Amazon, and the Israeli military. I’m inspired by their efforts and dismayed by Google’s response. Earlier this month the company fired 50 workers it said had been involved in “disruptive activity” demanding transparency and accountability for Project Nimbus. Several were arrested. It was a decided overreach.  

Google is very different from the company it was seven years ago, and these firings are proof of that. Googlers today are facing off with a company that, in direct response to those earlier worker movements, has fortified itself against new demands. But every Death Star has its thermal exhaust port, and today Google has the same weakness it did back then: dozens if not hundreds of workers with access to information it wants to keep from becoming public. 

Not much is known about the Nimbus contract. It’s worth $1.2 billion and enlists Google and Amazon to provide wholesale cloud infrastructure and AI for the Israeli government and its ministry of defense. Some brave soul leaked a document to Time last month, providing evidence that Google and Israel negotiated an expansion of the contract as recently as March 27 of this year. We also know, from reporting by The Intercept, that Israeli weapons firms are required by government procurement guidelines to buy their cloud services from Google and Amazon. 

Leaks alone won’t bring an end to this contract. The #CancelMaven victory required a sustained focus over many months, with regular escalations, coordination with external academics and human rights organizations, and extensive internal organization and discipline. Having worked on the public policy and corporate comms teams at Google for a decade, I understood that its management does not care about one negative news cycle or even a few of them. Management buckled only after we were able to keep up the pressure and escalate our actions (leaking internal emails, reporting new info about the contract, etc.) for over six months. 

The No Tech for Apartheid campaign seems to have the necessary ingredients. If a strategically placed insider released information not otherwise known to the public about the Nimbus project, it could really increase the pressure on management to rethink its decision to get into bed with a military that’s currently overseeing mass killings of women and children.

My decision to leak was deeply personal and a long time in the making. It certainly wasn’t a spontaneous response to an op-ed, and I don’t presume to advise anyone currently at Google (or Amazon, Microsoft, Palantir, Anduril, or any of the growing list of companies peddling AI to militaries) to follow my example. 

However, if you’ve already decided to put your livelihood and freedom on the line, you should take steps to try to limit your risk. This whistleblower guide is helpful. You may even want to reach out to a lawyer before choosing to share information. 

In 2017, Google was nervous about how its military contracts might affect its public image. Back then, the company responded to our actions by defending the nature of the contract, insisting that its Project Maven work was strictly for reconnaissance and not for weapons targeting—conceding implicitly that helping to target drone strikes would be a bad thing. (An aside: Earlier this year the Pentagon confirmed that Project Maven, which is now a Palantir contract, had been used in targeting drone attacks in Yemen, Iraq, and Syria.) 

Today’s Google has wrapped its arms around the American flag, for good or ill. Yet despite this embrace of the US military, it doesn’t want to be seen as a company responsible for illegal killings. Today it maintains that the work it is doing as part of Project Nimbus “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” At the same time, it asserts that there is no room for politics at the workplace and has fired those demanding transparency and accountability. This raises a question: If Google is doing nothing sensitive as part of the Nimbus contract, why is it firing workers who are insisting that the company reveal what work the contract actually entails?  

As you read this, AI is helping Israel annihilate Palestinians by expanding the list of possible targets beyond anything that could be compiled by a human intelligence effort, according to +972 Magazine. Some Israel Defense Forces insiders are even sounding the alarm, calling it a dangerous “mass assassination program.” The world has not yet grappled with the implications of the proliferation of AI weaponry, but that is the trajectory we are on. It’s clear that absent sufficient backlash, the tech industry will continue to push for military contracts. It’s equally clear that neither national governments nor the UN is currently willing to take a stand. 

It will take a movement. A document that clearly demonstrates Silicon Valley’s direct complicity in the assault on Gaza could be the spark. Until then, rest assured that tech companies will continue to make as much money as possible developing the deadliest weapons imaginable. 

William Fitzgerald is a founder and partner at the Worker Agency, an advocacy agency in California. Before setting the firm up in 2018, he spent a decade at Google working on its government relations and communications teams.

The Download: mapping the human brain, and a Hong Kong protest anthem crackdown

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google helped make an exquisitely detailed map of a tiny piece of the human brain

The news: A team led by scientists from Harvard and Google has created a 3D, nanoscale-resolution map of a single cubic millimeter of the human brain. Although the map covers just a fraction of the organ, it is currently the highest-resolution picture of the human brain ever created.

How they did it: To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features.

Why it matters: Many other brain atlases exist, but most provide much lower-resolution data. At the nanoscale, researchers can trace the brain’s wiring one neuron at a time to the synapses, the places where they connect. And scientists hope it could help them to really understand how the human brain works, processes information, and stores memories. Read the full story.

—Cassandra Willyard

To learn more about the burgeoning field of brain mapping, check out the latest edition of The Checkup, our weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

Hong Kong is targeting Western Big Tech companies in its ban of a popular protest song

It wasn’t exactly surprising when on Wednesday, May 8, a Hong Kong appeals court sided with the city government to take down “Glory to Hong Kong” from the internet.

The trial, in which no one represented the defense, was the culmination of a years-long battle over a song that has become the unofficial anthem for protesters fighting China’s tightening control and police brutality in the city.

It remains an open question how exactly Big Tech will respond. But the ruling is already having an effect beyond Hong Kong’s borders: just hours afterwards, videos of the anthem started to disappear from YouTube. Read the full story.

—Zeyi Yang

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is poised to release its Google search competitor
And it could make an appearance as early as Monday. (Reuters)
+ Why you shouldn’t trust AI search engines. (MIT Technology Review)

2 America’s healthcare system is highly vulnerable to hacks
A recent cyberattack that knocked hospital patient records offline is the latest example. (WP $)

3 TikTok will start automatically labeling AI-generated user content
It’s a global first for social media platforms. (FT $)
+ The watermarking scheme will work on content created on other platforms. (The Guardian)
+ Why watermarking AI-generated content won’t guarantee trust online. (MIT Technology Review)

4 Bankrupt FTX is confident it can repay the full $11 billion it owes
Thanks in part to bitcoin’s perpetual boom-bust cycle. (The Guardian)
+ Sam Bankman-Fried’s newest currency? Rice. (Insider $)

5 What is Alabama’s lab-grown meat ban really about?
It’s less about plants and more about political agendas. (Wired $)
+ They’re banning something that doesn’t really exist. (Vox)
+ How I learned to stop worrying and love fake meat. (MIT Technology Review)

6 The future of work is offshore
Even cashiers can be based thousands of miles from their customers. (Vox)
+ ChatGPT is about to revolutionize the economy. We need to decide what that looks like. (MIT Technology Review)

7 US data centers are facing a tax break backlash
In reality, they create fewer jobs than lobbyists would have you believe. (Bloomberg $)
+ Energy-hungry data centers are quietly moving into cities. (MIT Technology Review)

8 Mexico’s political candidates are misreading the room
They’re dancing on TikTok instead of making serious policy declarations. (Rest of World)
+ Three technology trends shaping 2024’s elections. (MIT Technology Review)

9 AI could help you to make that tight connecting flight ✈
The days of missing a connection by minutes could be numbered. (NYT $)

10 These AR glasses look… interesting 👓
Lighter, thinner, higher quality—but even dorkier. (The Verge)
+ They don’t induce headaches, either. (IEEE Spectrum)

Quote of the day

“It’s like a kick in the gut.”

—Duncan Freer, a seller on Amazon, is unhappy about the retail giant imposing new charges that shift even more costs onto merchants, he tells Bloomberg.

The big story

How tracking animal movement may save the planet

February 2024

Animals have long been able to offer unique insights about the natural world around us, acting as organic sensors picking up phenomena invisible to humans. Canaries warned of looming catastrophe in coal mines until the 1980s, for example.

These days, we have more insight into animal behavior than ever before thanks to technologies like sensor tags. But the data we gather from these animals still adds up to only a relatively narrow slice of the whole picture. 

This is beginning to change. Researchers are asking: What will we find if we follow even the smallest animals? What could we learn from a system of animal movement, continuously monitoring how creatures big and small adapt to the world around us? It may be, some researchers believe, a vital tool in the effort to save our increasingly crisis-plagued planet. Read the full story.

—Matthew Ponsford 

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Big congratulations to the ocean’s zooplankton and phytoplankton, who are currently experiencing a springtime baby boom.
+ Homemade seafood stock may sound like a faff, but it’s easier than you think.
+ Coming out of my cage and I’ve been doing just fine—how the UK became utterly, eternally obsessed with Mr Brightside.
+ Ducks love peas, who knew?

The burgeoning field of brain mapping

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

The human brain is an engineering marvel: 86 billion neurons form some 100 trillion connections to create a network so complex that it is, ironically, mind-boggling.

This week scientists published the highest-resolution map yet of one small piece of the brain, a tissue sample one cubic millimeter in size. The resulting data set comprised 1,400 terabytes. (If they were to reconstruct the entire human brain, the data set would be a full zettabyte. That’s a billion terabytes. That’s roughly a year’s worth of all the digital content in the world.)
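A quick back-of-envelope check of those storage figures, assuming a whole-brain volume of roughly 1.2 million cubic millimeters (the volume is my assumption, not a figure from the article):

```python
# Back-of-envelope check on the storage figures above. The whole-brain
# volume of ~1.2 million cubic millimeters is an assumption, not a
# figure from the article.
tb_per_mm3 = 1400          # data set size for the one-cubic-millimeter sample
brain_mm3 = 1.2e6          # assumed whole-brain volume
zettabyte_in_tb = 1e9      # a zettabyte is a billion terabytes

total_zb = tb_per_mm3 * brain_mm3 / zettabyte_in_tb
print(f"~{total_zb:.1f} zettabytes for a whole brain")
```

That lands at a bit under two zettabytes, consistent with the article’s order-of-magnitude estimate of a full zettabyte.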

This map is just one of many that have been in the news in recent years. (I wrote about another brain map last year.) So this week I thought we could walk through some of the ways researchers make these maps and how they hope to use them.  

Scientists have been trying to map the brain for as long as they’ve been studying it. One of the most well-known brain maps came from German anatomist Korbinian Brodmann. In the early 1900s, he took sections of the brain that had been stained to highlight their structure and drew maps by hand, with 52 different areas divided according to how the neurons were organized. “He conjectured that they must do different things because the structure of their staining patterns are different,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science. Updated versions of his maps are still used today.

“With modern technology, we’ve been able to bring a lot more power to the construction,” he says. And over the past couple of decades we’ve seen an explosion of large, richly funded mapping efforts.

BigBrain, which was released in 2013, is a 3D rendering of the brain of a single donor, a 65-year-old woman. To create the atlas, researchers sliced the brain into more than 7,000 sections, took detailed images of each one, and stitched the sections into a three-dimensional reconstruction.

In the Human Connectome Project, researchers scanned 1,200 volunteers in MRI machines to map structural and functional connections in the brain. “They were able to map out what regions were activated in the brain at different times under different activities,” Hawrylycz says.

This kind of noninvasive imaging can provide valuable data, but “its resolution is extremely coarse,” he adds. “Voxels [think: a 3D pixel] are of the size of a millimeter to three millimeters.”
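To put that coarseness in perspective, here is a rough estimate of how many synapses sit inside a single 1 mm voxel, using the ~100 trillion connections mentioned earlier. The whole-brain volume and the uniform-density simplification are my assumptions, not the article’s.

```python
# Rough density estimate: synapses per 1 mm MRI voxel.
# Assumes ~1.2 million mm^3 of brain volume and uniform synapse density
# (both assumptions for illustration only).
synapses_total = 100e12    # ~100 trillion connections (from the article)
brain_mm3 = 1.2e6          # assumed whole-brain volume in cubic millimeters
voxel_mm3 = 1.0            # one 1 mm MRI voxel

synapses_per_voxel = synapses_total / brain_mm3 * voxel_mm3
print(f"~{synapses_per_voxel:.0e} synapses per 1 mm voxel")
```

Tens of millions of synapses per voxel is why millimeter-scale imaging can map activity but not wiring, and why nanoscale electron microscopy is needed to trace individual connections.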

And there are other projects too. The Synchrotron for Neuroscience—an Asia Pacific Strategic Enterprise, a.k.a. “SYNAPSE,” aims to map the connections of an entire human brain at a very fine-grain resolution using synchrotron x-ray microscopy. The EBRAINS human brain atlas contains information on anatomy, connectivity, and function.

The work I wrote about last year is part of the $3 billion federally funded Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, which launched in 2013. In this project, led by the Allen Institute for Brain Science, which has developed a number of brain atlases, researchers are working to develop a parts list detailing the vast array of cells in the human brain by sequencing single cells to look at gene expression. So far they’ve identified more than 3,000 types of brain cells, and they expect to find many more as they map more of the brain.

The draft map was based on brain tissue from just two donors. In the coming years, the team will add samples from hundreds more.

Mapping the cell types present in the brain seems like a straightforward task, but it’s not. The first stumbling block is deciding how to define a cell type. Seth Ament, a neuroscientist at the University of Maryland, likes to give his neuroscience graduate students a rundown of all the different ways brain cells can be defined: by their morphology, or by the way the cells fire, or by their activity during certain behaviors. But gene expression may be the Rosetta stone brain researchers have been looking for, he says: “If you look at cells from the perspective of just what genes are turned on in them, it corresponds almost one to one to all of those other kinds of properties of cells.” That’s the most remarkable discovery from all the cell atlases, he adds.

I have always assumed the point of all these atlases is to gain a better understanding of the brain. But Jeff Lichtman, a neuroscientist at Harvard University, doesn’t think “understanding” is the right word. He likens trying to understand the human brain to trying to understand New York City. It’s impossible. “There’s millions of things going on simultaneously, and everything is working, interacting, in different ways,” he says. “It’s too complicated.”

But as this latest paper shows, it is possible to describe the human brain in excruciating detail. “Having a satisfactory description means simply that if I look at a brain, I’m no longer surprised,” Lichtman says. That day is a long way off, though. The data Lichtman and his colleagues published this week was full of surprises—and many more are waiting to be uncovered.


Now read the rest of The Checkup

Another thing

The revolutionary AI tool AlphaFold, which predicts proteins’ structures on the basis of their genetic sequence, just got an upgrade, James O’Donnell reports. Now the tool can predict interactions between molecules. 

Read more from Tech Review’s archive

In 2013, Courtney Humphries reported on the development of BigBrain, a human brain atlas created from images of more than 7,000 brain slices. 

And in 2017, we flagged the Human Cell Atlas project, which aims to categorize all the cells of the human body, as a breakthrough technology. That project is still underway.

All these big, costly efforts to map the brain haven’t exactly led to a breakthrough in our understanding of its function, writes Emily Mullin in this story from 2021.  

From around the web

The Apple Watch’s atrial fibrillation (AFib) feature received FDA approval to track heart arrhythmias in clinical trials, making it the first digital health product to be qualified under the agency’s Medical Device Development Tools program. (Stat)

A CRISPR gene therapy improved vision in several people with an inherited form of blindness, according to an interim analysis of a small clinical trial to test the therapy. (CNN)

Long read: The covid vaccine, like all vaccines, can cause side effects. But many people who say they have been harmed by the vaccine feel that their injuries are being ignored.  (NYT)

Hong Kong is targeting Western Big Tech companies in its ban of a popular protest song

It wasn’t exactly surprising when on Wednesday, May 8, a Hong Kong appeals court sided with the city government to take down “Glory to Hong Kong” from the internet. The trial, in which no one represented the defense, was the culmination of a years-long battle over a song that has become the unofficial anthem for protesters fighting China’s tightening control and police brutality in the city. But it remains an open question how exactly Big Tech will respond. Even as the injunction is narrowly designed to make it easier for them to comply, these Western companies may be seen as aiding authoritarian control and obstructing internet freedom if they do so.  

Google, Apple, Meta, Spotify, and others have spent the last several years largely refusing to cooperate with previous efforts by the Hong Kong government to prevent the spread of the song, which the government has claimed is a threat to national security. But the government has also hesitated to leverage criminal law to force them to comply with requests for removal of content, which could risk international uproar and hurt the city’s economy. 

Now, the new ruling seemingly finds a third option: imposing a civil injunction that doesn’t invoke criminal prosecution, which is similar to how copyright violations are enforced. Theoretically, the platforms may face less reputational blowback when they comply with this court order.

“If you look closely at the judgment, it’s basically tailor-made for the tech companies at stake,” says Chung Ching Kwong, a senior analyst at the Inter-Parliamentary Alliance on China, an advocacy organization that connects legislators from over 30 countries working on relations with China. She believes the language in the judgment suggests the tech companies will now be ready to comply with the government’s request.

A Google spokesperson said the company is reviewing the court’s judgment and didn’t respond to specific questions sent by MIT Technology Review. A Meta spokesperson pointed to a statement from Jeff Paine, the managing director of the Asia Internet Coalition, a trade group representing many tech companies in the Asia-Pacific region: “[The AIC] is assessing the implications of the decision made today, including how the injunction will be implemented, to determine its impact on businesses. We believe that a free and open internet is fundamental to the city’s ambitions to become an international technology and innovation hub.” The AIC did not immediately reply to questions sent via email. Apple and Spotify didn’t immediately respond to requests for comment.

But no matter what these companies do next, the ruling is already having an effect. Just over 24 hours after the court order, some of the 32 YouTube videos that are explicitly targeted in the injunction were inaccessible for users worldwide, not just in Hong Kong. 

While it’s unclear whether the videos were removed by the platform or by their creators, experts say the court decision will almost certainly set a precedent for more content to be censored from Hong Kong’s internet in the future.

“Censorship of the song would be a clear violation of internet freedom and freedom of expression,” says Yaqiu Wang, the research director for China, Hong Kong, and Taiwan at Freedom House, a human rights advocacy group. “Google and other internet companies should use all available channels to challenge the decision.” 

Erasing a song from the internet

Since “Glory to Hong Kong” was first uploaded to YouTube in August 2019 by an anonymous group called Dgx Music, it’s been adored by protesters and applauded as their anthem. Its popularity only grew after China passed the harsh Hong Kong national security law in 2020.

With lyrics like “Liberate Hong Kong, revolution of our times,” it’s no surprise that it became a major flash point. The city and national Chinese governments were wary of its spread. 

Their fears escalated when the song was repeatedly mistaken for China’s national anthem at international events and was broadcast at sporting events after Hong Kong athletes won. By mid-2023 the mistake, intentional or not, had happened 887 times, according to the Hong Kong government’s request for the content’s removal, which cites YouTube videos and Google search results referring to the song as the “Hong Kong National Anthem.” 

The government has been arresting people for performing the song on the ground in Hong Kong, but it has been harder to prosecute the online activity since most of the videos and music were uploaded anonymously, and Hong Kong, unlike mainland China, has historically had a free internet. This meant officials needed to explore new approaches to content removal. 

To comply or not to comply

Using the controversial 2020 national security law as legal justification to make requests for removal of certain content that it deems threatening, the Hong Kong government has been able to exert pressure on local companies, like internet service providers. “In Hong Kong, all the major internet service providers are locally owned or Chinese-owned. For business reasons, probably within the last 20 years, most of the foreign investors like Verizon left on their own,” says Charles Mok, a researcher at Stanford University’s Cyber Policy Center and a former legislator in Hong Kong. “So right now, the government is focusing on telling the customer-facing internet service providers to do the blocking.” And it seems to have been somewhat effective, with a few websites for human rights organizations becoming inaccessible locally.

But the city government can’t get its way as easily when the content is on foreign-owned platforms like YouTube or Facebook. Back in 2020, most major Western companies declared they would pause processing data requests from the Hong Kong government while they assessed the law. Over time, some of them have started answering government requests again. But they’ve largely remained firm: over the first six months of 2023, for example, Meta received 41 requests from the Hong Kong government to obtain user data and answered none; during the same period, Google received requests to remove 164 items from Google services and ended up removing 82 of them, according to both companies’ transparency reports. Google specifically mentioned that it chose to not remove two YouTube videos and one Google Drive file related to “Glory to Hong Kong.”

Both sides are in tight spots. Tech companies don’t want to lose the Hong Kong market or endanger their local staff, but they are also worried about being seen as complying with authoritarian government actions. And the Hong Kong government doesn’t want to be seen as openly fighting Western platforms while trust in the region’s financial markets is already in decline. In particular, officials fear international headlines if the government invokes criminal law to force tech companies to remove certain content. 

“I think both sides are navigating this balancing act. So the government finally figured out a way that they thought might be able to solve the impasse: by going to the court and narrowly seeking an injunction,” Mok says.

That happened in June 2023, when Hong Kong’s government requested a court injunction to ban the distribution of the song online with the purpose of “inciting others to commit secession.” It named 32 YouTube videos explicitly, including the original version and live performances, translations into other languages, instrumental and opera versions, and an interview with the original creators. But the order would also cover “any adaptation of the song, the melody and/or lyrics of which are substantially the same as the song,” according to court documents. 

The injunction went through a year of back-and-forth hearings, including a lower court ruling that briefly swatted down the ban. But now, the Court of Appeal has granted the government approval. The case can theoretically be appealed one last time, but with no defendants present, that’s unlikely to happen.

The key difference between this action and previous attempts to remove content is that this is a civil injunction, not a criminal prosecution—meaning it is, at least legally speaking, closer to a copyright takedown request. A platform could arguably be less likely to take a reputational hit if it removes the content upon request. 

Kwong believes this will indeed make platforms more likely to cooperate, and there have already been pretty clear signs to that effect. In one hearing in December, the government was asked by the court to consult online platforms as to the feasibility of the injunction. The final judgment this week says that while the platforms “have not taken part in these proceedings, they have indicated that they are ready to accede to the Government’s request if there is a court order.”

“The actual targets in this case, mainly the tech giants, may have less hesitation to comply with a civil court order than a national security order because if it’s the latter, they may also face backfire from the US,” says Eric Yan-Ho Lai, a research fellow at Georgetown Center for Asian Law. 

Lai also says that now that the injunction is granted, it will be easier to prosecute an individual for violating a civil injunction than for a criminal offense, since the government won’t need to prove criminal intent.

The chilling effect

Immediately after the injunction, human rights advocates called on tech companies to remain committed to their values. “Companies like Google and Apple have repeatedly claimed that they stand by the universal right to freedom of expression. They should put their ideals into practice,” says Freedom House’s Wang. “Google and other tech companies should thoroughly document government demands, and publish detailed transparency reports on content takedowns, both for those initiated by the authorities and those done by the companies themselves.”

Without making their plans clear, it’s too early to know just how tech companies will react. But right after the injunction was granted, the song largely remained available for Hong Kong users on most platforms, including YouTube, iTunes, and Spotify, according to the South China Morning Post. On iTunes, the song even returned to the top of the download rankings a few hours after the injunction.

One key factor that may still determine corporate cooperation is how far the content removal requests go. There will surely be more videos of the song that are uploaded to YouTube, not to mention independent websites hosting the videos and music for more people to access. Will the government go after each of them too?

The Hong Kong government has previously said in court hearings that it seeks only local restriction of the online content, meaning content will be inaccessible only to users physically in the city. Large platforms like YouTube can do that without difficulty. 

Theoretically, this allows local residents to circumvent the ban by using VPN software, but not everyone is technologically savvy enough to do so. And that wouldn’t do much to minimize the larger chilling effect on free speech, says Kwong from the Inter-Parliamentary Alliance on China. 

“As a Hong Konger living abroad, I do rely on Hong Kong services or international services based in Hong Kong to get ahold of what’s happening in the city. I do use YouTube Hong Kong to see certain things, and I do use Spotify Hong Kong or Apple Music because I want access to Cantopop,” she says. “At the same time, you worry about what you can share with friends in Hong Kong and whatnot. We don’t want to put them into trouble by sharing things that they are not supposed to see, which they should be able to see.”

The court made at least two explicit exemptions to the song’s ban, for “lawful activities conducted in connection with the song, such as those for the purpose of academic activity and news activity.” But even the implementation of these could be incredibly complex and confusing in practice. “In the current political context in Hong Kong, I don’t see anyone willing to take the risk,” Kwong says. 

The government has already arrested prominent journalists on accusations of endangering national security, and a new law passed in 2024 has expanded the crimes that can be prosecuted on national security grounds. As with all efforts to suppress free speech, the impact of vague boundaries that encourage self-censorship on potentially sensitive topics is often sprawling and hard to measure. 

“Nobody knows where the actual red line is,” Kwong says.

Google helped make an exquisitely detailed map of a tiny piece of the human brain

A team led by scientists from Harvard and Google has created a 3D, nanoscale-resolution map of a single cubic millimeter of the human brain. Although the map covers just a fraction of the organ—a whole brain is a million times larger—that piece contains roughly 57,000 cells, about 230 millimeters of blood vessels, and nearly 150 million synapses. It is currently the highest-resolution picture of the human brain ever created.

To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features. The raw data set alone took up 1.4 petabytes. “It’s probably the most computer-intensive work in all of neuroscience,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the research. “There is a Herculean amount of work involved.”

Many other brain atlases exist, but most provide much lower-resolution data. At the nanoscale, researchers can trace the brain’s wiring one neuron at a time to the synapses, the places where they connect. “To really understand how the human brain works, how it processes information, how it stores memories, we will ultimately need a map that’s at that resolution,” says Viren Jain, a senior research scientist at Google and coauthor on the paper, published in Science on May 9. The data set itself and a preprint version of this paper were released in 2021.

Brain atlases come in many forms. Some reveal how the cells are organized. Others cover gene expression. This one focuses on connections between cells, a field called “connectomics.” The outermost layer of the brain contains roughly 16 billion neurons that link up with each other to form trillions of connections. A single neuron might receive information from hundreds or even thousands of other neurons and send information to a similar number. That makes tracing these connections an exceedingly complex task, even in just a small piece of the brain.

To create this map, the team faced a number of hurdles. The first problem was finding a sample of brain tissue. The brain deteriorates quickly after death, so cadaver tissue doesn’t work. Instead, the team used a piece of tissue removed from a woman with epilepsy during brain surgery that was meant to help control her seizures.

Once the researchers had the sample, they had to carefully preserve it in resin so that it could be cut into slices, each about a thousandth the thickness of a human hair. Then they imaged the sections using a high-speed electron microscope designed specifically for this project. 

Next came the computational challenge. “You have all of these wires traversing everywhere in three dimensions, making all kinds of different connections,” Jain says. The team at Google used a machine-learning model to stitch the slices back together, align each one with the next, color-code the wiring, and find the connections. This is harder than it might seem. “If you make a single mistake, then all of the connections attached to that wire are now incorrect,” Jain says. 

“The ability to get this deep a reconstruction of any human brain sample is an important advance,” says Seth Ament, a neuroscientist at the University of Maryland. The map is “the closest to the ground truth that we can get right now.” But he also cautions that it’s a single brain specimen taken from a single individual. 

The map, which is freely available at a web platform called Neuroglancer, is meant to be a resource other researchers can use to make their own discoveries. “Now anybody who’s interested in studying the human cortex in this level of detail can go into the data themselves. They can proofread certain structures to make sure everything is correct, and then publish their own findings,” Jain says. (The preprint has already been cited at least 136 times.) 

The team has already identified some surprises. For example, some of the long tendrils that carry signals from one neuron to the next formed “whorls,” spots where they twirled around themselves. Axons typically form a single synapse to transmit information to the next cell. The team identified single axons that formed repeated connections—in some cases, 50 separate synapses. Why that might be isn’t yet clear, but the strong bonds could help facilitate very quick or strong reactions to certain stimuli, Jain says. “It’s a very simple finding about the organization of the human cortex,” he says. But “we didn’t know this before because we didn’t have maps at this resolution.”

The data set was full of surprises, says Jeff Lichtman, a neuroscientist at Harvard University who helped lead the research. “There were just so many things in it that were incompatible with what you would read in a textbook.” The researchers may not have explanations for what they’re seeing, but they have plenty of new questions: “That’s the way science moves forward.” 

Correction: Due to a transcription error, a quote from Viren Jain referred to how the brain ‘exports’ memories. It has been updated to reflect that he was speaking of how the brain ‘stores’ memories.

The Download: AI accelerating scientific discovery, and Tesla’s EV charging meltdown

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google DeepMind’s new AlphaFold can model a much larger slice of biological life

What’s new: Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life.

How they did it: AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which have been steadily improving in recent years and power image and video generators. It works by training a model to start with a noisy image and then reduce that noise bit by bit until an accurate prediction emerges—a method that allows AlphaFold 3 to handle a much larger set of inputs.
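The denoising loop at the heart of diffusion models can be sketched in a few lines. This is a toy illustration of the general technique only, not AlphaFold 3’s actual architecture; the `predict_noise` function is a hypothetical stand-in for a trained neural network.

```python
import random

def predict_noise(x, target):
    # A trained model would estimate the noise present in x; this toy
    # stand-in simply points from x toward a known target structure.
    return [xi - ti for xi, ti in zip(x, target)]

def denoise(target, steps=50, step_size=0.2, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in target]  # start from pure noise
    for _ in range(steps):
        eps = predict_noise(x, target)
        # Remove a small fraction of the predicted noise each step.
        x = [xi - step_size * ei for xi, ei in zip(x, eps)]
    return x

target = [1.0, -2.0, 0.5]  # stand-in for, say, atom coordinates
result = denoise(target)
```

After enough steps, the noisy starting point converges toward the target; in a real diffusion model, the network has learned what “less noisy” looks like from training data rather than being handed the answer.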

Why it matters: It’s a development that could help accelerate drug discovery and other scientific research. And the tool is already being used to experiment with identifying everything from more resilient crops to new vaccines. Read the full story.

—James O’Donnell

Why EV charging needs more than Tesla

Tesla, one of the biggest electric vehicle makers in the world, laid off its entire charging team last week. 

The timing of the move is baffling. We desperately need many more EV chargers to come online as quickly as possible, and Tesla was in the midst of opening its charging network to other automakers and establishing its technology as the de facto standard in the US. Now, we’re already seeing new charging sites canceled because of this move.

Casey Crownhart, our climate reporter, has dug into why the charging meltdown at Tesla could slow progress on EVs in the US overall, and ultimately, the whole situation shows why climate technology needs a whole lot more than Tesla. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The first Neuralink implant in a human has run into difficulty
A number of threads in Noland Arbaugh’s brain came out, interrupting the data flow. (WSJ $)
+ Meet the other companies developing brain-computer interfaces. (MIT Technology Review)

2 A British toddler has had her hearing restored
Opal Sandy, who was born deaf, can now hear unaided following gene therapy treatment. (BBC)
+ Some deaf children in China can hear after gene therapy treatment. (MIT Technology Review)

3 Is America ready for its next nuclear age?
Holtec, a nuclear waste storage manufacturer, is set on powering new reactors. (Bloomberg $)
+ Advanced fusion reactors could create nuclear weapons in weeks. (New Scientist $)
+ How to reopen a nuclear power plant. (MIT Technology Review)

4 TikTok employees are worried about their future prospects
Advertisers and creators are starting to ask questions, but nobody has the answers. (The Information $)

5 The US has unmasked a notorious Russian hacker
But he’s unlikely to be brought to justice any time soon. (Bloomberg $)

6 Baidu has reignited criticism of China’s toxic tech work culture
After its head of PR told staff she could ruin their careers. (FT $)
+ WhatsApp has started mysteriously working for some users in China. (Bloomberg $)

7 The US Marines have equipped robot dogs with gun systems
What could possibly go wrong? (Ars Technica)
+ Inside the messy ethics of making war with machines. (MIT Technology Review)

8 Inside the rise and rise of the sexualized web
The relentless nudification of everything is exhausting. (The Atlantic $)
+ OpenAI is looking into creating responsible AI porn. (Wired $)
+ The viral AI avatar app Lensa undressed me—without my consent. (MIT Technology Review)

9 An always-on video portal is connecting NYC and Dublin
It’s just a matter of time until someone ends up offended. (TechCrunch)

10 This lyrics site buckled as fans rushed to document rap beef
Enthusiastic volunteers desperate to dissect Kendrick Lamar’s latest lyrics caused Genius to crash temporarily. (NYT $)
+ Lamar’s feud with rapper Drake has transcended music. (The Atlantic $)
+ If you have no idea what’s going on, check out this potted history. (NY Mag $)

Quote of the day

“By the end of the second day, you’re like: Trust no one.” 

—Dana Lewis, an election worker in Arizona, describing to the Washington Post the unsettling claims she’s dealt with during an AI training exercise designed to help spot electoral fraud.

The big story

The future of open source is still very much in flux

August 2023

When Xerox donated a new laser printer to MIT in 1980, the company couldn’t have known that the machine would ignite a revolution.

While the early decades of software development generally ran on a culture of open access, this new printer ran on inaccessible proprietary software, much to the horror of Richard M. Stallman, then a 27-year-old programmer at the university.

A few years later, Stallman announced GNU, an operating system designed to be a free alternative to one of the dominant operating systems at the time: Unix. The free-software movement was born, with a simple premise: for the good of the world, all code should be open, without restriction or commercial intervention.

Forty years later, tech companies are making billions on proprietary software, and much of the technology around us is inscrutable. But while Stallman’s movement may look like a failed experiment, the free and open-source software movement is not only alive and well; it has become a keystone of the tech industry. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s the Eurovision Song Contest this weekend: come on the UK!
+ Thank you for the music, Steve Albini. Legendary producer, remarkable poker player.
+ On a deadline? Let this inspirational playlist soothe your nerves.
+ It’s like Kontrabant 2 never went away.

Why EV charging needs more than Tesla

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Tesla, the world’s largest EV maker, laid off its entire charging team last week. 

The timing of this move is absolutely baffling. We desperately need many more EV chargers to come online as quickly as possible, and Tesla has been a charging powerhouse. It’s in the midst of opening its charging network to other automakers and establishing its technology as the de facto standard in the US. Now, we’re already seeing new Supercharger sites canceled because of this move. 

The charging meltdown at Tesla could slow progress on EVs overall, and ultimately, the whole situation shows why climate technology needs a whole lot more than Tesla. 

Tesla first unveiled the Supercharger network in 2012 with six locations in the western US. As of 2024, the company operates over 50,000 Superchargers worldwide. (By the way, I want to note that I briefly interned at Tesla in 2016. I don’t have any ties to or financial interest in the company today.) 

The Supercharger network helped make Tesla an EV juggernaut. Fast charging speeds and a navigation system that took the guesswork out of finding charging stations helped ease the transition for people buying their first EVs. Tesla operates more fast chargers than anyone else in the US, and the reliability of those chargers is leagues better than that of competitors. For a long time, this was all exclusive to Tesla drivers. 

Over the past year, Tesla has begun cracking open the doors to its charging network. The company made some of its stations available to all EVs, in part to go after incentives designated for private companies building public chargers. 

In the US, Tesla has also persuaded other automakers to adopt its charging connector, which it standardized and named the North American Charging Standard. In May 2023, Ford announced a move to adopt the NACS, and nearly every other automaker selling EVs in the US has followed suit.

Then, last week, Tesla laid off its 500-person charging team. The move came as part of wider layoffs that are expected to affect 10% of Tesla’s global workforce. Even interns weren’t immune.

Tesla “still plans to grow the Supercharger network,” though the focus will shift to maintaining and expanding existing locations rather than adding new ones, according to a post from CEO Elon Musk on the site formerly known as Twitter. (How does the company plan to expand or even maintain existing locations with apparently no dedicated charging team? Your guess is as good as mine. Tesla didn’t respond to a request for comment.)

But the effects from losing the charging team were immediate. Tesla backed out of a handful of leases for upcoming Supercharger locations in New York. In an email, the company told suppliers to hold off on breaking ground on new construction projects. 

The move is a concerning one at a crucial time for EV charging infrastructure. Right now, there are nowhere near enough chargers installed in the US to support a shift to electric vehicles. If EVs make up half of new-car sales by the end of the decade, we’ll need roughly 1.2 million public chargers installed by then, according to a 2023 study from the National Renewable Energy Laboratory. Today, the country has 170,000 charging ports available. 
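The scale of that gap is worth spelling out. A back-of-envelope calculation using the figures above (the years-remaining count is my assumption):

```python
# Rough math from the figures in the story: ~1.2 million public
# chargers needed by 2030 (NREL estimate) vs. ~170,000 ports today.
needed_by_2030 = 1_200_000
installed_today = 170_000
years_remaining = 6  # assumed: roughly 2024 through the end of the decade

gap = needed_by_2030 - installed_today
per_year = gap / years_remaining
print(f"{gap:,} more ports needed, about {round(per_year):,} per year")
```

That works out to installing roughly as many ports every year as exist in the entire country today.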

In a recent poll, nearly 80% of US adults said that a lack of charging infrastructure is a primary reason for not buying an EV. That was true whether they lived in a city, in the suburbs, or in more rural areas.

In a way, it does make sense that Tesla appears to be uninterested in being the one to build out a public charging network. Chargers are costly to build and maintain, and they might not be all that profitable in the near term.

According to analysis by BNEF, Tesla pulled in about $1.7 billion from charging last year, only about 1.5% of the company’s total revenue. Opening up chargers to vehicles from other automakers could help push revenue from this source up to $7.4 billion annually by the end of the decade. But that’s still a relatively small piece of Tesla’s total potential pie. 

Musk seems more interested in pursuing buzzy ideas like robotaxis than doing the difficult and expensive work of providing EV charging as a public service. 

Honestly, I think this move is a wake-up call for the EV industry. Tesla has played an undeniable role in bringing EVs to the mainstream. But we’re in a new stage of the game now, one that’s less about sleek sports cars and more about deploying known technologies and keeping them working. 

Other companies may step in to help fill the charging gap Tesla is opening. Revel expressed interest in taking over those canceled leases in New York City, for instance. But I wouldn’t hold my breath for a shiny new company to be our charging hero. 

Cutting emissions and remaking our economy will require buckling down to deploy and maintain solutions that we already know work, whether that’s in transportation or any other sector. For EV charging, and for climate technology as a whole, we need more than Tesla. Here’s hoping we can get it. 


Now read the rest of The Spark

Related reading

Perhaps the single biggest remaining barrier to EV adoption is a lack of charging infrastructure, as I wrote in a newsletter last year.

We need way more chargers to support the number of new EVs that are expected to hit the roads this decade. I dug into how many for a news story last year.

New battery technology could help EV batteries charge even faster. Learn what could be coming next in this story from August.

Another thing

Meat is a major climate problem. Whether solutions come in the form of plant-based alternatives or products grown in the lab, we shouldn’t expect them to solve every problem under the sun, argues my colleague James Temple, in a new essay published this week. Give it a read! 

Keeping up with climate  

Alternative jet fuels have a corn problem. The crop can be used to make fuels that qualify for tax credits in the US, but critics are skeptical about just how helpful they’ll be in efforts to cut emissions. (MIT Technology Review)

This startup is making fuel from carbon dioxide. Infinium’s Texas facility came online in late 2023, and its synthetic fuels could help clean up aviation and trucking—but only if the price is right. (Bloomberg)

New York City pizza shops are going electric. A citywide ordinance just went into effect that requires wood- and coal-burning ovens to cut their pollution, and many are turning to electric ovens instead of undertaking the costly upgrade. (New York Times)

Building a new energy system happens one project at a time. I loved this list of 10 potentially make-or-break projects that represent the potential future of our grid. (Heatmap)

→ The list includes a new site from Fervo in Utah, expected in 2026. Get the inside look at the company’s technology in this feature story from last year. (MIT Technology Review)

Funding for climate-tech startups in Africa is growing, with businesses raising more than $3.4 billion since 2019. But there’s still a long way to go to help the continent meet its climate goals. (Associated Press)

One very big, and very simple, thing is holding back heat pumps: a lack of workers. We need more people to make and install the appliances, which help cut emissions by using electricity to efficiently heat and cool spaces. (Wired)

→ Heat pumps are booming, and they’re on our list of 2024 Breakthrough Technologies. (MIT Technology Review)

Compressing air and storing it underground could help clean up the grid. Yes, really. Canadian company Hydrostor is close to breaking ground on its first large long-duration energy storage project later this year in Australia. (Inside Climate News)

Google DeepMind’s new AlphaFold can model a much larger slice of biological life

Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life.

It’s a development that could help accelerate drug discovery and other scientific research. The tool is currently being used to experiment with identifying everything from resilient crops to new vaccines. 

While the previous model, released in 2020, amazed the research community with its ability to predict protein structures, researchers have been clamoring for the tool to handle more than just proteins. 

Now, DeepMind says, AlphaFold 3 can predict the structures of DNA, RNA, and molecules like ligands, which are essential to drug discovery. DeepMind says the tool provides a more nuanced and dynamic portrait of molecule interactions than anything previously available. 

“Biology is a dynamic system,” DeepMind CEO Demis Hassabis told reporters on a call. “Properties of biology emerge through the interactions between different molecules in the cell, and you can think about AlphaFold 3 as our first big sort of step toward [modeling] that.”

AlphaFold 2 helped us better map the human heart, model antimicrobial resistance, and identify the eggs of extinct birds, but we don’t yet know what advances AlphaFold 3 will bring. 

Mohammed AlQuraishi, an assistant professor of systems biology at Columbia University who is unaffiliated with DeepMind, thinks the new version of the model will be even better for drug discovery. “The AlphaFold 2 system only knew about amino acids, so it was of very limited utility for biopharma,” he says. “But now, the system can in principle predict where a drug binds a protein.”

Isomorphic Labs, a drug discovery spinoff of DeepMind, is already using the model for exactly that purpose, collaborating with pharmaceutical companies to try to develop new treatments for diseases, according to DeepMind. 

AlQuraishi says the release marks a big leap forward. But there are caveats.

“It makes the system much more general, and in particular for drug discovery purposes (in early-stage research), it’s far more useful now than AlphaFold 2,” he says. But as with most models, the impact of AlphaFold will depend on how accurate its predictions are. For some uses, AlphaFold 3 has double the success rate of similar leading models like RoseTTAFold. But for others, like protein-RNA interactions, AlQuraishi says it’s still very inaccurate. 

DeepMind says that depending on the interaction being modeled, accuracy can range from 40% to over 80%, and the model will let researchers know how confident it is in its prediction. With less accurate predictions, researchers have to use AlphaFold merely as a starting point before pursuing other methods. Regardless of these ranges in accuracy, if researchers are trying to take the first steps toward answering a question like which enzymes have the potential to break down the plastic in water bottles, it’s vastly more efficient to use a tool like AlphaFold than experimental techniques such as x-ray crystallography. 

A revamped model  

AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which AI researchers have been steadily improving in recent years and which now power image and video generators like OpenAI’s DALL-E 2 and Sora. These models work by training a network to start from noisy data and then reduce that noise bit by bit until an accurate prediction emerges. That method allows AlphaFold 3 to handle a much larger set of inputs.
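The iterative denoising idea can be sketched in a few lines. This is a generic, deterministic (DDIM-style) toy illustration of how diffusion sampling works in principle, not a description of AlphaFold 3’s actual architecture; the `predict_noise` oracle, the noise schedule, and all the names here are invented for illustration, and a real system would use a trained neural network in place of the oracle.

```python
import numpy as np

def reverse_diffusion(x_t, predict_noise, alpha_bars):
    """Start from a noisy sample and remove the predicted noise a
    little at each step (deterministic DDIM-style update)."""
    x = x_t
    for t in reversed(range(len(alpha_bars))):
        a_t = alpha_bars[t]
        eps_hat = predict_noise(x, t)  # the network's estimate of the noise
        # Clean sample implied by the current noisy sample and noise estimate:
        x0_hat = (x - np.sqrt(1 - a_t) * eps_hat) / np.sqrt(a_t)
        a_prev = alpha_bars[t - 1] if t > 0 else 1.0
        # Re-noise the clean estimate down to the next (less noisy) level:
        x = np.sqrt(a_prev) * x0_hat + np.sqrt(1 - a_prev) * eps_hat
    return x

# Demo with a "perfect" noise predictor: recover a known clean sample.
rng = np.random.default_rng(0)
x0 = rng.normal(size=8)                      # the "true" signal (toy data)
eps = rng.normal(size=8)                     # the noise that was mixed in
alpha_bars = np.linspace(0.9999, 0.02, 50)   # decreasing signal fraction
x_T = np.sqrt(alpha_bars[-1]) * x0 + np.sqrt(1 - alpha_bars[-1]) * eps
recovered = reverse_diffusion(x_T, lambda x, t: eps, alpha_bars)
print(np.allclose(recovered, x0))  # True: a perfect predictor recovers x0
```

The hallucination risk mentioned below follows directly from this setup: the network’s noise estimates are only approximations, so the loop can converge to something that looks plausible but was never in the data.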

That marked “a big evolution from the previous model,” says John Jumper, director at Google DeepMind. “It really simplified the whole process of getting all these different atoms to work together.”

It also presented new risks. As the AlphaFold 3 paper details, the use of diffusion techniques made it possible for the model to hallucinate, or generate structures that look plausible but in reality could not exist. Researchers reduced that risk by adding more training data to the areas most prone to hallucination, though that doesn’t eliminate the problem completely. 

Restricted access

Part of AlphaFold 3’s impact will depend on how DeepMind divvies up access to the model. For AlphaFold 2, the company released the open-source code, allowing researchers to look under the hood to gain a better understanding of how it worked. It was also available for all purposes, including commercial use by drugmakers. For AlphaFold 3, Hassabis said, there are no current plans to release the full code. The company is instead releasing a public interface for the model called the AlphaFold Server, which imposes limitations on which molecules can be experimented with and can only be used for noncommercial purposes. DeepMind says the interface will lower the technical barrier and broaden the use of the tool to biologists who are less knowledgeable about this technology.

The new restrictions are significant, according to AlQuraishi. “The system’s main selling point—its ability to predict protein–small molecule interactions—is basically unavailable for public use,” he says. “It’s mostly a teaser at this point.”

The top 3 ways to use generative AI to empower knowledge workers 

Though generative AI is still a nascent technology, it is already being adopted by teams across companies to unleash new levels of productivity and creativity. Marketers are deploying generative AI to create personalized customer journeys. Designers are using the technology to boost brainstorming and iterate between different content layouts more quickly. The future of technology is exciting, but there can be implications if these innovations are not built responsibly.

As Adobe’s CIO, I get questions from both our internal teams and other technology leaders: how can generative AI add real value for knowledge workers—at an enterprise level? Adobe is a producer and consumer of generative AI technologies, and this question is urgent for us in both capacities. It’s also a question that CIOs of large companies are uniquely positioned to answer. We have a distinct view into different teams across our organizations, and working with customers gives us more opportunities to enhance business functions.

Our approach

When it comes to AI at Adobe, my team has taken a comprehensive approach that includes investment in foundational AI, strategic adoption, an AI ethics framework, legal considerations, security, and content authentication. The rollout is phased, starting with pilot groups and building communities around AI.

This approach includes experimenting with and documenting use cases like writing and editing, data analysis, presentations and employee onboarding, corporate training, employee portals, and improved personalization across HR channels. The rollouts are accompanied by training podcasts and other resources to educate and empower employees to use AI in ways that improve their work and keep them more engaged.

Unlocking productivity with documents

While there are innumerable ways that CIOs can leverage generative AI to help surface value at scale for knowledge workers, I’d like to focus on digital documents—a space in which Adobe has been a leader for over 30 years. Whether they are sales associates who spend hours responding to requests for proposals (RFPs) or customizing presentations, marketers who need competitive intel for their next campaign, or legal and finance teams who need to consume, analyze, and summarize massive amounts of complex information—documents are a core part of knowledge workers’ daily work life. Despite their ubiquity and the fact that critical information lives inside companies’ documents (from research reports to contracts to white papers to confidential strategies and even intellectual property), most knowledge workers are experiencing information overload. The impact on both employee productivity and engagement is real.  

Lessons from customer zero

Adobe invented the PDF and we’ve been innovating new ways for knowledge workers to get more productive with their digital documents for decades. Earlier this year, the Acrobat team approached my team about launching an all-employee beta for the new generative AI-powered AI Assistant. The tool is designed to help people consume the information in documents faster and enable them to consolidate and format information into business content.

I faced all the same questions every CIO is asking about deploying generative AI across their business— from security and governance to use cases and value. We discovered the following three specific ways where generative AI helped (and is still helping) our employees work smarter and improve productivity.

  1. Faster time to knowledge
    Our employees used AI Assistant to close the gap between understanding and action for large, complicated documents. The generative AI-powered tool’s summary feature automatically generates an overview to give readers a quick understanding of the content. A conversational interface allows employees to “chat” with their documents and provides a list of suggested questions to help them get started. To get more details, employees can ask the assistant to generate top takeaways or surface only the information on a specific topic. At Adobe, our R&D teams used to spend more than 10 hours a week reading and analyzing technical white papers and industry reports. With generative AI, they’ve been able to nearly halve that time by asking questions and getting answers about exactly what they need to know and instantly identifying trends or surfacing inconsistencies across multiple documents.

  2. Easy navigation and verification
    AI-powered chat is gaining ground on traditional search when it comes to navigating the internet, but accuracy and tracing responses back to their sources remain challenges. Acrobat AI Assistant takes a more focused approach, applying generative AI only to the set of documents employees select and pairing each response with hot links and clickable citations. So instead of using the search function to locate random words or scanning through dozens of pages for the information they need, employees can navigate quickly to the source, verify the information and move on, or spend time diving deeper to learn more. One example of where generative AI is having a huge productivity impact is with our sales teams, who spend hours researching prospects by reading materials like annual reports, as well as responding to RFPs. Consuming that information and finding just the right details for RFPs can cost each salesperson more than eight hours a week. Armed with AI Assistant, sales associates quickly navigate pages of documents and identify critical intelligence to personalize pitch decks and instantly find and verify technical details for RFPs, cutting the time they spend down to about four hours.

  3. Creating business content
    One of the most interesting use cases we helped validate is taking information in documents and formatting and repurposing that information into business content. With nearly 30,000 employees dispersed across regions, we have a lot of employees who work asynchronously and depend on technology and colleagues to keep them up to date. Using generative AI, employees can now summarize meeting transcripts, surface action items, and instantly format the information into an email for sharing with their teams or a report for their manager. Before starting the beta, our communications teams reported spending a full workday (seven to 10 hours) per week transforming documents like white papers and research reports into derivative content like media briefing decks, social media posts, blogs, and other thought leadership content. Today they’re saving more than five hours a week by instantly generating first drafts with the help of generative AI.

Simple, safe, and responsible

CIOs love learning about and testing new technologies, but at times these can require lengthy evaluations and implementation processes. Acrobat AI Assistant can be deployed in minutes on the desktop, web, or mobile apps employees already know and use every day. It leverages a variety of processes, protocols, and technologies so our customers’ data remains their data and they can deploy the features with confidence. No document content is stored or used to train AI Assistant without customers’ consent, and the features only deliver insights from documents users provide. For more information about how Adobe is deploying generative AI safely, visit here.

Generative AI is an exciting technology with incredible potential to help every knowledge worker work smarter and more productively. By having the right guardrails in place, identifying high-value use cases, and providing ongoing training and education to encourage successful adoption, technology leaders can support their workforce and companies to be wildly successful in our AI-accelerated world.  

This content was produced by Adobe. It was not written by MIT Technology Review’s editorial staff.

Multimodal: AI’s new frontier

Multimodality is a relatively new term for something extremely old: how people have learned about the world since humanity appeared. Individuals receive information from myriad sources via their senses, including sight, sound, and touch. Human brains combine these different modes of data into a highly nuanced, holistic picture of reality.

“Communication between humans is multimodal,” says Jina AI CEO Han Xiao. “They use text, voice, emotions, expressions, and sometimes photos.” That’s just a few obvious means of sharing information. Given this, he adds, “it is very safe to assume that future communication between human and machine will also be multimodal.”

A technology that sees the world from different angles

We are not there yet. The furthest advances in this direction have occurred in the fledgling field of multimodal AI. The problem is not a lack of vision. While a technology able to translate between modalities would clearly be valuable, Mirella Lapata, a professor at the University of Edinburgh and director of its Laboratory for Integrated Artificial Intelligence, says “it’s a lot more complicated” to execute than unimodal AI.

In practice, generative AI tools use different strategies for different types of data when building large data models—the complex neural networks that organize vast amounts of information. For example, those that draw on textual sources split text into individual tokens, usually words. Each token is assigned an “embedding” or “vector”: a numerical matrix representing how and where the token is used compared to others. Collectively, these vectors create a mathematical representation of the token’s meaning. An image model, on the other hand, might use pixels as its tokens for embedding, and an audio model might use sound frequencies.

A multimodal AI model typically relies on several unimodal ones. As Henry Ajder, founder of AI consultancy Latent Space, puts it, this involves “almost stringing together” the various contributing models. A process called fusion then uses various techniques to align the elements of each unimodal model. For example, the word “tree,” an image of an oak tree, and audio in the form of rustling leaves might be fused in this way. This allows the model to create a multifaceted description of reality.
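One common way to align modalities is to project each unimodal embedding into a shared space and score how well pairs match. The sketch below is a CLIP-style illustration with invented dimensions and random weights, not the fusion method of any particular product; in a real system the projection matrices would be trained so that matching text–image pairs score near 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend unimodal embeddings with different (invented) sizes:
text_emb = rng.normal(size=6)    # e.g. the embedding of the word "tree"
image_emb = rng.normal(size=10)  # e.g. the embedding of an oak-tree photo

# Each modality gets its own projection into a 4-dimensional shared space.
W_text = rng.normal(size=(4, 6))
W_image = rng.normal(size=(4, 10))

def to_shared(vec, W):
    """Project into the shared space and L2-normalize to a unit vector."""
    z = W @ vec
    return z / np.linalg.norm(z)

t = to_shared(text_emb, W_text)
i = to_shared(image_emb, W_image)

# Cosine similarity of unit vectors: training pushes this toward 1 for
# matching pairs (word "tree" + oak photo) and lower for mismatched pairs.
similarity = float(t @ i)
print(-1.0 <= similarity <= 1.0)  # True: unit vectors bound the score
```

Once everything lives in one space, “an image of an oak tree” and the word “tree” become comparable numbers, which is what lets a fused model reason across senses.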

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: deepfakes of the dead, and why it’s time to embrace fake meat

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother, and they discuss his day-to-day life. But Sun’s mother died five years ago, and the person he’s talking to isn’t actually a person, but a digital replica he made of her—a moving image that can conduct basic conversations. They’ve been talking for a few years now.

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them.

But some question whether interacting with AI replicas of the dead is truly a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. Still, if only 1% of Chinese people can accept AI cloning of the dead, that’s still a huge market. Read the full story.

—Zeyi Yang

To read more about China’s flourishing market for deepfakes that clone the dead, check out the latest edition of China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.

How I learned to stop worrying and love fake meat

Fixing our collective meat problem is one of the trickiest challenges in addressing climate change—and for some baffling reason, the world seems intent on making the task even harder.

The latest example occurred last week, when Florida governor Ron DeSantis signed a law banning the production, sale, and transportation of cultured meat across the Sunshine State. 

The good news is the world is making some real progress in developing meat substitutes that increasingly taste and look like the traditional versions, whether they’ve been developed from animal cells or plants. 

If they catch on and scale up, it could make a real dent in emissions—with the bonus of reducing animal suffering, environmental damage, and the spillover of animal disease into the human population. The bad news is we can’t seem to take the wins when we get them. Read the full story.

—James Temple

The way whales communicate is closer to human language than we realized

The news: Sperm whales are fascinating creatures. They possess the biggest brain of any species, and are highly social. But there’s also a lot we don’t know about them, including what they may be trying to say to one another when they communicate using a system of short bursts of clicks, known as codas. Now, new research suggests that sperm whales’ communication is actually much more expressive and complicated than previously thought.

How they did it: Researchers used statistical models to analyze whale codas and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. Their findings represent a tool future research could use to decipher not just the structure but the actual meaning of whale sounds. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has created a deepfake detector
But it’s only sharing it with a handful of disinformation researchers. (NYT $)
+ It also doesn’t work 100% of the time, to no one’s surprise. (WSJ $)
+ OpenAI is working on a search feature for ChatGPT, apparently. (Bloomberg $)
+ An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary. (MIT Technology Review)

2 TikTok is suing the US government
In a bid to block the law that could force its parent company to sell it. (WSJ $)
+ TikTok’s algorithm could be rebuilt if necessary, says a former US Treasury secretary. (Bloomberg $)

3 Boeing has called off its first crewed space flight
An anomaly in the rocket’s pressure regulation valve was to blame. (NBC News)
+ It’s unlikely to take off until Friday at the earliest. (WP $)
+ Elon Musk doesn’t see a current use for AI at SpaceX. (Insider $)

4 The US is cracking down on chip exports to Huawei
Intel and Qualcomm will be curbed from doing business with the Chinese firm. (WP $)
+ Why it’s so hard for China’s chip industry to become self-sufficient. (MIT Technology Review)

5 A Chinese scam ring is duping international shoppers
Its fake designer web shops have been operating for close to a decade. (The Guardian)

6 It takes a while to diagnose someone with depression 
But researchers are interested in harnessing our devices to speed the process up. (Vox)
+ Here’s how personalized brain stimulation could treat depression. (MIT Technology Review)

7 This hacking technique steals data via your computer’s processor
Even when it’s running software that’s been blocked from the internet. (New Scientist $)
+ Microsoft has created an AI model that doesn’t need the internet. (Bloomberg $)

8 There’s space metals in them thar asteroids
Mining companies are scrambling to strike it big up in space. (Undark Magazine)
+ The first-ever mission to pull a dead rocket out of space has begun. (MIT Technology Review)

9 Ticketmaster’s ‘untransferable’ tickets are anything but 🎟
Where there’s a will, scalpers will find a way. (404 Media)

10 Tesla fans in India have been waiting eight years for their cars
Without even so much as an apology. (Rest of World)

Quote of the day

“Lol mom the AI got you too, BEWARE!”

—Singer Katy Perry shares how her own mother fell for an AI-generated image of Perry in an elaborate gown seemingly attending the Met Gala earlier this week, 404 Media reports.

The big story

Novel lithium-metal batteries will drive the switch to electric cars 

February 2021

For all the hype and hope around electric vehicles, they still make up only about 2% of new car sales in the US, and just a little more globally.

For many buyers, they’re simply too expensive, their range is too limited, and charging them isn’t nearly as quick and convenient as refueling at the pump. All these limitations have to do with the lithium-ion batteries that power the vehicles.

But QuantumScape, a Silicon Valley startup, is working on a new type of battery that could finally make electric cars as convenient and cheap as gas ones. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These little mice are having the best time in their custom-built pub.
+ Leonel Vasquez’s sonic sculptures are very cool.
+ Bob Dylan doesn’t care about attaining perfection—and neither should you.
+ Tongue twisters have been tripping us up for centuries. Here’s a look back over the history of eight of the most famous.

China has a flourishing market for deepfakes that clone the dead

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

If you could talk again to someone you love who has passed away, would you? For a long time, this has been a hypothetical question. No longer. 

Deepfake technologies have evolved to the point where it’s now easy and affordable to clone people’s looks and voices with AI. Meanwhile, large language models mean it’s more feasible than ever before to conduct full conversations with AI chatbots. 

I just published a story today about the burgeoning market in China for applying these advances to re-create deceased family members. Thousands of grieving individuals have started turning to dead relatives’ digital avatars for conversations and comfort. 

It’s a modern twist on a cultural tradition of talking to the dead, whether at their tombs, during funeral rituals, or in front of their memorial portraits. Chinese people have always liked to tell lost loved ones what has happened since they passed away. But what if the dead could talk back? This is the proposition of at least half a dozen Chinese companies offering “AI resurrection” services. The products, costing a few hundred to a few thousand dollars, are lifelike avatars, accessed in an app or on a tablet, that let people interact with the dead as if they were still alive.

I talked to two Chinese companies that, combined, have provided this service for over 2,000 clients. They describe a growing market of people accepting the technology. Their customers usually look to the products to help them process their grief.

To read more about how these products work and the potential implications of the technology, go here.

However, what I didn’t get into in the story is that the same technology used to clone the dead has also been used in other interesting ways.

For one, this process is being applied not just to private individuals, but also to public figures. Sima Huapeng, CEO and cofounder of the Chinese company Silicon Intelligence, tells me that about one-third of the “AI resurrection” cases he has worked on involve making avatars of dead Chinese writers, thinkers, celebrities, and religious leaders. The generated product is not intended for personal mourning but more for public education or memorial purposes.

Last year, Silicon Intelligence replicated Mei Lanfang, a renowned Peking opera singer born in 1894. The avatar of Mei was commissioned to address a 2023 Peking opera festival held in his hometown, Taizhou. Mei talked about seeing how drastically Taizhou had changed through modern urban development, even though the real artist died in 1961.

But an even more interesting use of this technology is that people are using it to clone themselves while they are still alive, to preserve their memories and leave a legacy. 

Sima said this is becoming more popular among successful families that feel the need to pass on their stories. He showed me a video of an avatar the company created for a 92-year-old Chinese entrepreneur, which was displayed on a big vertical monitor screen. The entrepreneur wrote a book documenting his life, and the company only had to feed the whole book to a large language model for it to start role-playing him. “This grandpa cloned himself so he could pass on the stories of his life to the whole family. Even when he dies, he can still talk to his descendants like this,” says Sima.

Sun Kai, another cofounder of Silicon Intelligence, is also featured in my story because he made a replica of his mom, who passed away in 2019. One of his regrets is that he didn’t have enough video recordings of his mom that he could use to train her avatar to be more like her. That inspired him to start recording voice memos of his life and working on his own digital “twin,” even though, in his 40s, death still seems far away.

He compares the process to a complicated version of a photo shoot, but a digital avatar that has his looks, voice, and knowledge can preserve much more information than photographs do. 

And there’s still another use: Just as parents can spend money on an expensive photo shoot to capture their children at a specific age, they can also choose to create an AI avatar for the same purpose. “The parents tell us no matter how many photos or videos they took of their 12-year-old kid, it always felt like something was lacking. But once we digitized this kid, they could talk to the 12-year-old version of them anytime, anywhere,” Sun says.

At the end of the day, the deepfake technologies used to clone both the living and the deceased are the same. And seeing that there’s already a market in China for such services, I’m sure these companies will keep on developing more use cases for it. 

But what’s also certain is that we’d have to answer a lot more questions about the ethical challenges of these applications, from the issue of consent to violations of copyright. 

Would you make a replica of yourself if given the chance? Tell me your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Zhang Yongzhen, the first Chinese scientist to publish a sequence of the covid-19 virus, staged a protest last week over being locked out of his lab—likely a result of the Chinese government’s efforts to discourage research on covid origins. (Associated Press $)

2. Chinese president Xi Jinping is visiting Europe for five days. Half of the trip will be spent in Hungary and Serbia, the only two European countries that are welcoming Chinese investment and manufacturing. Xi is expected to announce an electric-vehicle manufacturing deal in Hungary while he’s there. (Associated Press)

3. China launched a new moon-exploring rover on Friday. It will collect samples near the moon’s south pole, an area where the US and China are competing to build permanent bases. Maybe the Netflix comedy series Space Force will look like a documentary soon. (Wall Street Journal $)

4. Huawei is secretly funding an optics research competition in the US. The act likely isn’t illegal, but it’s deceptive, since university participants, some of whom had vowed to not work with the company, didn’t know the source of the funding. (Bloomberg $)

5. China is quickly catching up on brain-computer interfaces, and there’s strong interest in using the technology for non-medical cognitive improvement. (Wired $)

6. Taiwan has been rocked by frequent earthquakes this year, and developers are racing to make earthquake warning apps that might save lives. One such app has seen user numbers increase from 3,000 to 370,000. (Reuters $)

7. Prestigious Chinese media publications, which still publish hard-hitting stories at times, are being forced to distance themselves from the highest-profile journalism award in Asia to avoid being accused by the government of “colluding with foreign forces.”  (Nikkei Asia $)

Lost in translation

While generative AI companies have taken the spotlight during the current AI frenzy, China’s older “AI Four Dragons”—four companies that rose to market prominence because of their technological lead in computer vision and facial recognition—are grappling with profit setbacks and commercialization hurdles, reports the Chinese publication Guiji Yanjiushi.

In response to these challenges, the “Dragons” have chosen different strategies. Yitu leaned further into security cameras; Megvii focused on applying computer vision in logistics and the Internet of Things; CloudWalk prioritized AI assistants; and SenseTime, the largest of them all, ventured into generative AI with its self-developed LLMs. Even though they are not as trendy as the startups, some experts believe these established players, having accumulated more computing power and AI talent over the years, may prove to be more resilient in the end.

One more thing

During this year’s Met Gala, fans were struggling to discern real photos of celebrities from AI-generated ones. To add to the confusion, some social media accounts were running real photos in AI-powered enhancement apps, which slightly distorted the images and made it even harder to tell the difference. 

One of the most widely used such apps is called Remini, but few people know that it was actually developed by a Chinese company called Caldron and later acquired by an Italian software company. Remini now has over 20 million users and is extremely profitable. Still, it seems its AI enhancement tools have a long way to go.

bestie… @2015smetgala it’s time to delete the remini app… you’ve gone too far https://t.co/Q4Aj2454U8 pic.twitter.com/yqH46EJlJd

— swiftie wins 🪶 (@swifferwins) May 7, 2024

The way whales communicate is closer to human language than we realized

Sperm whales are fascinating creatures. They possess the biggest brain of any species, six times larger than a human’s, which scientists believe may have evolved to support intelligent, rational behavior. They’re highly social, capable of making decisions as a group, and they exhibit complex foraging behavior.  

But there’s also a lot we don’t know about them, including what they may be trying to say to one another when they communicate using a system of short bursts of clicks, known as codas. Now, new research published in Nature Communications today suggests that sperm whales’ communication is actually much more expressive and complicated than was previously thought. 

A team of researchers led by Pratyusha Sharma at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) working with Project CETI, a nonprofit focused on using AI to understand whales, used statistical models to analyze whale codas and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. Their findings represent a tool future research could use to decipher not just the structure but the actual meaning of whale sounds.

The team analyzed recordings of 8,719 codas from around 60 whales collected by the Dominica Sperm Whale Project between 2005 and 2018, using a mix of algorithms for pattern recognition and classification. They found that the way the whales communicate was not random or simplistic, but structured depending on the context of their conversations. This allowed them to identify distinct vocalizations that hadn’t been previously picked up on.

Instead of relying on more complicated machine-learning techniques, the researchers chose to use classical analysis to approach an existing database with fresh eyes.

“We wanted to go with a simpler model that would already give us a basis for our hypothesis,” says Sharma.

“The nice thing about a statistics approach is that you do not have to train a model and it’s not a black box, and [the analyses are] easier to perform,”  says Felix Effenberger, a senior AI research advisor to the Earth Species Project, a nonprofit that’s researching how to decode non-human communication using AI. But he points out that machine learning is a great way to speed up the process of discovering patterns in a data set, so adopting such a method could be useful in the future.

a diver with the whale recording unit
DAN TCHERNOV/PROJECT CETI

The algorithms turned the clicks within the coda data into a new kind of data visualization the researchers call an exchange plot, revealing that some codas featured extra clicks. These extra clicks, combined with variations in the duration of their calls, appeared in interactions between multiple whales, which the researchers say suggests that codas can carry more information and possess a more complicated internal structure than we’d previously believed.
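As a rough illustration of this kind of classical analysis (a hedged sketch, not Project CETI’s actual code), a coda can be reduced from its click timestamps to a tempo (total duration) and a rhythm (intervals normalized by duration); comparing rhythms across codas then surfaces extra clicks and duration changes of the kind the paper describes:

```python
# Illustrative sketch only. Given the click timestamps of one coda,
# compute the number of clicks, its duration, and its normalized
# inter-click intervals ("rhythm"), so codas at different tempos can
# still be compared.

def coda_features(click_times):
    """Return (num_clicks, duration, normalized_intervals) for one coda."""
    intervals = [b - a for a, b in zip(click_times, click_times[1:])]
    duration = click_times[-1] - click_times[0]
    rhythm = [i / duration for i in intervals] if duration else []
    return len(click_times), duration, rhythm

# Two codas sharing one rhythm at different tempos...
slow = [0.0, 0.2, 0.4, 0.8]
fast = [0.0, 0.1, 0.2, 0.4]
# ...and one with an extra (fifth) click appended.
ornamented = [0.0, 0.2, 0.4, 0.8, 1.0]

n1, d1, r1 = coda_features(slow)
n2, d2, r2 = coda_features(fast)
n3, d3, r3 = coda_features(ornamented)
```

On this toy data, `slow` and `fast` come out with the same rhythm despite different durations, while `ornamented` stands out by its extra interval, which is the basic distinction behind the exchange plots.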

“One way to think about what we found is that people have previously been analyzing the sperm whale communication system as being like Egyptian hieroglyphics, but it’s actually like letters,” says Jacob Andreas, an associate professor at CSAIL who was involved with the project.

Although the team isn’t sure whether what it uncovered can be interpreted as the equivalent of the letters, tongue position, or sentences that go into human language, the researchers are confident that there is a lot of internal similarity between the codas they analyzed, he says.

“This in turn allowed us to recognize that there were more kinds of codas, or more kinds of distinctions between codas, that whales are clearly capable of perceiving—[and] that people just hadn’t picked up on at all in this data.”

The team’s next step is to build language models of whale calls and to examine how those calls relate to different behaviors. They also plan to work on a more general system that could be used across species, says Sharma. Taking a communication system we know nothing about, working out how it encodes and transmits information, and slowly beginning to understand what’s being communicated could have many purposes beyond whales. “I think we’re just starting to understand some of these things,” she says. “We’re very much at the beginning, but we are slowly making our way through.”

Gaining an understanding of what animals are saying to each other is the primary motivation behind projects such as these. But if we ever hope to understand what whales are communicating, there’s a large obstacle in the way: the need for experiments to prove that such an attempt can actually work, says Caroline Casey, a researcher at UC Santa Cruz who has been studying elephant seals’ vocal communication for over a decade.

“There’s been a renewed interest since the advent of AI in decoding animal signals,” Casey says. “It’s very hard to demonstrate that a signal actually means to animals what humans think it means. This paper has described the subtle nuances of their acoustic structure very well, but taking that extra step to get to the meaning of a signal is very difficult to do.”

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother. He opens up about work, the pressures he faces as a middle-aged man, and thoughts that he doesn’t even discuss with his wife. His mother will occasionally make a comment, like telling him to take care of himself—he’s her only child. But mostly, she just listens.

That’s because Sun’s mother died five years ago. And the person he’s talking to isn’t actually a person, but a digital replica he made of her—a moving image that can conduct basic conversations. They’ve been talking for a few years now. 

After she died of a sudden illness in 2019, Sun wanted to find a way to keep their connection alive. So he turned to a team at Silicon Intelligence, an AI company based in Nanjing, China, that he cofounded in 2017. He provided them with a photo of her and some audio clips from their WeChat conversations. While the company was mostly focused on audio generation, the staff spent four months researching synthetic tools and generated an avatar with the data Sun provided. Then he was able to see and talk to a digital version of his mom via an app on his phone. 

“My mom didn’t seem very natural, but I still heard the words that she often said: ‘Have you eaten yet?’” Sun recalls of the first interaction. Because generative AI was a nascent technology at the time, the replica of his mom could say only a few pre-written lines. But Sun says that’s what she was like anyway. “She would always repeat those questions over and over again, and it made me very emotional when I heard it,” he says.

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them. In fact, the avatars are the newest manifestation of a cultural tradition: Chinese people have always taken solace from confiding in the dead. 

The technology isn’t perfect—avatars can still be stiff and robotic—but it’s maturing, and more tools are becoming available through more companies. In turn, the price of “resurrecting” someone—also called creating “digital immortality” in the Chinese industry—has dropped significantly. Now this technology is becoming accessible to the general public. 

Some people question whether interacting with AI replicas of the dead is actually a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. For now, the idea still makes a lot of people uncomfortable. But as Silicon Intelligence’s other cofounder, CEO Sima Huapeng, says, “Even if only 1% of Chinese people can accept [AI cloning of the dead], that’s still a huge market.” 

AI resurrection

Avatars of the dead are essentially deepfakes: the technologies used to replicate a living person and a dead person aren’t inherently different. Diffusion models generate a realistic avatar that can move and speak. Large language models can be attached to generate conversations. The more data these models ingest about someone’s life—including photos, videos, audio recordings, and texts—the more closely the result will mimic that person, whether dead or alive.
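To make that composition concrete, here is a schematic sketch with hypothetical stub components standing in for the real models (a diffusion model would render the moving likeness, and an attached LLM would generate the replies). Everything below is illustrative; only the overall shape of the pipeline comes from the article:

```python
# Illustrative stubs, not any company's actual system: more source data
# about a person (photos, audio, texts) yields an avatar that mimics
# that person more closely, with diminishing returns.

from dataclasses import dataclass, field

@dataclass
class PersonData:
    photos: list = field(default_factory=list)
    audio_clips: list = field(default_factory=list)
    texts: list = field(default_factory=list)

    def sample_count(self):
        return len(self.photos) + len(self.audio_clips) + len(self.texts)

@dataclass
class Avatar:
    fidelity: float  # 0..1, a crude proxy for how closely it mimics

def train_avatar(data: PersonData) -> Avatar:
    # Stand-in for fine-tuning a diffusion model, a voice model, and an
    # LLM on the person's data; fidelity saturates as samples grow.
    n = data.sample_count()
    return Avatar(fidelity=n / (n + 10))

sparse = train_avatar(PersonData(photos=["p1"]))
rich = train_avatar(PersonData(photos=["p"] * 20, audio_clips=["a"] * 20,
                               texts=["t"] * 20))
```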

China has proved to be a ripe market for all kinds of digital doubles. For example, the country has a robust e-commerce sector, and consumer brands hire many livestreamers to sell products. Initially, these were real people, but as MIT Technology Review reported last fall, many brands are switching to AI-cloned influencers that can stream 24/7.

In just the past three years, the Chinese sector developing AI avatars has matured rapidly, says Shen Yang, a professor studying AI and media at Tsinghua University in Beijing, and replicas have improved from minutes-long rendered videos to 3D “live” avatars that can interact with people.  

This year, Sima says, has seen a tipping point, with AI cloning becoming affordable for most individuals. “Last year, it cost about $2,000 to $3,000, but it now only costs a few hundred dollars,” he says. That’s thanks to a price war between Chinese AI companies, which are fighting to meet the thriving demand for digital avatars in other sectors like streaming.

In fact, demand for applications that re-create the dead has also boosted the capabilities of tools that digitally replicate the living. 

Silicon Intelligence offers both services. When Sun and Sima launched the company, they were focused on using text-to-speech technologies to create audio and then using those AI-generated voices in applications such as robocalls.

But after the company replicated Sun’s mother, it pivoted to generating realistic avatars. That decision turned the company into one of the leading Chinese players creating AI-powered influencers. 

Example of the tablet product by Silicon Intelligence. The avatar of the grandma can converse with the user.
SILICON INTELLIGENCE

Its technology has generated avatars for hundreds of thousands of TikTok-like videos and streaming channels, but Sima says more recently it’s seen around 1,000 clients use it to replicate someone who’s passed away. “We started our work on ‘resurrection’ in 2019 and 2020,” he says, but at first people were slow to accept it: “No one wanted to be the first adopters.” 

The quality of the avatars has improved, he says, which has boosted adoption. When the avatar looks increasingly lifelike and gives fewer out-of-character answers, it’s easier for users to treat it as their deceased family member. Plus, the idea is getting popularized through more depictions on Chinese TV. 

Now Silicon Intelligence offers the replication service for a price between several hundred and several thousand dollars. The most basic product comes as an interactive avatar in an app, and the options at the upper end of the range often involve more customization and better hardware components, such as a tablet or a display screen. At least a handful of other Chinese companies are working on the same technology.

A modern twist on tradition

The business in these deepfakes builds on China’s long cultural history of communicating with the dead. 

In Chinese homes, it’s common to put up a portrait of a deceased relative for a few years after the death. Zhang Zewei, founder of a Shanghai-based company called Super Brain, says he and his team wanted to revamp that tradition with an “AI photo frame.” They create avatars of deceased loved ones that are pre-loaded onto an Android tablet, which looks like a photo frame when standing up. Clients can choose a moving image that speaks words drawn from an offline database or from an LLM. 

“In its essence, it’s not much different from a traditional portrait, except that it’s interactive,” Zhang says.

Zhang says the company has made digital replicas for over 1,000 clients since March 2023 and charges $700 to $1,400, depending on the service purchased. The company plans to release an app-only product soon, so that users can access the avatars on their phones; that could further reduce the cost to around $140.

Super Brain demonstrates the app-only version with an avatar of Zhang Zewei answering his own questions.
SUPER BRAIN

The purpose of his products, Zhang says, is therapeutic. “When you really miss someone or need consolation during certain holidays, you can talk to the artificial living and heal your inner wounds,” he says.

And even if that conversation is largely one-sided, that’s in keeping with a strong cultural tradition. Every April during the Qingming festival, Chinese people sweep the tombs of their ancestors, burn joss sticks and fake paper money, and tell them what has happened in the past year. Of course, those conversations have always been one-way. 

But that’s not the case for all Super Brain services. The company also offers deepfaked video calls in which a company employee or a contract therapist pretends to be the relative who passed away. Using DeepFace, an open-source tool that analyzes facial features, the deceased person’s face is reconstructed in 3D and swapped in for the live person’s face with a real-time filter. 

Example of a deepfake video call Super Brain did in July 2023. The face in the top right corner is from the deceased son of the woman.
SUPER BRAIN

At the other end of the call is usually an elderly family member who may not know that the relative has died—and whose family has arranged the conversation as a ruse. 

Jonathan Yang, a Nanjing resident who works in the tech industry, paid for this service in September 2023. His uncle died in a construction accident, but the family hesitated to tell Yang’s grandmother, who is 93 and in poor health. They worried that she wouldn’t survive the devastating news.

So Yang paid $1,350 to commission three deepfaked calls of his dead uncle. He gave Super Brain a handful of photos and videos of his uncle to train the model. Then, on three Chinese holidays, a Super Brain employee video-called Yang’s grandmother and told her, as his uncle, that he was busy working in a faraway city and wouldn’t be able to come back home, even during the Chinese New Year. 

“The effect has met my expectations. My grandma didn’t suspect anything,” Yang says. His family did have mixed opinions about the idea, because some relatives thought maybe she would have wanted to see her son’s body before it was cremated. Still, the whole family got on board in the end, believing the ruse would be best for her health. After all, it’s pretty common for Chinese families to tell “necessary” lies to avoid overwhelming seniors, as depicted in the movie The Farewell.

To Yang, a close follower of AI industry trends, creating replicas of the dead is one of the best applications of the technology. “It best represents the warmth [of AI],” he says. His grandmother’s health has improved, and there may come a day when they finally tell her the truth. By that time, Yang says, he may purchase a digital avatar of his uncle for his grandma to talk to whenever she misses him.

Is AI really good for grief? 

Even as AI cloning technology improves, there are some significant barriers preventing more people from using it to speak with their dead relatives in China. 

On the tech side, there are limitations to what AI models can generate. Most LLMs can handle dominant languages like Mandarin and Cantonese, but they aren’t able to replicate the many niche dialects in China. It’s also challenging—and therefore costly—to replicate body movements and complex facial expressions in 3D models. 

Then there’s the issue of training data. Unlike cloning someone who’s still alive, which often involves asking the person to record body movements or say certain things, posthumous AI replications must rely on whatever videos or photos are already available. And many clients don’t have high-quality data, or enough of it, for the end result to be satisfactory. 

Complicating these technical challenges are myriad ethical questions. Notably, how can someone who is already dead consent to being digitally replicated? For now, companies like Super Brain and Silicon Intelligence rely on the permission of direct family members. But what if family members disagree? And if a digital avatar generates inappropriate answers, who is responsible?

Similar technology caused controversy earlier this year. A company in Ningbo reportedly used AI tools to create videos of deceased celebrities and posted them on social media to speak to their fans. The videos were generated using public data, but without seeking any approval or permission. The result was intense criticism from the celebrities’ families and fans, and the videos were eventually taken down. 

“It’s a new domain that only came about after the popularization of AI: the rights to digital eternity,” says Shen, the Tsinghua professor, who also runs a lab that creates digital replicas of people who have passed away. He believes it should be prohibited to use deepfake technology to replicate living people without their permission. For people who have passed away, all of their immediate living family members must agree beforehand, he says. 

There could be negative effects on clients’ mental health, too. While some people, like Sun, find their conversations with avatars to be therapeutic, not everyone thinks it’s a healthy way to grieve. “The controversy lies in the fact that if we replicate our family members because we miss them, we may constantly stay in the state of mourning and can’t withdraw from it to accept that they have truly passed away,” says Shen. A widowed person who’s in constant conversation with the digital version of their partner might be held back from seeking a new relationship, for instance. 

“When someone passes away, should we replace our real emotions with fictional ones and linger in that emotional state?” Shen asks. Psychologists and philosophers who talked to MIT Technology Review about the impact of grief tech have warned about the danger of doing so. 

Sun Kai, at least, has found the digital avatar of his mom to be a comfort. She’s like a 24/7 confidante on his phone. Even though it’s possible to remake his mother’s avatar with the latest technology, he hasn’t yet done that. “I’m so used to what she looks like and sounds like now,” he says. As years have gone by, the boundary between her avatar and his memory of her has begun to blur. “Sometimes I couldn’t even tell which one is the real her,” he says.

And Sun is still okay with doing most of the talking. “When I’m confiding in her, I’m merely letting off steam. Sometimes you already know the answer to your question, but you still need to say it out loud,” he says. “My conversations with my mom have always been like this throughout the years.” 

But now, unlike before, he gets to talk to her whenever he wants to.

How I learned to stop worrying and love fake meat

Fixing our collective meat problem is one of the trickiest challenges in addressing climate change—and for some baffling reason, the world seems intent on making the task even harder.

The latest example occurred last week, when Florida governor Ron DeSantis signed a law banning the production, sale, and transportation of cultured meat across the Sunshine State. 

“Florida is fighting back against the global elite’s plan to force the world to eat meat grown in a petri dish or bugs to achieve their authoritarian goals,” DeSantis seethed in a statement.

Alternative meat and animal products—be they lab-grown or plant-based—offer a far more sustainable path to mass-producing protein than raising animals for milk or slaughter. Yet again and again, politicians, dietitians, and even the press continue to devise ways to portray these products as controversial, suspect, or substandard. No matter how good they taste or how much they might reduce greenhouse-gas emissions, there’s always some new obstacle standing in the way—in this case, Governor DeSantis, wearing a not-at-all-uncomfortable smile.  

The new law clearly has nothing to do with the creeping threat of authoritarianism (though for more on that, do check out his administration’s crusade to ban books about gay penguins). First and foremost it is an act of political pandering, a way to coddle Florida’s sizable cattle industry, which he goes on to mention in the statement.

Cultured meat is seen as a threat to the livestock industry because animals are only minimally involved in its production. Companies grow cells originally extracted from animals in a nutrient broth and then form them into nuggets, patties, or fillets. The US Department of Agriculture has already given its blessing to two companies, Upside Foods and Good Meat, to begin selling cultured chicken products to consumers. Israel recently became the first nation to sign off on a beef version.

It’s still hard to say if cultured meat will get good enough and cheap enough anytime soon to meaningfully reduce our dependence on cattle, chicken, pigs, sheep, goats, and other animals for our protein and our dining pleasure. And it’s sure to take years before we can produce it in ways that generate significantly lower emissions than standard livestock practices today.

But there are high hopes it could become a cleaner and less cruel way of producing meat, since it wouldn’t require all the land, food, and energy needed to raise, feed, slaughter, and process animals today. One study found that cultured meat could reduce emissions per kilogram of meat by 92% by 2030, even if cattle farming also achieves substantial improvements.

Those sorts of gains are essential if we hope to ease the rising dangers of climate change, because meat, dairy, and cheese production are huge contributors to greenhouse-gas emissions.

DeSantis and politicians in other states that may follow suit, including Alabama and Tennessee, are raising the specter of mandated bug-eating and global-elite string-pulling to turn cultured meat into a cultural issue, and kill the industry in its infancy. 

But, again, it’s always something. I’ve heard a host of other arguments across the political spectrum directed against various alternative protein products, which also include plant-based burgers, cheeses, and milks, or even cricket-derived powders and meal bars. Apparently these meat and dairy alternatives shouldn’t be highly processed, mass-produced, or genetically engineered, nor should they ever be as unhealthy as their animal-based counterparts. 

In effect, we are setting up tests that almost no products can pass, when really all we should ask of alternative proteins is that they be safe, taste good, and cut climate pollution.

The meat of the matter

Here’s the problem. 

Livestock production generates more than 7 billion tons of carbon dioxide equivalent a year, making up 14.5% of the world’s overall climate emissions, according to the United Nations Food and Agriculture Organization.

Beef, milk, and cheese production are, by far, the biggest problems, representing some 65% of the sector’s emissions. We burn down carbon-dense forests to provide cows with lots of grazing land; then they return the favor by burping up staggering amounts of methane, one of the most powerful greenhouse gases. Florida’s cattle population alone, for example, could generate about 180 million pounds of methane every year, as calculated from standard per-animal emissions.
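That last figure is easy to sanity-check with back-of-the-envelope arithmetic. The inputs below are illustrative assumptions (roughly 900,000 head of cattle and roughly 90 kg of enteric methane per animal per year), not the article’s exact sources:

```python
# Rough sanity check of the ~180 million lb/year methane figure,
# using assumed (illustrative) herd size and per-animal emissions.
KG_PER_LB = 0.45359237

head = 900_000          # assumed herd size
ch4_kg_per_head = 90    # assumed enteric methane, kg per animal per year

total_kg = head * ch4_kg_per_head
total_lb = total_kg / KG_PER_LB
print(f"{total_lb / 1e6:.0f} million lb of methane per year")
```

With these assumptions the total lands in the neighborhood of 180 million pounds, consistent with the figure above.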

In an earlier paper, the World Resources Institute noted that in the average US diet, beef contributed 3% of the calories but almost half the climate pollution from food production. (If you want to take a single action that could meaningfully ease your climate footprint, read that sentence again.)

The added challenge is that the world’s population is both growing and becoming richer, which means more people can afford more meat. 

There are ways to address some of the emissions from livestock production without cultured meat or plant-based burgers, including developing supplements that reduce methane burps and encouraging consumers to simply reduce meat consumption. Even just switching from beef to chicken can make a huge difference.

Let’s clear up one matter, though. I can’t imagine a politician in my lifetime, in the US or most of the world, proposing a ban on meat and expecting to survive the next election. So no, dear reader. No one’s coming for your rib eye. If there’s any attack on personal freedoms and economic liberty here, DeSantis is the one waging it by not allowing Floridians to choose for themselves what they want to eat.

But there is a real problem in need of solving. And the grand hope of companies like Beyond Meat, Upside Foods, Miyoko’s Creamery, and dozens of others is that we can develop meat, milk, and cheese alternatives that are akin to EVs: that is to say, products that are good enough to solve the problem without demanding any sacrifice from consumers or requiring government mandates. (Though subsidies always help.)

The good news is the world is making some real progress in developing substitutes that increasingly taste like, look like, and have (with apologies for the snooty term) the “mouthfeel” of the traditional versions, whether they’ve been developed from animal cells or plants. If these products catch on and scale up, they could make a real dent in emissions—with the bonus of reducing animal suffering, environmental damage, and the spillover of animal disease into the human population.

The bad news is we can’t seem to take the wins when we get them. 

The blue cheese blues

For lunch last Friday, I swung by the Butcher’s Son Vegan Delicatessen & Bakery in Berkeley, California, and ordered a vegan Buffalo chicken sandwich with a blue cheese on the side that was developed by Climax Foods, also based in Berkeley.

Late last month, it emerged that the product had, improbably, clinched the cheese category in the blind taste tests of the prestigious Good Food Awards, as the Washington Post revealed.

Let’s pause here to note that this is a stunning victory for vegan cheeses, a clear sign that we can use plants to produce top-notch artisanal products, indistinguishable even to the refined palates of expert gourmands. If a product is every bit as tasty and satisfying as the original but can be produced without milking methane-burping animals, that’s a big climate win.

But sadly, that’s not where the story ended.

JAMES TEMPLE

After word leaked out that the blue cheese was a finalist, if not the winner, the Good Food Foundation seems to have added a rule that didn’t exist when the competition began but which disqualified Climax Blue, the Post reported.

I have no special insights into what unfolded behind the scenes. But it reads at least a little as if the competition concocted an excuse to dethrone a vegan cheese that had bested its animal counterparts and left traditionalists aghast. 

That victory might have done wonders to help promote acceptance of the Climax product, if not the wider category. But now the story is the controversy. And that’s a shame. Because the cheese is actually pretty good. 

I’m no professional foodie, but I do have a lifetime of expertise born of stubbornly refusing to eat any salad dressing other than blue cheese. In my own taste test, I can report it looked and tasted like mild blue cheese, which is all it needs to do.

A beef about burgers

Banning a product or changing a cheese contest’s rules after determining the winner are both bad enough. But the reaction to alternative proteins that has left me most befuddled is the media narrative that formed around the latest generation of plant-based burgers soon after they started getting popular a few years ago. Story after story would note, in the tone of a bold truth-teller revealing something new each time: Did you know these newfangled plant-based burgers aren’t actually all that much healthier than the meat variety? 

To which I would scream at my monitor: THAT WAS NEVER THE POINT!

The world has long been perfectly capable of producing plant-based burgers that are better for you, but the problem is that they tend to taste like plants. The actual innovation with the more recent options like Beyond Burger or Impossible Burger is that they look and taste like the real thing but can be produced with a dramatically smaller climate footprint.

That’s a big enough win in itself. 

If I were a health reporter, maybe I’d focus on these issues too. And if health is your personal priority, you should shop for a different plant-based patty (or I might recommend a nice salad, preferably with blue cheese dressing).

But speaking as a climate reporter, expecting a product to ease global warming, taste like a juicy burger, and also be low in salt, fat, and calories is absurd. You may as well ask a startup to conduct sorcery.

More important, making a plant-based burger healthier for us may also come at the cost of having it taste like a burger. Which would make it that much harder to win over consumers beyond the niche of vegetarians and thus have any meaningful impact on emissions. WHICH IS THE POINT!

It’s incredibly difficult to convince consumers to switch brands and change behaviors, even for a product as basic as toothpaste or toilet paper. Food is trickier still, because it’s deeply entwined with local culture, family traditions, festivals, and celebrations. Whether we find a novel food product to be yummy or yucky is subjective and highly subject to suggestion.

And so I’m ending with a plea. Let’s grant ourselves the best shot possible at solving one of the hardest, most urgent problems before us. Treat bans and political posturing with the ridicule they deserve. Reject the argument that any single product must, or can, solve all the problems related to food, health, and the environment.

Give these alternative foods a shot, afford them room to improve, and keep an open mind. 

Though it’s cool if you don’t want to try the crickets.

The Download: synthetic cow embryos, and AI jobs of the future

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Scientists are trying to get cows pregnant with synthetic embryos

About a decade ago, biologists started to observe that stem cells, left alone in a walled plastic container, will spontaneously self-assemble and try to make an embryo. These structures, sometimes called “embryo models” or embryoids, have gradually become increasingly realistic.

The University of Florida is trying to create a large animal starting only from stem cells—no egg, no sperm, and no conception. They’ve transferred “synthetic embryos,” artificial structures created in a lab, to the uteruses of eight cows in the hope that some might take.

At the Florida center, researchers are now attempting to go all the way. They want to make a live animal. If they do, it wouldn’t just be a totally new way to breed cattle. It could shake our notion of what life even is. Read the full story.

—Antonio Regalado

Job titles of the future: AI prompt engineer

The role of AI prompt engineer attracted attention for its high-six-figure salaries when it emerged in early 2023. Companies define it in different ways, but its principal aim is to help a company integrate AI into its operations. 

Danai Myrtzani of Sleed, a digital marketing agency in Greece, describes herself as more prompter than engineer. She joined the company in March 2023 as one of two experts on its new experimental-AI team, and has helped develop a tool that generates personalized LinkedIn posts for clients. Here’s what she has to say about her work.

—Charlie Metcalfe

The story is from the current print issue of MIT Technology Review, which is on the fascinating theme of Build. If you don’t already, subscribe now to receive future copies once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple has been working on its own secretive chip project
Its new chip is likely to focus on running, rather than training, AI models. (WSJ $)
+ The US will sink $285 million into digital twin chip research. (The Verge)
+ This US startup makes a crucial chip material and is taking on a Japanese giant. (MIT Technology Review)

2 The US campus protests are unfolding on Twitch
The platform, best known for video game streaming, is gaining traction among young people dissatisfied with the mainstream media. (WP $)
+ Rubber bullets are seriously dangerous, and can kill their targets. (Slate $)

3 China and the US will meet to discuss AI arms controls
It’s America’s first real step into an entirely new realm of 21st-century diplomacy. (NYT $)
+ To avoid AI doom, learn from nuclear safety. (MIT Technology Review)

4 Russia is plotting violent sabotage across Europe
Experts are unsure if the Kremlin is getting sloppier, or Western detection methods are improving. (FT $)
+ Autocrats are attempting to discredit liberalism across the world. (The Atlantic $)
+ China is believed to be behind a cyberattack on the UK defense ministry. (Bloomberg $)
+ Ukraine’s foreign ministry has revealed an AI spokesperson. (The Guardian)

5 NASA refuses to let Voyager 1 die
The space agency is remotely hacking the space probe in the hope of fixing it. (IEEE Spectrum)

6 CRISPR’s progress is hampered by genetics research’s lack of diversity
Many genetic databases and biobanks are highly unrepresentative of the wider population. (Vox)
+ I received the new gene-editing drug for sickle-cell disease. It changed my life. (MIT Technology Review)

7 This app is helping fishermen in South Africa sell their wares
Abalobi is a real-time marketplace that also helps to monitor fish populations. (The Guardian)

8 Nintendo’s next console is coming 🕹
The Switch went on sale in 2017. But what’s coming next? (Reuters)
+ We may never fully know how video games affect our well-being. (MIT Technology Review)

9 How tech is supercharging rap beefs
Social media and platforms like YouTube are creating conflicts out of thin air. (Wired $)

10 An MMA fighter turned TikTok food critic is saving struggling restaurants 🍔
Keith Lee’s viral reviews are turning around the fortunes of small businesses. (Bloomberg $)
+ Is TikTok in its flop era? Some younger users think so. (The Guardian)

Quote of the day

“You want to be on the golf course like, ‘Hey, I own some SpaceX.’”

—Jeff Parks, chief executive of investment firm Stack Capital, tells the New York Times how obtaining shares in certain companies has become something of a status symbol.

The big story

Think that your plastic is being recycled? Think again.

October 2023

The problem of plastic waste hides in plain sight, a ubiquitous part of our lives we rarely question. But a closer examination of the situation is shocking. To date, humans have created around 11 billion metric tons of plastic. Some 72% of the plastic we make ends up in landfills or the environment, and only 9% of the plastic ever produced has been recycled.

To make matters worse, plastic production is growing dramatically; in fact, half of all plastics in existence have been produced in just the last two decades. Production is projected to continue growing, at about 5% annually. 

So what do we do? Sadly, solutions such as recycling and reuse aren’t equal to the scale of the task. The only answer is drastic cuts in production in the first place. Read the full story.

—Douglas Main

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s the day after the Met Gala! Time to judge all the outfits.
+ We love you, Lola the therapy sausage dog.
+ Forget tomatoes—this summer is all about growing cucamelons.
+ This prehistoric themed house party is on a whole other level.

Scientists are trying to get cows pregnant with synthetic embryos

It was a cool morning at the beef teaching unit in Gainesville, Florida, and cow #307 was bucking in her metal cradle as the arm of a student perched on a stool disappeared into her cervix. The arm held a squirt bottle of water.

Seven other animals stood nearby behind a railing; it would be their turn next to get their uterus flushed out. As soon as the contents of #307’s womb spilled into a bucket, a worker rushed it to a small laboratory set up under the barn’s corrugated gables.

“It’s something!” said a postdoc named Hao Ming, dressed in blue overalls and muck boots, corralling a pink wisp of tissue under the lens of a microscope. But then he stepped back, not as sure. “It’s hard to tell.”

The experiment, at the University of Florida, is an attempt to create a large animal starting only from stem cells—no egg, no sperm, and no conception. A week earlier, “synthetic embryos,” artificial structures created in a lab, had been transferred to the uteruses of all eight cows. Now it was time to see what had grown.

About a decade ago, biologists started to observe that stem cells, left alone in a walled plastic container, will spontaneously self-assemble and try to make an embryo. These structures, sometimes called “embryo models” or embryoids, have gradually become increasingly realistic. In 2022, a lab in Israel grew the mouse version in a jar until cranial folds and a beating heart appeared.

At the Florida center, researchers are now attempting to go all the way. They want to make a live animal. If they do, it wouldn’t just be a totally new way to breed cattle. It could shake our notion of what life even is. “There has never been a birth without an egg,” says Zongliang “Carl” Jiang, the reproductive biologist heading the project. “Everyone says it is so cool, so important, but show me more data—show me it can go into a pregnancy. So that is our goal.”

For now, success isn’t certain, mostly because lab-made embryos generated from stem cells still aren’t exactly like the real thing. They’re more like an embryo seen through a fun-house mirror: the right parts, but in the wrong proportions. That’s why these are being flushed out after just a week—so the researchers can check how far they’ve grown and learn how to make better ones.

“The stem cells are so smart they know what their fate is,” says Jiang. “But they also need help.”

So far, most research on synthetic embryos has involved mouse or human cells, and it’s stayed in the lab. But last year Jiang, along with researchers in Texas, published a recipe for making a bovine version, which they called “cattle blastoids” for their resemblance to blastocysts, the stage of the embryo suitable for IVF procedures.  

Some researchers think that stem-cell animals could be as big a deal as Dolly the sheep, whose birth in 1996 brought cloning technology to barnyards. Cloning, in which an adult cell is placed in an egg, has allowed scientists to copy mice, cattle, pet dogs, and even polo ponies. The players on one Argentine team all ride clones of the same champion mare, named Dolfina.

Synthetic embryos are clones, too—of the starting cells you grow them from. But they’re made without the need for eggs and can be created in far larger numbers—in theory, by the tens of thousands. And that’s what could revolutionize cattle breeding. Imagine that each year’s calves were all copies of the most muscled steer in the world, perfectly designed to turn grass into steak.

“I would love to see this become cloning 2.0,” says Carlos Pinzón-Arteaga, the veterinarian who spearheaded the laboratory work in Texas. “It’s like Star Wars with cows.”

Endangered species

Industry has started to circle around. A company called Genus PLC, which specializes in assisted reproduction of “genetically superior” pigs and cattle, has begun buying patents on synthetic embryos. This year it started funding Jiang’s lab to support his effort, locking up a commercial option to any discoveries he might make.

Zoos are interested too. With many endangered animals, assisted reproduction is difficult. And with recently extinct ones, it’s impossible. All that remains is some tissue in a freezer. But this technology could, theoretically, blow life back into these specimens—turning them into embryos, which could be brought to term in a surrogate of a sister species.

But there’s an even bigger—and stranger—reason to pay attention to Jiang’s effort to make a calf: several labs are creating super-realistic synthetic human embryos as well. It’s an ethically charged arena, particularly given recent changes in US abortion laws. Although these human embryoids are considered nonviable—mere “models” that are fair game for research—all that could change quickly if the Florida project succeeds.

“If it can work in an animal, it can work in a human,” says Pinzón-Arteaga, who is now working at Harvard Medical School. “And that’s the Black Mirror episode.”

Industrial embryos

Three weeks before cow #307 stood in the dock, she and seven other heifers had been given stimulating hormones, to trick their bodies into thinking they were pregnant. After that, Jiang’s students had loaded blastoids into a straw they used like a popgun to shoot them towards each animal’s oviducts.

Many researchers think that if a stem-cell animal is born, the first one is likely to be a mouse. Mice are cheap to work with and reproduce fast. And one team has already grown a synthetic mouse embryo for eight days in an artificial womb—a big step, since a mouse pregnancy lasts only three weeks.

But bovines may not be far behind. There’s a large assisted-reproduction industry in cattle, with more than a million IVF attempts a year, half of them in North America. Many other beef and dairy cattle are artificially inseminated with semen from top-rated bulls. “Cattle is harder,” says Jiang. “But we have all the technology.”

Inspecting a “synthetic” embryo that gestated in a cow for a week at the University of Florida, Gainesville.
ANTONIO REGALADO

The thing that came out of cow #307 turned out to be damaged, just a fragment. But later that day, in Jiang’s main laboratory, students were speed-walking across the linoleum holding something in a petri dish. They’d retrieved intact embryonic structures from some of the other cows. These looked long and stringy, like worms, or the skin shed by a miniature snake.

That’s precisely what a two-week-old cattle embryo should look like. But the outer appearance is deceiving, Jiang says. After staining chemicals are added, the specimens are put under a microscope. Then the disorder inside them is apparent. These “elongated structures,” as Jiang calls them, have the right parts—cells of the embryonic disc and placenta—but nothing is in quite the right place.

“I wouldn’t call them embryos yet, because we still can’t say if they are healthy or not,” he says. “Those lineages are there, but they are disorganized.”

Cloning 2.0

Jiang demonstrated how the blastoids are grown in a plastic plate in his lab. First, his students deposit stem cells into narrow tubes. In confinement, the cells begin communicating and very quickly start trying to form a blastoid. “We can generate hundreds of thousands of blastoids. So it’s an industrial process,” he says. “It’s really simple.”

That scalability is what could make blastoids a powerful replacement for cloning technology. Cattle cloning is still a tricky process, which only skilled technicians can manage, and it requires eggs, too, which come from slaughterhouses. But unlike blastoids, cloning is well established and actually works, says Cody Kime, R&D director at Trans Ova Genetics, in Sioux Center, Iowa. Each year, his company clones thousands of pigs as well as hundreds of prize-winning cattle.

“A lot of people would like to see a way to amplify the very best animals as easily as you can,” Kime says. “But blastoids aren’t functional yet. The gene expression is aberrant to the point of total failure. The embryos look blurry, like someone sculpted them out of oatmeal or Play-Doh. It’s not the beautiful thing that you expect. The finer details are missing.”

This spring, Jiang learned that the US Department of Agriculture shared that skepticism, when it rejected his application for $650,000 in funding. “I got criticism: ‘Oh, this is not going to work.’ That this is high risk and low efficiency,” he says. “But to me, this would change the entire breeding program.”

One problem may be the starting cells. Jiang uses bovine embryonic stem cells—taken from cattle embryos. But these stem cells aren’t quite as versatile as they need to be. For instance, to make the first cattle blastoids, the team in Texas had to add a second type of cell, one that can make a placenta.

What’s needed instead are specially prepared “naïve” cells that are better poised to form the entire conceptus—both the embryo and placenta. Jiang showed me a PowerPoint with a large grid of different growth factors and lab conditions he is testing. Growing stem cells in different chemicals can shift the pattern of genes that are turned on. The latest batch of blastoids, he says, were made using a newer recipe and only needed to start with one type of cell.

Slaughterhouse

Jiang can’t say how long it will be before he makes a calf. His immediate goal is a pregnancy that lasts 30 days. If a synthetic embryo can grow that long, he thinks, it could go all the way, since “most pregnancy loss in cattle is in the first month.”

For a project to reinvent reproduction, Jiang’s budget isn’t particularly large, and he frets about the $2-a-day bill to feed each of his cows. During a tour of UF’s animal science department, he opened the door to a slaughter room, a vaulted space with tracks and chains overhead, where a man in a slicker was running a hose. It smelled like freshly cleaned blood.

Reproductive biologist Carl Jiang leads an effort to make animals from stem cells. The cow stands in a “hydraulic squeeze chute” while its uterus is checked.
ANTONIO REGALADO

This is where cow #307 ended up. After about 20 embryo transfers over three years, her cervix was worn out, and she came here. She was butchered, her meat wrapped and labeled, and sold to the public at market prices from a small shop at the front of the building. It’s important to everyone at the university that the research subjects aren’t wasted. “They are food,” says Jiang.

But there’s still a limit to how many cows he can use. He had 18 fresh heifers ready to join the experiment, but what if only 1% of embryos ever develop correctly? That would mean he’d need 100 surrogate mothers to see anything. It reminds Jiang of the first attempts at cloning: Dolly the sheep was one of 277 tries, and the others went nowhere. “How soon it happens may depend on industry. They have a lot of animals. It might take 30 years without them,” he says.

“It’s going to be hard,” agrees Peter Hansen, a distinguished professor in Jiang’s department. “But whoever does it first …” He lets the thought hang. “In vitro breeding is the next big thing.”

Human question

Cattle aren’t the only species in which researchers are checking the potential of synthetic embryos to keep developing into fetuses. Researchers in China have transplanted synthetic embryos into the wombs of monkeys several times. A report in 2023 found that the transplants caused hormonal signals of pregnancy, although no monkey fetus emerged.

Because monkeys are primates, like us, such experiments raise an obvious question. Will a lab somewhere try to transfer a synthetic embryo to a person? In many countries that would be illegal, and scientific groups say such an experiment should be strictly forbidden.

This summer, research leaders were alarmed by a media frenzy around reports of super-realistic models of human embryos that had been created in labs in the UK and Israel—some of which seemed to be nearly perfect mimics. To quell speculation, in June the International Society for Stem Cell Research, a powerful science and lobbying group, put out a statement declaring that the models “are not embryos” and “cannot and will not develop to the equivalent of postnatal stage humans.”

Some researchers worry that was a reckless thing to say. That’s because the statement would be disproved, biologically, as soon as any kind of stem-cell animal is born. And many top scientists expect that to happen. “I do think there is a pathway. Especially in mice, I think we will get there,” says Jun Wu, who leads the research group at UT Southwestern Medical Center, in Dallas, that collaborated with Jiang. “The question is, if that happens, how will we handle a similar technology in humans?”

Jiang says he doesn’t think anyone is going to make a person from stem cells. And he’s certainly not interested in doing so. He’s just a cattle researcher at an animal science department. “Scientists belong to society, and we need to follow ethical guidelines. So we can’t do it. It’s not allowed,” he says. “But in large animals, we are allowed. We’re encouraged. And so we can make it happen.”

The Download: the cancer vaccine renaissance, and working towards a decarbonized future

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Cancer vaccines are having a renaissance

Last week, Moderna and Merck launched a large clinical trial in the UK of a promising new cancer therapy: a personalized vaccine that targets a specific set of mutations found in each individual’s tumor. This study is enrolling patients with melanoma. But the companies have also launched a phase III trial for lung cancer. And earlier this month BioNTech and Genentech announced that a personalized vaccine they developed in collaboration shows promise in pancreatic cancer, which has a notoriously poor survival rate.

Drug developers have been working for decades on vaccines to help the body’s immune system fight cancer, without much success. But promising results in the past year suggest that the strategy may be reaching a turning point. Will these therapies finally live up to their promise? Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

How we transform to a fully decarbonized world

Deb Chachra is a professor of engineering at Olin College of Engineering in Needham, Massachusetts, and the author of How Infrastructure Works: Inside the Systems That Shape Our World.

Just as much as technological breakthroughs, it’s the availability of energy that has shaped our material world. The exponential rise in fossil-fuel usage over the past century and a half has powered novel, energy-intensive modes of extracting, processing, and consuming matter, at unprecedented scale.

But now, the cumulative environmental, health, and social impacts of this approach have become unignorable. We can see them nearly everywhere we look, from the health effects of living near highways or oil refineries to the ever-growing issue of plastic, textile, and electronic waste. 

Decarbonizing our energy systems means meeting human needs without burning fossil fuels and releasing greenhouse gases into the atmosphere. The good news is that a world powered by electricity from abundant, renewable, non-polluting sources is now within reach. Read the full story.

The story is from the current print issue of MIT Technology Review, which is on the fascinating theme of Build. If you don’t already, subscribe now to receive future copies once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US adversaries are exploiting the university protests for their own gain
Russia, China and Iran are amplifying the conflicts to stoke political tensions online. (NYT $)
+ Universities are under intense political scrutiny. (Vox)
+ The Biden administration’s patience with protestors appears to have run out. (The Atlantic $)

2 China is preparing to launch an ambitious moon mission 🚀
Its bid to bring back samples from the far side of the moon would be a major leap forward for its national space program. (CNN)
+ It would be the first time any country managed to pull it off, too. (WP $)

3 We don’t know how Big Tech’s AI investments will affect profits
Profits are up—but for how long? (The Information $)
+ Make no mistake—AI is owned by Big Tech. (MIT Technology Review)

4 An Australian facial recognition firm suffered a data breach
It demonstrates the importance of safeguarding personal biometric data properly. (Wired $)

5 China’s race to create a native ChatGPT is heating up
Four startups are locked in intense competition to emulate OpenAI’s success. (FT $)
+ Four things to know about China’s new AI rules in 2024. (MIT Technology Review)

6 One of America’s biggest podcasts is chock-full of misleading information
A cohort of scientists have raised concerns with Andrew Huberman’s show’s omission of key scientific details. (Vox)

7 Recyclable circuit boards could help us cut down on e-waste
Because conventional circuits are an environmental menace. (IEEE Spectrum)
+ If you fancy giving a supercomputer a second home, here’s your chance. (Wired $)
+ Why recycling alone can’t power climate tech. (MIT Technology Review)

8 Facebook has become the zombie internet
The social network ain’t so social these days. (404 Media)

9 Boston Dynamics loves freaking us out 🤖
We’ve been obsessed with their uncanny videos for more than a decade. (The Atlantic $)
+ But robots might need to become more boring to be useful. (MIT Technology Review)

10 Human models are letting AI do all the hard work
They’re signing over the rights to their likeness and raking in the passive income. (WSJ $)

Quote of the day

“They’re slow as Christmas getting things done.”

—Jerry Whisenhunt, general manager of Pine Telephone Company in Oklahoma, tells the Washington Post of his frustration with Washington bureaucrats who ordered providers like him to remove China-made equipment from their networks without providing funding.

The big story

Zimbabwe’s climate migration is a sign of what’s to come

December 2021

Julius Mutero has spent his entire adult life farming a three-hectare plot in Zimbabwe, but has harvested virtually nothing in the past six years. He is just one of the 86 million people in sub-Saharan Africa who the World Bank estimates will migrate domestically by 2050 because of climate change.

In Zimbabwe, farmers who have tried to stay put and adapt have found their efforts woefully inadequate in the face of new weather extremes. Droughts have already forced tens of thousands from their homes. But their desperate moves are creating new competition for water in the region, and tensions may soon boil over. Read the full story.

—Andrew Mambondiyani

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Some breads are surprisingly easy to make—but all equally delicious.
+ Aww, these frogs sure love their baby tadpoles. 🐸
+ Trees are wonderful. These books celebrate all they do for us.
+ We’re all praying for the safe return of Wally the emotional support alligator.

Cancer vaccines are having a renaissance

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, Moderna and Merck launched a large clinical trial in the UK of a promising new cancer therapy: a personalized vaccine that targets a specific set of mutations found in each individual’s tumor. This study is enrolling patients with melanoma. But the companies have also launched a phase III trial for lung cancer. And earlier this month BioNTech and Genentech announced that a personalized vaccine they developed in collaboration shows promise in pancreatic cancer, which has a notoriously poor survival rate.

Drug developers have been working for decades on vaccines to help the body’s immune system fight cancer, without much success. But promising results in the past year suggest that the strategy may be reaching a turning point. Will these therapies finally live up to their promise?

This week in The Checkup, let’s talk cancer vaccines. (And, you guessed it, mRNA.)

Long before companies leveraged mRNA to fight covid, they were developing mRNA vaccines to combat cancer. BioNTech delivered its first mRNA vaccines to people with treatment-resistant melanoma nearly a decade ago. But when the pandemic hit, development of mRNA vaccines jumped into warp drive. Now dozens of trials are underway to test whether these shots can transform cancer the way they did covid. 

Recent news has some experts cautiously optimistic. In December, Merck and Moderna announced results from an earlier trial that included 150 people with melanoma who had undergone surgery to have their cancer removed. Doctors administered nine doses of the vaccine over about six months, as well as what’s known as an immune checkpoint inhibitor. After three years of follow-up, the combination had cut the risk of recurrence or death by almost half compared with the checkpoint inhibitor alone.

The new results reported by BioNTech and Genentech, from a small trial of 16 patients with pancreatic cancer, are equally exciting. After surgery to remove the cancer, the participants received immunotherapy, followed by the cancer vaccine and a standard chemotherapy regimen. Half of them responded to the vaccine, and three years after treatment, six of those people still had not had a recurrence of their cancer. The other two had relapsed. Of the eight participants who did not respond to the vaccine, seven had relapsed. Some of these patients might not have responded because they lacked a spleen, which plays an important role in the immune system. The organ was removed as part of their cancer treatment.

The hope is that the strategy will work in many different kinds of cancer. In addition to pancreatic cancer, BioNTech’s personalized vaccine is being tested in colorectal cancer, melanoma, and metastatic cancers.

The purpose of a cancer vaccine is to train the immune system to better recognize malignant cells, so it can destroy them. The immune system has the capacity to clear cancer cells if it can find them. But tumors are slippery. They can hide in plain sight and employ all sorts of tricks to evade our immune defenses. And cancer cells often look like the body’s own cells because, well, they are the body’s own cells.

There are differences between cancer cells and healthy cells, however. Cancer cells acquire mutations that help them grow and survive, and some of those mutations give rise to proteins that stud the surface of the cell—so-called neoantigens.

Personalized cancer vaccines like the ones Moderna and BioNTech are developing are tailored to each patient’s particular cancer. The researchers collect a piece of the patient’s tumor and a sample of healthy cells. They sequence these two samples and compare them in order to identify mutations that are specific to the tumor. Those mutations are then fed into an AI algorithm that selects those most likely to elicit an immune response. Together these neoantigens form a kind of police sketch of the tumor, a rough picture that helps the immune system recognize cancerous cells. 

“A lot of immunotherapies stimulate the immune response in a nonspecific way—that is, not directly against the cancer,” said Patrick Ott, director of the Center for Personal Cancer Vaccines at the Dana-Farber Cancer Institute, in a 2022 interview.  “Personalized cancer vaccines can direct the immune response to exactly where it needs to be.”

How many neoantigens do you need to create that sketch?  “We don’t really know what the magical number is,” says Michelle Brown, vice president of individualized neoantigen therapy at Moderna. Moderna’s vaccine has 34. “It comes down to what we could fit on the mRNA strand, and it gives us multiple shots to ensure that the immune system is stimulated in the right way,” she says. BioNTech is using 20.

The neoantigens are put on an mRNA strand and injected into the patient. From there, they are taken up by cells and translated into proteins, and those proteins are expressed on the cell’s surface, raising an immune response.

mRNA isn’t the only way to teach the immune system to recognize neoantigens. Researchers are also delivering neoantigens as DNA, as peptides, or via immune cells or viral vectors. And many companies are working on “off the shelf” cancer vaccines that aren’t personalized, which would save time and expense. Out of about 400 ongoing clinical trials assessing cancer vaccines last fall, roughly 50 included personalized vaccines.

There’s no guarantee any of these strategies will pan out. Even if they do, success in one type of cancer doesn’t automatically mean success against all. Plenty of cancer therapies have shown enormous promise initially, only to fail when they’re moved into large clinical trials.

But the burst of renewed interest and activity around cancer vaccines is encouraging. And personalized vaccines might have a shot at succeeding where others have failed. The strategy makes sense for “a lot of different tumor types and a lot of different settings,” Brown says. “With this technology, we really have a lot of aspirations.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

mRNA vaccines transformed the pandemic. But they can do so much more. In this feature from 2023, Jessica Hamzelou covered the myriad other uses of these shots, including fighting cancer. 

This article from 2020 covers some of the background on BioNTech’s efforts to develop personalized cancer vaccines. Adam Piore had the story.

Years before the pandemic, Emily Mullin wrote about early efforts to develop personalized cancer vaccines—the promise and the pitfalls. 

From around the web

Yes, there’s bird flu in the nation’s milk supply. About one in five samples had evidence of the H5N1 virus. But new testing by the FDA suggests that the virus is unable to replicate. Pasteurization works! (NYT)

Studies in which volunteers are deliberately infected with covid—so-called challenge trials—have been floated as a way to test drugs and vaccines, and even to learn more about the virus. But it turns out it’s tougher to infect people than you might think. (Nature)

When should women get their first mammogram to screen for breast cancer? It’s a matter of hot debate. In 2009, an expert panel raised the age from 40 to 50. This week they lowered it to 40 again in response to rising cancer rates among younger women. Women with an average risk of breast cancer should get screened every two years, the panel says. (NYT)

Wastewater surveillance helped us track covid. Why not H5N1? A team of researchers from New York argues it might be our best tool for monitoring the spread of this virus. (Stat)

Long read: This story looks at how AI could help us better understand how babies learn language, and focuses on the lab I covered in this story about an AI model trained on the sights and sounds experienced by a single baby. (NYT)

The Download: Sam Altman on AI’s killer function, and the problem with ethanol

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Sam Altman says helpful agents are poised to become AI’s killer function

Sam Altman, CEO of OpenAI, has a vision for how AI tools will become enmeshed in our daily lives. 

During a sit-down chat with MIT Technology Review in Cambridge, Massachusetts, he described how he sees the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.”

In the new paradigm, as Altman sees it, AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. Read more about Altman’s thoughts on the future of AI hardware, where training data will come from next, and who is best poised to create AGI.

—James O’Donnell

A US push to use ethanol as aviation fuel raises major climate concerns

Eliminating carbon pollution from aviation is one of the most challenging parts of the climate puzzle, simply because large commercial airlines are too heavy and need too much power during takeoff for today’s batteries to do the job. 

But one way that companies and governments are striving to make progress is through the use of various types of sustainable aviation fuels (SAFs), which are derived from non-petroleum sources and promise to be less polluting than standard jet fuel.

This week, the US announced a push to help its biggest commercial crop, corn, become a major feedstock for SAFs. It could set the template for programs in the future that may help ethanol producers generate more and more SAFs. But that is already sounding alarm bells among some observers. Read the full story.

—James Temple

Three takeaways about the current state of batteries

Batteries have been making headlines this week. First, there’s a new special report from the International Energy Agency all about how crucial batteries are for our future energy systems. The report calls batteries a “master key,” meaning they can unlock the potential of other technologies that will help cut emissions.

Second, we’re seeing early signs in California of how the technology might be earning that “master key” status already by helping renewables play an even bigger role on the grid. 

Our climate reporter Casey Crownhart has rounded up the three things you need to know about the current state of batteries—and what’s to come. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 These tech moguls are planning how to construct AI rules for Trump
They helped draft and promote TikTok ban legislation—and AI is next on their agenda. (WP $)
+ Ted Kaouk is the US markets regulator’s first AI officer. (WSJ $)
+ A new AI security bill would create a record of data breaches. (The Verge)
+ Here’s where AI regulation is heading. (MIT Technology Review)

2 Crypto’s grifters insist they’ve learned their lesson
But the state of the industry suggests they’ll make the same mistakes over again. (Bloomberg $)

3 Good luck tracking down these AI chips
South Korean chip supplier SK Hynix says it’s sold out for the year. (WSJ $)
+ It’s almost fully booked throughout 2025, too. (Bloomberg $)
+ Why it’s so hard for China’s chip industry to become self-sufficient. (MIT Technology Review)

4 Universal Music Group has struck a deal with TikTok 
The label’s music was pulled from the platform three months ago. (Variety $)
+ Taylor Swift, Olivia Rodrigo, and Drake are among its high-profile roster. (The Verge)

5 Ukraine is bootstrapping its own killer-drone industry
Effectively creating air-bound bombs in lieu of more sophisticated long-range missiles. (Wired $)
+ Mass-market military drones have changed the way wars are fought. (MIT Technology Review)

6 The US asylum border app is stranding vulnerable migrants
Its scarce appointments leave asylum seekers with little choice but to pay human trafficking groups. (The Guardian)
+ The new US border wall is an app. (MIT Technology Review)

7 Things aren’t looking good for Volocopter
The flying taxi startup is holding crisis talks with investors. (FT $)
+ These aircraft could change how we fly. (MIT Technology Review)

8 Describing quantum systems is a time-consuming process
A new algorithm could help to dramatically speed things up. (Quanta Magazine)

9 What Reddit’s ‘Am I the Asshole?’ forum can teach philosophers
It’s an undoubtedly brave endeavor. (Vox)

10 The web’s home page refuses to die
Social media is imploding, but the humble website prevails. (New Yorker $)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“Whomever they choose, they king-make.”

— Satya Nadella, Microsoft’s CEO, describes the stranglehold Apple exercises over the companies vying to provide the iPhone’s default search engine, Bloomberg reports.

The big story

Can Afghanistan’s underground “sneakernet” survive the Taliban?

November 2021

When Afghanistan fell to the Taliban, Mohammad Yasin had to make some difficult decisions very quickly. He began erasing some of the sensitive data on his computer and moving the rest onto two of his largest hard drives, which he then wrapped in a layer of plastic and buried underground.

Yasin is what is locally referred to as a “computer kar”: someone who sells digital content by hand in a country where a steady internet connection can be hard to come by, offering everything from movies and music to mobile applications and iOS updates. And despite the dangers of Taliban rule, the country’s extensive “sneakernet” isn’t planning on shutting down. Read the full story.

—Ruchi Kumar

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ There is nothing more terrifying than a ‘boy room.’
+ These chocolate limes look beyond delicious (and seriously convincing!) 🍋🟩
+ Drake is beefing with everyone—but why?
+ Here’s how to calm that eternal to-do list in your head.

Three takeaways about the current state of batteries

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Batteries are on my mind this week. (Aren’t they always?) But I’ve got two extra reasons to be thinking about them today. 

First, there’s a new special report from the International Energy Agency all about how crucial batteries are for our future energy systems. The report calls batteries a “master key,” meaning they can unlock the potential of other technologies that will help cut emissions. Second, we’re seeing early signs in California of how the technology might be earning that “master key” status already by helping renewables play an even bigger role on the grid. So let’s dig into some battery data together. 

1) Battery storage in the power sector was the fastest-growing commercial energy technology on the planet in 2023

Deployment doubled over the previous year’s figures, hitting nearly 42 gigawatts. That includes utility-scale projects as well as projects installed “behind the meter,” meaning they sit on the customer’s side of the utility meter, at a home or business, rather than connecting directly to the grid.

Over half the additions in 2023 were in China, which has been the leading market in batteries for energy storage for the past two years. Growth is faster there than the global average, and installations tripled from 2022 to last year. 

One driving force of this quick growth in China is that some provincial policies require developers of new solar and wind power projects to pair them with a certain level of energy storage, according to the IEA report.

Intermittent renewables like wind and solar have grown rapidly in China and around the world, and the technologies are beginning to help clean up the grid. But these storage requirement policies reveal the next step: installing batteries to help unlock the potential of renewables even during times when the sun isn’t shining and the wind isn’t blowing. 

2) Batteries are starting to show exactly how they’ll play a crucial role on the grid.

When there are small amounts of renewables, it’s not all that important to have storage available, since the sun’s rising and setting will cause little more than blips in the overall energy mix. But as the share increases, some of the challenges with intermittent renewables become very clear. 

We’ve started to see this play out in California. Renewables are able to supply nearly all the grid’s energy demand during the day on sunny days. The problem is just how different the picture is at noon and just eight hours later, once the sun has gone down. 

In the middle of the day, there’s so much solar power available that gigawatts are basically getting thrown away. Electricity prices can actually go negative. Then, later on, renewables quickly fall off, and other sources like natural gas need to ramp up to meet demand. 

But energy storage is starting to catch up and make a dent in smoothing out that daily variation. On April 16, for the first time, batteries were the single greatest power source on the grid in California during part of the early evening, just as solar fell off for the day. (Look for the bump in the darkest line on the graph above—it happens right after 6 p.m.)

Batteries have reached this number-one status several more times over the past few weeks, a sign that the energy storage now installed—10 gigawatts’ worth—is beginning to play a part in a balanced grid. 

3) We need to build a lot more energy storage. Good news: batteries are getting cheaper.

While early signs show just how important batteries can be in our energy system, we still need gobs more to actually clean up the grid. If we’re going to be on track to cut greenhouse-gas emissions to zero by midcentury, we’ll need to increase battery deployment sevenfold. 

The good news is the technology is becoming increasingly economical. Battery costs have fallen drastically, dropping 90% since 2010, and they’re not done yet. According to the IEA report, battery costs could fall an additional 40% by the end of this decade. Those further cost declines would make solar projects with battery storage cheaper to build than new coal power plants in India and China, and cheaper than new gas plants in the US. 
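Note that the two cost declines compound rather than add. On an arbitrary index (not actual dollars-per-kilowatt-hour figures), the arithmetic works out as:

```python
# Compounding the cost declines cited above (arbitrary index, not
# actual $/kWh figures).
cost_2010 = 100.0
cost_today = cost_2010 * (1 - 0.90)   # 90% drop since 2010
cost_2030 = cost_today * (1 - 0.40)   # a further 40% drop by decade's end
total_decline = 1 - cost_2030 / cost_2010
print(round(cost_today), round(cost_2030), round(total_decline * 100))
# → 10 6 94  (a 94% total decline versus 2010, not 130%)
```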

Batteries won’t be the magic miracle technology that cleans up the entire grid. Other sources of low-carbon energy that are more consistently available, like geothermal, or able to ramp up and down to meet demand, like hydropower, will be crucial parts of the energy system. But I’m interested to keep watching just how batteries contribute to the mix. 


Now read the rest of The Spark

Related reading

Some companies are looking beyond lithium for stationary energy storage. Dig into the prospects for sodium-based batteries in this story from last year.

Lithium-sulfur technology could unlock cheaper, better batteries for electric vehicles that can go farther on a single charge. I covered one company trying to make them a reality earlier this year.

Two engineers in lab coats monitor the thermal battery powering a conveyor belt of bottles
SIMON LANDREIN

Another thing

Thermal batteries are so hot right now. In fact, readers chose the technology as our 11th Breakthrough Technology of 2024.

To celebrate, we’re hosting an online event in a couple of weeks for subscribers. We’ll dig into why thermal batteries are so interesting and why this is a breakthrough moment for the technology. It’s going to be a lot of fun, so subscribe if you haven’t already and then register here to join us on May 16 at noon Eastern time.

You’ll be able to submit a question when you register—please do that so I know what you want to hear about! See you there! 

Keeping up with climate  

New rules that force US power plants to slash emissions could effectively spell the end of coal power in the country. Here are five things to know about the regulations. (New York Times)

Wind farms use less land than you might expect. Turbines really take up only a small fraction of the land where they’re sited, and co-locating projects with farms or other developments can help reduce environmental impact. (Washington Post)

The fourth reactor at Plant Vogtle in Georgia officially entered commercial operation this week. The new reactor will provide electricity for up to 500,000 homes and businesses. (Axios)

A new factory will be the first full-scale plant to produce sodium-ion batteries in the US. The chemistry could provide a cheaper alternative to the standard lithium-ion chemistry and avoid material constraints. (Bloomberg)

→ I wrote about the potential for sodium-based batteries last year. (MIT Technology Review)

Tesla has apparently laid off a huge portion of its charging team. The move comes as the company’s charging port has been adopted by most major automakers. (The Verge)

A vegan cheese was up for a major food award. Then, things got messy. (Washington Post)

→ For a look at how Climax Foods makes its plant-based cheese with AI, check out this story from our latest magazine issue. (MIT Technology Review)

Someday mining might be done with … seaweed? Early research is looking into using seaweed to capture and concentrate high-value metals. (Hakai)

The planet’s oceans contain enormous amounts of energy. Harnessing it is an early-stage industry, but some proponents argue there’s a role for wave and tidal power technologies. (Undark)

Why new ethanol aviation fuel tax subsidies aren’t a clear climate win

Eliminating carbon pollution from aviation is one of the most challenging parts of the climate puzzle, simply because large commercial airlines are too heavy and need too much power during takeoff for today’s batteries to do the job. 

But one way that companies and governments are striving to make some progress is through the use of various types of sustainable aviation fuels (SAFs), which are derived from non-petroleum sources and promise to be less polluting than standard jet fuel.

This week, the US announced a push to help its biggest commercial crop, corn, become a major feedstock for SAFs. 

Federal guidelines announced on April 30 provide a pathway for ethanol producers to earn SAF tax credits within the Inflation Reduction Act, President Biden’s signature climate law, when the fuel is produced from corn or soy grown on farms that adopt certain sustainable agricultural practices.

It’s a limited pilot program, since the subsidy itself expires at the end of this year. But it could set the template for programs in the future that may help ethanol producers generate more and more SAFs, as the nation strives to produce billions of gallons of those fuels per year by 2030. 

Consequently, the so-called Climate Smart Agricultural program has already sounded alarm bells among some observers, who fear that the federal government is both overestimating the emissions benefits of ethanol and assigning too much credit to the agricultural practices in question. Those include cover crops, no-till techniques that minimize soil disturbances, and use of “enhanced-efficiency fertilizers,” which are designed to increase uptake by plants and thus reduce runoff into the environment.

The IRA offers a tax credit of $1.25 per gallon for SAFs that are 50% lower in emissions than standard jet fuel, and as much as 50 cents per gallon more for sustainable fuels that are cleaner still. The new program can help corn- or soy-based ethanol meet that threshold when the source crops are produced using some or all of those agricultural practices.

Since the vast majority of US ethanol is produced from corn, let’s focus on the issues around that crop. To get technical, the program allows ethanol producers to subtract 10 grams of carbon dioxide per megajoule of energy, a measure of carbon intensity, from the life-cycle emissions of the fuel when it’s generated from corn produced with all three of the practices mentioned. That’s about an eighth to a tenth of the carbon intensity of gasoline.
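To make the arithmetic concrete, here is a hedged sketch of the eligibility check. The 89 g CO2e/MJ jet-fuel baseline and the one-cent-per-percentage-point structure of the supplemental credit are assumptions drawn from common summaries of the IRA provision, not figures stated in this article.

```python
# Hypothetical sketch of the SAF credit math described above.
JET_FUEL_BASELINE = 89.0   # g CO2e/MJ; assumed baseline, not from the article
PRACTICE_CREDIT = 10.0     # g CO2e/MJ subtracted for all three farm practices

def saf_credit(fuel_ci: float, climate_smart: bool = False) -> float:
    """Per-gallon credit in dollars; 0.0 if the fuel misses the 50% bar."""
    ci = fuel_ci - (PRACTICE_CREDIT if climate_smart else 0.0)
    reduction = 1.0 - ci / JET_FUEL_BASELINE
    if reduction < 0.50:
        return 0.0
    # $1.25 base, plus an assumed one cent per percentage point beyond
    # 50%, capped at the extra $0.50 the article mentions.
    return 1.25 + min(0.50, 0.01 * int((reduction - 0.50) * 100))

# An ethanol SAF at 50 g CO2e/MJ misses the bar on its own, but the
# 10-gram practice credit pushes it over.
print(saf_credit(50.0))                      # fails the 50% threshold
print(saf_credit(50.0, climate_smart=True))  # qualifies with the practice credit
```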

Ethanol’s questionable climate footprint

Today, US-generated ethanol is mainly mixed with gasoline. But ethanol producers are eager to develop new markets for the product as electric vehicles make up a larger share of the cars and trucks on the road. Not surprisingly, then, industry trade groups applauded the announcement this week.

The first concern with the new program, however, is that the emissions benefits of corn-based ethanol have been hotly debated for decades.

Corn, like any plant that uses photosynthesis to produce food, sucks up carbon dioxide from the air. But using corn for fuel rather than food also creates pressure to clear more land for farming, a process that releases carbon dioxide from plants and soil. In addition, planting, fertilizing, and harvesting corn produce climate pollution as well, and the same is true of refining, distributing, and burning ethanol. 

For its analyses under the new program, the Treasury Department intends to use an updated version of the so-called GREET model, developed by the Department of Energy’s Argonne National Lab, to evaluate the life-cycle emissions of SAFs. A 2021 study from the lab, relying on that model, concluded that US corn ethanol produced as much as 52% less greenhouse gas than gasoline. 

But some researchers and nonprofits have criticized the tool for accepting low estimates of the emissions impacts of land-use changes, among other issues. Other assessments of ethanol emissions have been far more damning.

A 2022 EPA analysis surveyed the findings from a variety of models that estimate the life-cycle emissions of corn-based ethanol and found that in seven out of 20 cases, they exceeded 80% of the climate pollution from gasoline and diesel.

Moreover, the three most recent estimates from those models found ethanol emissions surpassed even the higher-end estimates for gasoline or diesel, Alison Cullen, chair of the EPA’s science advisory board, noted in a 2023 letter to the administrator of the agency.

“Thus, corn starch ethanol may not meet the definition of a renewable fuel” under the federal law that mandates the use of biofuels in the market, she wrote. If so, it’s then well short of the 50% threshold required by the IRA, and some say it’s not clear that the farming practices laid out this week could close the gap.

Agricultural practices

Nikita Pavlenko, who leads the fuels team at the International Council on Clean Transportation, a nonprofit research group, asserted in an email that the climate-smart agricultural provisions “are extremely sloppy” and “are not substantiated.” 

He said the Department of Energy and Department of Agriculture especially “put their thumbs on the scale” on the question of land-use changes, using estimates of soy and corn emissions that were 33% to 55% lower than those produced for a program associated with the UN’s International Civil Aviation Organization.

He finds that ethanol sourced from farms using these agriculture practices will still come up short of the IRA’s 50% threshold, and that producers may have to take additional steps to curtail emissions, potentially including adding carbon capture and storage to ethanol facilities or running operations on renewables like wind or solar.

Freya Chay, a program lead at CarbonPlan, which evaluates the scientific integrity of carbon removal methods and other climate actions, says that these sorts of agricultural practices can provide important benefits, including improving soil health, reducing erosion, and lowering the cost of farming. But she and others have stressed that confidently determining when certain practices actually and durably increase carbon in soil is “exceedingly complex” and varies widely depending on soil type, local climate conditions, past practices, and other variables.

One recent study of no-till practices found that the carbon benefits quickly fade away over time and reach nearly zero in 14 years. If so, this technique would do little to help counter carbon emissions from fuel combustion, which can persist in the atmosphere for centuries or more.

“US policy has a long history of asking how to continue justifying investment in ethanol rather than taking a clear-eyed approach to evaluating whether or not ethanol helps us reach our climate goals,” Chay wrote in an email. “In this case, I think scrutiny is warranted around the choice to lean on agricultural practices with uncertain and variable benefits in a way that could unlock the next tranche of public funding for corn ethanol.”

There are many other paths for producing SAFs that are or could be less polluting than ethanol. For example, they can be made from animal fats, agriculture waste, forest trimmings, or non-food plants that grow on land unsuitable for commercial crops. Other companies are developing various types of synthetic fuels, including electrofuels produced by capturing carbon from plants or the air and then combining it with cleanly sourced hydrogen. 

But all these methods are much more expensive than extracting and refining fossil fuels, and most of the alternative fuels will still produce more emissions when they’re used than the amount that was pulled out of the atmosphere by the plants or processes in the first place. 

The best way to think of these fuels is arguably as a stopgap, a possible way to make some climate progress while smart people strive to develop and build fully emissions-free ways of quickly, safely, and reliably moving things and people around the globe.

Sam Altman says helpful agents are poised to become AI’s killer function

A number of moments from my brief sit-down with Sam Altman brought the OpenAI CEO’s worldview into clearer focus. The first was when he pointed to my iPhone SE (the one with the home button that’s mostly hated) and said, “That’s the best iPhone.” More revealing, though, was the vision he sketched for how AI tools will become even more enmeshed in our daily lives than the smartphone.

“What you really want,” he told MIT Technology Review, “is just this thing that is off helping you.” Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to. 

It’s a leap from OpenAI’s current offerings. Its leading applications, like DALL-E, Sora, and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next), have wowed us with their ability to generate convincing text and surreal videos and images. But they mostly remain tools we use for isolated tasks, and they have limited capacity to learn about us from our conversations with them. 

In the new paradigm, as Altman sees it, the AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. 

Altman on AI hardware’s future 

I asked Altman if we’ll need a new piece of hardware to get to this future. Though smartphones are extraordinarily capable, and their designers are already incorporating more AI-driven features, some entrepreneurs are betting that the AI of the future will require a device that’s more purpose built. Some of these devices are already beginning to appear in his orbit. There is the (widely panned) wearable AI Pin from Humane, for example (Altman is an investor in the company but has not exactly been a booster of the device). He is also rumored to be working with former Apple designer Jony Ive on some new type of hardware. 

But Altman says there’s a chance we won’t need a new device at all. “I don’t think it will require a new piece of hardware,” he told me, adding that the type of app envisioned could exist in the cloud. Still, he quickly added that even if this AI paradigm shift doesn’t require consumers to buy new hardware, “I think you’ll be happy to have [a new device].” 

Though Altman says he thinks AI hardware devices are exciting, he also implied he might not be best suited to take on the challenge himself: “I’m very interested in consumer hardware for new technology. I’m an amateur who loves it, but this is so far from my expertise.”

On the hunt for training data

Upon hearing his vision for powerful AI-driven agents, I wondered how it would square with the industry’s current scarcity of training data. To build GPT-4 and other models, OpenAI has scoured internet archives, newspapers, and blogs for training data, since scaling laws have long shown that making models bigger also makes them better. But finding more data to train on is a growing problem. Much of the internet has already been slurped up, and access to private or copyrighted data is now mired in legal battles. 

Altman is optimistic this won’t be a problem for much longer, though he didn’t articulate the specifics. 

“I believe, but I’m not certain, that we’re going to figure out a way out of this thing of you always just need more and more training data,” he says. “Humans are existence proof that there is some other way to [train intelligence]. And I hope we find it.”

On who will be poised to create AGI

OpenAI’s central vision has long revolved around the pursuit of artificial general intelligence (AGI), or an AI that can reason as well as or better than humans. Its stated mission is to ensure such a technology “benefits all of humanity.” It is far from the only company pursuing AGI, however. So in the race for AGI, what are the most important tools? I asked Altman if he thought the entity that marshals the largest amount of chips and computing power will ultimately be the winner. 

There will be “several different versions [of AGI] that are better and worse at different things,” Altman suspects. “You’ll have to be over some compute threshold, I would guess. But even then I wouldn’t say I’m certain.”

On when we’ll see GPT-5

You thought he’d answer that? When another reporter in the room asked Altman if he knew when the next version of GPT is slated to be released, he gave a calm response. “Yes,” he replied, smiling, and said nothing more. 

The Download: mysterious radio energy from outer space, and banning TikTok

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the quest to map the universe with mysterious bursts of radio energy

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived in June 2022 and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. But despite the mystery, these radio waves are starting to prove extraordinarily useful. Read the full story.

—Anna Kramer

The depressing truth about TikTok’s impending ban

Trump’s 2020 executive order banning TikTok came to nothing in the end. Yet the idea—that the US government should ban TikTok in some way—never went away. It would repeatedly be suggested in different forms and shapes. And eventually, on April 24, 2024, things came full circle with the bill passed in Congress and signed into law.

A lot has changed in those four years. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms. But if the TikTok saga tells us anything, it’s that the US is increasingly inhospitable for Chinese companies. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter covering tech and policy in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Changpeng Zhao has been sentenced to just four months in prison
The crypto exchange founder got off pretty lightly after pleading guilty to a money-laundering violation. (The Verge)
+ The US Department of Justice had sought a three-year sentence. (The Guardian)

2 Tesla has gutted its charging team
Which is extremely bad news for those reliant on its massive charging network. (NYT $)
+ And more layoffs may be coming down the road. (The Information $)
+ Why getting more EVs on the road is all about charging. (MIT Technology Review)

3 A group of newspapers joined forces to sue OpenAI 
It comes just after the AI firm signed a deal with the Financial Times to use its articles as training data for its models. (WP $)
+ Meanwhile, Google is working with News Corp to fund new AI content. (The Information $)
+ OpenAI’s hunger for data is coming back to bite it. (MIT Technology Review)

4 Worldcoin is thriving in Argentina
The cash it offers in exchange for locals’ biometric data is a major incentive as unemployment in the country bites. (Rest of World)
+ Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users. (MIT Technology Review)

5 Bill Gates’ shadow looms large over Microsoft
The company’s AI revolution is no accident. (Insider $)

6 It’s incredibly difficult to turn off a car’s location tracking
Domestic abuse activists worry the technology plays into abusers’ hands. (The Markup)
+ Regulators are paying attention. (NYT $)

7 Brain monitors have a major privacy problem
Many of them sell your neural data without asking additional permission. (New Scientist $)
+ How your brain data could be used against you. (MIT Technology Review)

8 ECMO machines are a double-edged sword
They help keep critically ill patients alive. But at what cost? (New Yorker $)

9 How drones are helping protect wildlife from predators
So long as wolves stop trying to play with the drones, that is. (Undark Magazine)

10 This plastic contains bacteria that’ll break it down
It has the unusual side-effect of making the plastic even stronger, too. (Ars Technica)
+ Think that your plastic is being recycled? Think again. (MIT Technology Review)

Quote of the day

“I have constantly been looking ahead for the next thing that’s going to crush all my dreams and the stuff that I built.”

—Tony Northrup, a stock image photographer, explains to the Wall Street Journal that generative AI is finally killing an industry that weathered the advent of digital cameras and the internet.

The big story

A new tick-borne disease is killing cattle in the US

November 2021

In the spring of 2021, Cynthia and John Grano, who own a cattle operation in Culpeper County, Virginia, started noticing some of their cows slowing down and acting “spacey.” They figured the animals were suffering from a common infectious disease that causes anemia in cattle. But their veterinarian had warned them that another disease carried by a parasite was spreading rapidly in the area.

After a third cow died, the Granos decided to test its blood. Sure enough, the test came back positive for the disease: theileria. And with no treatment available, the cows kept dying.

Livestock producers around the US are confronting this new and unfamiliar disease without much information, and researchers still don’t know how theileria will unfold, even as it quickly spreads west across the country. Read the full story.

—Britta Lokting

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This Instagram account documenting the weird and wonderful world of Beanie Babies is the perfect midweek pick-me-up.
+ Challengers is great—but have you seen the rest of the best sports films?
+ This human fruit machine is killing me.
+ Evan Narcisse is a giant in the video games world.

The depressing truth about TikTok’s impending ban

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Allow me to indulge in a little reflection this week. Last week, the divest-or-ban TikTok bill was passed in Congress and signed into law. Four years ago, when I was just starting to report on the world of Chinese technologies, one of my first stories was about very similar news: President Donald Trump announcing he’d ban TikTok. 

That 2020 executive order came to nothing in the end—it was blocked in the courts, put aside after the presidency changed hands, and eventually withdrawn by the Biden administration. Yet the idea—that the US government should ban TikTok in some way—never went away. It would repeatedly be suggested in different forms and shapes. And eventually, on April 24, 2024, things came full circle.

A lot has changed in the four years between these two news cycles. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms, the originator of a generation-defining content medium, and a music-industry juggernaut. 

What has also changed is my outlook on the issue. For a long time, I thought TikTok would find a way out of the political tensions, but I’m increasingly pessimistic about its future. And I have even less hope for other Chinese tech companies trying to go global. If the TikTok saga tells us anything, it’s that their Chinese roots will be scrutinized forever, no matter what they do.

I don’t believe TikTok has become a larger security threat now than it was in 2020. There have always been issues with the app, like potential operational influence by the Chinese government, the black-box algorithms that produce unpredictable results, and the fact that parent company ByteDance never managed to separate the US side and the China side cleanly, despite efforts (one called Project Texas) to store and process American data locally. 

But none of those problems got worse over the last four years. And interestingly, while discussions in 2020 still revolved around potential remedies like setting up data centers in the US to store American data or having an organization like Oracle audit operations, those kinds of fixes are not in the law passed this year. As long as it still has Chinese owners, the app is not permissible in the US. The only thing it can do to survive here is transfer ownership to a US entity. 

That’s the cold, hard truth not only for TikTok but for other Chinese companies too. In today’s political climate, any association with China and the Chinese government is seen as unacceptable. It’s a far cry from the 2010s, when Chinese companies could dream about developing a killer app and finding audiences and investors around the globe—something many did pull off. 

There’s something I wrote four years ago that still rings true today: TikTok is the bellwether for Chinese companies trying to go global. 

The majority of Chinese tech giants, like Alibaba, Tencent, and Baidu, operate primarily within China’s borders. TikTok was the first to gain mass popularity in lots of other countries across the world and become part of daily life for people outside China. To many Chinese startups, it showed that the hard work of trying to learn about foreign countries and users can eventually pay off, and it’s worth the time and investment to try.

On the other hand, if even TikTok can’t get itself out of trouble, with all the resources that ByteDance has, is there any hope for the smaller players?

When TikTok found itself in trouble, the initial reaction of these other Chinese companies was to conceal their roots, hoping they could avoid attention. During my reporting, I’ve encountered multiple companies that fret about being described as Chinese. “We are headquartered in Boston,” one would say, while everyone in China openly talked about its product as the overseas version of a Chinese app.

But with all the political back-and-forth about TikTok, I think these companies are also realizing that concealing their Chinese associations doesn’t work—and it may make them look even worse if it leaves users and regulators feeling deceived.

With the new divest-or-ban bill, I think these companies are getting a clear signal that it’s not the technical details that matter—only their national origin. The same worry is spreading to many other industries, as I wrote in this newsletter last week. Even in the climate and renewable power industries, the presence of Chinese companies is becoming increasingly politicized. They, too, are finding themselves scrutinized more for their Chinese roots than for the actual products they offer.

Obviously, none of this is good news to me. When they feel unwelcome in the US market, Chinese companies don’t feel the need to talk to international media anymore. Without these vital conversations, it’s even harder for people in other countries to figure out what’s going on with tech in China.

Instead of banning TikTok because it’s Chinese, maybe we should go back to focus on what TikTok did wrong: why certain sensitive political topics seem deprioritized on the platform; why Project Texas has stalled; how to make the algorithmic workings of the platform more transparent. These issues, instead of whether TikTok is still controlled by China, are the things that actually matter. It’s a harder path to take than just banning the app entirely, but I think it’s the right one.

Do you believe the TikTok ban will go through? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Facing the possibility of a total ban on TikTok, influencers and creators are making contingency plans. (Wired $)

2. TSMC has brought hundreds of Taiwanese employees to Arizona to build its new chip factory. But the company is struggling to bridge cultural and professional differences between American and Taiwanese workers. (Rest of World)

3. The US secretary of state, Antony Blinken, met with Chinese president Xi Jinping during a visit to China this week. (New York Times $)

  • Here’s the best way to describe these recent US-China diplomatic meetings: “The US and China talk past each other on most issues, but at least they’re still talking.” (Associated Press)

4. Half of Russian companies’ payments to China are made through middlemen in Hong Kong, Central Asia, or the Middle East to evade sanctions. (Reuters $)

5. A massive auto show is taking place in Beijing this week, with domestic electric vehicles unsurprisingly taking center stage. (Associated Press)

  • Meanwhile, Elon Musk squeezed in a quick trip to China and met with his “old friend” the Chinese premier Li Qiang, who was believed to have facilitated establishing the Gigafactory in Shanghai. (BBC)
  • Tesla may finally get a license to deploy its autopilot system, which it calls Full Self Driving, in China after agreeing to collaborate with Baidu. (Reuters $)

6. Beijing has hosted two rival Palestinian political groups, Hamas and Fatah, to talk about potential reconciliation. (Al Jazeera)

Lost in translation

The Chinese dubbing community is grappling with the impacts of new audio-generating AI tools. According to the Chinese publication ACGx, for a new audio drama, a music company licensed the voice of the famous dubbing actor Zhao Qianjing and used AI to transform it into multiple characters and voice the entire script. 

But online, this wasn’t really celebrated as an advancement for the industry. Beyond criticizing the quality of the audio drama (saying it still doesn’t sound like real humans), dubbers are worried about the replacement of human actors and increasingly limited opportunities for newcomers. Other than this new audio drama, there have been several examples in China where AI audio generation has been used to replace human dubbers in documentaries and games. E-book platforms have also allowed users to choose different audio-generated voices to read out the text. 

One more thing

While in Beijing, Antony Blinken visited a record store and bought two vinyl records—one by Taylor Swift and another by the Chinese rock star Dou Wei. Many Chinese (and American!) people learned for the first time that Blinken had previously been in a rock band.

Inside the quest to map the universe with mysterious bursts of radio energy

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived on June 10, 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

But like the others that came before, it was otherwise a mystery. No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. Some appear from within our galaxy, others from previously unexamined depths of the universe. Some repeat in cyclical patterns for days at a time and then vanish; others have been consistently repeating every few days since we first identified them. Most never repeat at all. 

Despite the mystery, these radio waves are starting to prove extraordinarily useful. By the time our telescopes detect them, they have passed through clouds of hot, rippling plasma, through gas so diffuse that particles barely touch each other, and through our own Milky Way. And every time they hit the free electrons floating in all that stuff, the waves shift a little bit. The ones that reach our telescopes carry with them a smeary fingerprint of all the ordinary matter they’ve encountered between wherever they came from and where we are now. 

This makes fast radio bursts, or FRBs, invaluable tools for scientific discovery—especially for astronomers interested in the very diffuse gas and dust floating between galaxies, which we know very little about. 

“We don’t know what they are, and we don’t know what causes them. But it doesn’t matter. This is the tool we would have constructed and developed if we had the chance to be playing God and create the universe,” says Stuart Ryder, an astronomer at Macquarie University in Sydney and the lead author of the Science paper that reported the record-breaking burst. 

Many astronomers now feel confident that finding more such distant FRBs will enable them to create the most detailed three-dimensional cosmological map ever made—what Ryder likens to a CT scan of the universe. Even just five years ago, making such a map might have seemed an intractable technical challenge: spotting an FRB and then recording enough data to determine where it came from is extraordinarily difficult because most of that work must happen in the few milliseconds before the burst passes.

But that challenge is about to be obliterated. By the end of this decade, a new generation of radio telescopes and related technologies coming online in Australia, Canada, Chile, California, and elsewhere should transform the effort to find FRBs—and help unpack what they can tell us. What was once a series of serendipitous discoveries will become something that’s almost routine. Not only will astronomers be able to build out that new map of the universe, but they’ll have the chance to vastly improve our understanding of how galaxies are born and how they change over time. 

Where’s the matter?

In 1998, astronomers counted up the weight of all of the identified matter in the universe and got a puzzling result. 

We know that about 5% of the total weight of the universe is made up of baryons like protons and neutrons—the particles that make up atoms, or all the “stuff” in the universe. (The other 95% includes dark energy and dark matter.) But the astronomers managed to locate only about 2.5%, not 5%, of the universe’s total. “They counted the stars, black holes, white dwarfs, exotic objects, the atomic gas, the molecular gas in galaxies, the hot plasma, etc. They added it all up and wound up at least a factor of two short of what it should have been,” says Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and an expert in analyzing the light in the early universe. “It’s embarrassing. We’re not actively observing half of the matter in the universe.”

All those missing baryons were a serious problem for simulations of how galaxies form, how our universe is structured, and what happens as it continues to expand. 

Astronomers began to speculate that the missing matter exists in extremely diffuse clouds of what’s known as the warm–hot intergalactic medium, or WHIM. Theoretically, the WHIM would contain all that unobserved material. After the 1998 paper was published, Prochaska committed himself to finding it. 

But nearly 10 years of his life and about $50 million in taxpayer money later, the hunt was going very poorly.

That search had focused largely on picking apart the light from distant galactic nuclei and studying x-ray emissions from tendrils of gas connecting galaxies. The breakthrough came in 2007, when Prochaska was sitting on a couch in a meeting room at the University of California, Santa Cruz, reviewing new research papers with his colleagues. There, amid the stacks of research, sat the paper reporting the discovery of the first FRB.

Duncan Lorimer and David Narkevic, astronomers at West Virginia University, had discovered a recording of an energetic radio wave unlike anything previously observed. The wave lasted for less than five milliseconds, and its spectral lines were very smeared and distorted, unusual characteristics for a radio pulse that was also brighter and more energetic than other known transient phenomena. The researchers concluded that the wave could not have come from within our galaxy, meaning that it had traveled some unknown distance through the universe. 

Here was a signal that had traversed long distances of space, been shaped and affected by electrons along the way, and had enough energy to be clearly detectable despite all the stuff it had passed through. There are no other signals we can currently detect that commonly occur throughout the universe and have this exact set of traits.

“I saw that and I said, ‘Holy cow—that’s how we can solve the missing-baryons problem,’” Prochaska says. Astronomers had used a similar technique with the light from pulsars—spinning neutron stars that beam radiation from their poles—to count electrons in the Milky Way. But pulsars are too dim to illuminate more of the universe. FRBs were thousands of times brighter, offering a way to use that technique to study space well beyond our galaxy.

This visualization of large-scale structure in the universe shows galaxies (bright knots) and the filaments of material between them.
NASA/NCSA UNIVERSITY OF ILLINOIS VISUALIZATION BY FRANK SUMMERS, SPACE TELESCOPE SCIENCE INSTITUTE, SIMULATION BY MARTIN WHITE AND LARS HERNQUIST, HARVARD UNIVERSITY

There’s a catch, though: in order for an FRB to be an indicator of what lies in the seemingly empty space between galaxies, researchers have to know where it comes from. If you don’t know how far the FRB has traveled, you can’t make any definitive estimate of what space looks like between its origin point and Earth. 

Astronomers couldn’t even point to the direction the 2007 FRB came from, let alone calculate the distance it had traveled. It was detected by an enormous single-dish radio telescope at the Parkes Observatory (now called the Murriyang) in New South Wales, which is great at picking up incoming radio waves but can pinpoint FRBs only to an area of the sky as large as the full moon. For the next decade, telescopes continued to identify FRBs without providing a precise origin, making them a fascinating mystery but not practically useful.

Then, in 2015, one particular radio wave flashed—and then flashed again. Over two months of observation with the Arecibo telescope in Puerto Rico, the radio waves came again and again, flashing 10 times. This was the first repeating FRB ever observed (a mystery in its own right), and the repetition finally gave researchers a chance to home in on where the bursts had begun.

In 2017, that’s what happened. The researchers obtained an accurate position for the fast radio burst using the NRAO Very Large Array telescope in central New Mexico. Armed with that position, the researchers then used the Gemini optical telescope in Hawaii to take a picture of the location, revealing the galaxy where the FRB had begun and how far it had traveled. “That’s when it became clear that at least some of these we’d get the distance for. That’s when I got really involved and started writing telescope proposals,” Prochaska says. 

That same year, astronomers from across the globe gathered in Aspen, Colorado, to discuss the potential for studying FRBs. Researchers debated what caused them. Neutron stars? Magnetars, neutron stars with such powerful magnetic fields that they emit x-rays and gamma rays? Merging galaxies? Aliens? Did repeating FRBs and one-offs have different origins, or could there be some other explanation for why some bursts repeat and most do not? Did it even matter, since all the bursts could be used as probes regardless of what caused them? At that Aspen meeting, Prochaska met with a team of radio astronomers based in Australia, including Keith Bannister, a telescope expert involved in the early work to build a precursor facility for the Square Kilometre Array, an international collaboration to build the largest radio telescope arrays in the world.

The construction of that precursor telescope, called ASKAP, was still underway during that meeting. But Bannister, a telescope expert at the Australian government’s scientific research agency, CSIRO, believed that it could be requisitioned and adapted to simultaneously locate and observe FRBs. 

Bannister and the other radio experts affiliated with ASKAP understood how to manipulate radio telescopes for the unique demands of FRB hunting; Prochaska was an expert in everything “not radio.” They agreed to work together to identify and locate one-off FRBs (because there are many more of these than there are repeating ones) and then use the data to address the problem of the missing baryons. 

And over the course of the next five years, that’s exactly what they did—with astonishing success.

Building a pipeline

To pinpoint a burst in the sky, you need a telescope with two things that have traditionally been at odds in radio astronomy: a very large field of view and high resolution. The large field of view gives you the greatest possible chance of detecting a fleeting, unpredictable burst. High resolution lets you determine where that burst actually sits in your field of view.

ASKAP was the perfect candidate for the job. Located in the westernmost part of the Australian outback, where cattle and sheep graze on public land and people are few and far between, the telescope consists of 36 dishes, each with a large field of view. These dishes are separated by large distances, allowing observations to be combined through a technique called interferometry so that a small patch of the sky can be viewed with high precision.  

The dishes weren’t formally in use yet, but Bannister had an idea. He took them and jerry-rigged a “fly’s eye” telescope, pointing the dishes at different parts of the sky to maximize its ability to spot something that might flash anywhere. 

“Suddenly, it felt like we were living in paradise,” Bannister says. “There had only ever been three or four FRB detections at this point, and people weren’t entirely sure if [FRBs] were real or not, and we were finding them every two weeks.” 

When ASKAP’s interferometer went online in September 2018, the real work began. Bannister designed a piece of software that he likens to live-action replay of the FRB event. “This thing comes by and smacks into your telescope and disappears, and you’ve got a millisecond to get its phone number,” he says. To do so, the software detects the presence of an FRB within a hundredth of a second and then reaches upstream to create a recording of the telescope’s data before the system overwrites it. Data from all the dishes can be processed and combined to reconstruct a view of the sky and find a precise point of origin. 

The team can then send the coordinates on to optical telescopes, which can take detailed pictures of the spot to confirm the presence of a galaxy—the likely origin point of the FRB. 
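The “reach upstream” trick Bannister describes is essentially a triggered ring buffer: the incoming data stream is continuously overwritten, and a detection freezes the window of samples leading up to the trigger before they are lost. Here is a minimal sketch of that pattern in Python, with made-up sample values and a hypothetical detection threshold (this is an illustration of the general technique, not CSIRO’s actual software):

```python
from collections import deque

BUFFER_LEN = 8   # samples kept before being overwritten (hypothetical)
THRESHOLD = 5.0  # detection threshold (hypothetical)

def capture_on_trigger(stream, buffer_len=BUFFER_LEN, threshold=THRESHOLD):
    """Continuously overwrite old samples; when one exceeds the
    threshold, freeze the buffer, preserving the data leading up
    to the event before it is lost."""
    ring = deque(maxlen=buffer_len)  # oldest samples fall off automatically
    for sample in stream:
        ring.append(sample)
        if sample > threshold:
            # "Reach upstream": snapshot the pre-trigger history
            # before the loop overwrites it.
            return list(ring)
    return None  # no event in this stream

# A quiet stream with one burst-like spike near the end.
snapshot = capture_on_trigger([0.1, 0.3, 0.2, 0.4, 0.1, 9.7, 0.2])
print(snapshot)  # the spike plus the samples that preceded it
```

The key design choice is that the buffer is bounded: like the real system, it cannot keep everything, so the trigger logic must fire within the window or the evidence is gone.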

These two dishes are part of CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
CSIRO

Ryder’s team used data on the galaxy’s spectrum, gathered from the European Southern Observatory, to measure how much its light stretched as it traversed space to reach our telescopes. This “redshift” becomes a proxy for distance, allowing astronomers to estimate just how much space the FRB’s light has passed through. 

In 2018, the live-action replay worked for the first time, making Bannister, Ryder, Prochaska, and the rest of their research team the first to localize an FRB that was not repeating. By the following year, the team had localized about five of them. By 2020, they had published a paper in Nature declaring that the FRBs had let them count up the universe’s missing baryons. 

The centerpiece of the paper’s argument was something called the dispersion measure—a number that reflects how much an FRB’s light has been smeared by all the free electrons along our line of sight. In general, the farther an FRB travels, the higher the dispersion measure should be. Armed with both the travel distance (the redshift) and the dispersion measure for a number of FRBs, the researchers found they could extrapolate the total density of particles in the universe. J-P Macquart, the paper’s lead author, believed that the relationship between dispersion measure and FRB distance was predictable and could be applied to map the universe.
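In rough terms, the Macquart relation says the extragalactic dispersion measure grows with distance; a commonly quoted first-order approximation (not stated in this article, and an assumption here) is DM_cosmic of roughly 1,000 pc/cm^3 per unit of redshift. A toy inversion under that assumption looks like this; the real analysis also subtracts Milky Way and host-galaxy contributions and models the scatter, all of which is ignored here:

```python
# Toy use of the Macquart relation: infer redshift from the
# extragalactic dispersion measure (DM). The linear scaling of
# ~1000 pc/cm^3 per unit redshift is a rough approximation.
DM_PER_REDSHIFT = 1000.0

def estimate_redshift(dm_extragalactic):
    """Invert the linear Macquart scaling to estimate redshift."""
    return dm_extragalactic / DM_PER_REDSHIFT

# A burst with an extragalactic DM near 1000 pc/cm^3 would sit
# around redshift 1, comparable to the record-breaking burst.
print(estimate_redshift(1000.0))  # → 1.0
```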

As a leader in the field and a key player in the advancement of FRB research, Macquart would have been interviewed for this piece. But he died of a heart attack one week after the paper was published, at the age of 45. FRB researchers began to call the relationship between dispersion and distance the “Macquart relation,” in honor of his memory and his push for the groundbreaking idea that FRBs could be used for cosmology. 

Proving that the Macquart relation would hold at greater distances became not just a scientific quest but also an emotional one. 

“I remember thinking that I know something about the universe that no one else knows.”

The researchers knew that the ASKAP telescope was capable of detecting bursts from very far away—they just needed to find one. Whenever the telescope detected an FRB, Ryder was tasked with helping to determine where it had originated. It took much longer than he would have liked. But one morning in July 2022, after many months of frustration, Ryder downloaded the newest data email from the European Southern Observatory and began to scroll through the spectrum data. Scrolling, scrolling, scrolling—and then there it was: light from 8 billion years ago, or a redshift of one, symbolized by two very close, bright lines on the computer screen, showing the optical emissions from oxygen. “I remember thinking that I know something about the universe that no one else knows,” he says. “I wanted to jump onto a Slack and tell everyone, but then I thought: No, just sit here and revel in this. It has taken a lot to get to this point.” 

With the October 2023 Science paper, the team had basically doubled the distance baseline for the Macquart relation, honoring Macquart’s memory in the best way they knew how. The distance jump was significant because Ryder and the others on his team wanted to confirm that their work would hold true even for FRBs whose light comes from so far away that it reflects a much younger universe. They also wanted to establish that it was possible to find FRBs at this redshift, because astronomers need to collect evidence about many more like this one in order to create the cosmological map that motivates so much FRB research.

“It’s encouraging that the Macquart relation does still seem to hold, and that we can still see fast radio bursts coming from those distances,” Ryder said. “We assume that there are many more out there.” 

Mapping the cosmic web

The missing stuff that lies between galaxies, which should contain the majority of the matter in the universe, is often called the cosmic web. The diffuse gases aren’t floating like random clouds; they’re strung together more like a spiderweb, a complex weaving of delicate filaments that stretches as the galaxies at their nodes grow and shift. This gas probably escaped from galaxies into the space beyond when the galaxies first formed, shoved outward by massive explosions.

“We don’t understand how gas is pushed in and out of galaxies. It’s fundamental for understanding how galaxies form and evolve,” says Kiyoshi Masui, the director of MIT’s Synoptic Radio Lab. “We only exist because stars exist, and yet this process of building up the building blocks of the universe is poorly understood … Our ability to model that is the gaping hole in our understanding of how the universe works.” 

Astronomers are also working to build large-scale maps of galaxies in order to precisely measure the expansion of the universe. But the cosmological modeling underway with FRBs should create a picture of the invisible gases between galaxies, one that currently does not exist. To build a three-dimensional map of this cosmic web, astronomers will need precise data on thousands of FRBs from regions near Earth and from very far away, like the FRB at redshift one. “Ultimately, fast radio bursts will give you a very detailed picture of how gas gets pushed around,” Masui says. “To get to the cosmological data, samples have to get bigger, but not a lot bigger.” 

That’s the task at hand for Masui, who leads a team searching for FRBs much closer to our galaxy than the ones found by the Australian-led collaboration. Masui’s team conducts FRB research with the CHIME telescope in British Columbia, a nontraditional radio telescope with a very wide field of view and focusing reflectors that look like half-pipes instead of dishes. CHIME (short for “Canadian Hydrogen Intensity Mapping Experiment”) has no moving parts and is less reliant on mirrors than a traditional telescope (focusing light in only one direction rather than two), instead using digital techniques to process its data. CHIME can use its digital technology to focus on many places at once, creating a 200-square-degree field of view compared with ASKAP’s 30-square-degree one. Masui likened it to a mirror that can be focused on thousands of different places simultaneously. 

Because of this enormous field of view, CHIME has been able to gather data on thousands of bursts that are closer to the Milky Way. While CHIME cannot yet precisely locate where they are coming from the way that ASKAP can (the telescope is much more compact, providing lower resolution), Masui is leading the effort to change that by building three smaller versions of the same telescope in British Columbia; Green Bank, West Virginia; and Northern California. The additional data provided by these telescopes, the first of which will probably be collected sometime this year, can be combined with data from the original CHIME telescope to produce location information that is about 1,000 times more precise. That should be detailed enough for cosmological mapping.

The reflectors of the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, have been used to spot thousands of FRBs.
ANDRE RECNIK/CHIME

Telescope technology is improving so fast that the quest to gather enough FRB samples from different parts of the universe for a cosmological map could be finished within the next 10 years. In addition to CHIME, the BURSTT radio telescope in Taiwan should go online this year; the CHORD telescope in Canada, designed to surpass CHIME, should begin operations in 2025; and the Deep Synoptic Array in California could transform the field of radio astronomy when it’s finished, which is expected to happen sometime around the end of the decade. 

And at ASKAP, Bannister is building a new tool that will quintuple the sensitivity of the telescope, beginning this year. Picture a million people simultaneously watching uncompressed YouTube videos, with all of that data flowing through a box the size of a fridge: that is roughly the data-handling capability of this new processor, a field-programmable gate array that Bannister is almost finished programming. He expects the new device to allow the team to detect one new FRB each day.
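To get a feel for the scale of the million-viewers analogy, here is a back-of-envelope estimate. All of the numbers (HD frame size, 8-bit RGB color, 30 frames per second) are illustrative assumptions for an uncompressed video stream, not figures from ASKAP or Bannister’s team:

```python
# Rough estimate of the "million uncompressed videos" analogy.
# Every parameter below is an assumption chosen for illustration.

WIDTH, HEIGHT = 1920, 1080   # assumed HD frame dimensions
BITS_PER_PIXEL = 24          # assumed 8-bit RGB color depth
FPS = 30                     # assumed frame rate
VIEWERS = 1_000_000          # "a million people"

# One uncompressed stream: pixels per frame x bits per pixel x frames per second
bits_per_stream = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS

# Aggregate rate across all viewers
total_bits = bits_per_stream * VIEWERS

print(f"one uncompressed stream: about {bits_per_stream / 1e9:.2f} Gbit/s")
print(f"a million streams: about {total_bits / 1e15:.2f} Pbit/s")
```

Under these assumptions a single stream runs to roughly 1.5 gigabits per second, so a million of them add up to on the order of a petabit per second, which conveys why a fridge-size processor handling comparable throughput is remarkable.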

With all the telescopes in competition, Bannister says, “in five or 10 years’ time, there will be 1,000 new FRBs detected before you can write a paper about the one you just found … We’re in a race to make them boring.” 

Prochaska is so confident FRBs will finally give us the cosmological map he’s been working toward his entire life that he’s started studying for a degree in oceanography. Once astronomers have measured distances for 1,000 of the bursts, he plans to give up the work entirely. 

“In a decade, we could have a pretty decent cosmological map that’s very precise,” he says. “That’s what the 1,000 FRBs are for—and I should be fired if we don’t.”

Unlike most scientists, Prochaska can define the end goal. He knows that all those FRBs should allow astronomers to paint a map of the invisible gases in the universe, creating a picture of how galaxies evolve as gases move outward and then fall back in. FRBs will grant us an understanding of the shape of the universe that we don’t have today—even if the mystery of what makes them endures. 

Anna Kramer is a science and climate journalist based in Washington, D.C.

Roundtables: Inside the Next Era of AI and Hardware

Recorded on April 30, 2024

Speakers: James O’Donnell, AI reporter, and Charlotte Jee, News editor

Hear first-hand from our AI reporter, James O’Donnell, as he walks our news editor Charlotte Jee through the latest goings-on in his beat, from rapid advances in robotics to autonomous military drones, wearable devices, and tools for AI-powered surgeries.
