
The Download: the cancer vaccine renaissance, and working towards a decarbonized future

3 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Cancer vaccines are having a renaissance

Last week, Moderna and Merck launched a large clinical trial in the UK of a promising new cancer therapy: a personalized vaccine that targets a specific set of mutations found in each individual’s tumor. This study is enrolling patients with melanoma. But the companies have also launched a phase III trial for lung cancer. And earlier this month BioNTech and Genentech announced that a personalized vaccine they developed in collaboration shows promise in pancreatic cancer, which has a notoriously poor survival rate.

Drug developers have been working for decades on vaccines to help the body’s immune system fight cancer, without much success. But promising results in the past year suggest that the strategy may be reaching a turning point. Will these therapies finally live up to their promise? Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

How we transform to a fully decarbonized world

Deb Chachra is a professor of engineering at Olin College of Engineering in Needham, Massachusetts, and the author of How Infrastructure Works: Inside the Systems That Shape Our World.

Just as much as technological breakthroughs, it's the availability of energy that has shaped our material world. The exponential rise in fossil-fuel use over the past century and a half has powered novel, energy-intensive modes of extracting, processing, and consuming matter at unprecedented scale.

But now, the cumulative environmental, health, and social impacts of this approach have become unignorable. We can see them nearly everywhere we look, from the health effects of living near highways or oil refineries to the ever-growing issue of plastic, textile, and electronic waste. 

Decarbonizing our energy systems means meeting human needs without burning fossil fuels and releasing greenhouse gases into the atmosphere. The good news is that a world powered by electricity from abundant, renewable, non-polluting sources is now within reach. Read the full story.

The story is from the current print issue of MIT Technology Review, which is on the fascinating theme of Build. If you don’t already, subscribe now to receive future copies once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US adversaries are exploiting the university protests for their own gain
Russia, China and Iran are amplifying the conflicts to stoke political tensions online. (NYT $)
+ Universities are under intense political scrutiny. (Vox)
+ The Biden administration’s patience with protesters appears to have run out. (The Atlantic $)

2 China is preparing to launch an ambitious moon mission 🚀
Its bid to bring back samples from the far side of the moon would be a major leap forward for its national space program. (CNN)
+ It would be the first time any country managed to pull it off, too. (WP $)

3 We don’t know how Big Tech’s AI investments will affect profits
Profits are up—but for how long? (The Information $)
+ Make no mistake—AI is owned by Big Tech. (MIT Technology Review)

4 An Australian facial recognition firm suffered a data breach
It demonstrates the importance of safeguarding personal biometric data properly. (Wired $)

5 China’s race to create a native ChatGPT is heating up
Four startups are locked in intense competition to emulate OpenAI’s success. (FT $)
+ Four things to know about China’s new AI rules in 2024. (MIT Technology Review)

6 One of America’s biggest podcasts is chock-full of misleading information
A group of scientists has raised concerns that Andrew Huberman’s show omits key scientific details. (Vox)

7 Recyclable circuit boards could help us cut down on e-waste
Because conventional circuits are an environmental menace. (IEEE Spectrum)
+ If you fancy giving a supercomputer a second home, here’s your chance. (Wired $)
+ Why recycling alone can’t power climate tech. (MIT Technology Review)

8 Facebook has become the zombie internet
The social network ain’t so social these days. (404 Media)

9 Boston Dynamics loves freaking us out 🤖
We’ve been obsessed with its uncanny videos for more than a decade. (The Atlantic $)
+ But robots might need to become more boring to be useful. (MIT Technology Review)

10 Human models are letting AI do all the hard work
They’re signing over the rights to their likeness and raking in the passive income. (WSJ $)

Quote of the day

“They’re slow as Christmas getting things done.”

—Jerry Whisenhunt, general manager of Pine Telephone Company in Oklahoma, tells the Washington Post of his frustration with federal bureaucrats who ordered providers like him to remove Chinese-made equipment from their networks without providing the funding to do so.

The big story

Zimbabwe’s climate migration is a sign of what’s to come

December 2021

Julius Mutero has spent his entire adult life farming a three-hectare plot in Zimbabwe, but has harvested virtually nothing in the past six years. He is just one of the 86 million people in sub-Saharan Africa who the World Bank estimates will migrate domestically by 2050 because of climate change.

In Zimbabwe, farmers who have tried to stay put and adapt have found their efforts woefully inadequate in the face of new weather extremes. Droughts have already forced tens of thousands from their homes. But their desperate moves are creating new competition for water in the region, and tensions may soon boil over. Read the full story.

—Andrew Mambondiyani

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Some breads are surprisingly easy to make—but all equally delicious.
+ Aww, these frogs sure love their baby tadpoles. 🐸
+ Trees are wonderful. These books celebrate all they do for us.
+ We’re all praying for the safe return of Wally the emotional support alligator.

Cancer vaccines are having a renaissance

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Last week, Moderna and Merck launched a large clinical trial in the UK of a promising new cancer therapy: a personalized vaccine that targets a specific set of mutations found in each individual’s tumor. This study is enrolling patients with melanoma. But the companies have also launched a phase III trial for lung cancer. And earlier this month BioNTech and Genentech announced that a personalized vaccine they developed in collaboration shows promise in pancreatic cancer, which has a notoriously poor survival rate.

Drug developers have been working for decades on vaccines to help the body’s immune system fight cancer, without much success. But promising results in the past year suggest that the strategy may be reaching a turning point. Will these therapies finally live up to their promise?

This week in The Checkup, let’s talk cancer vaccines. (And, you guessed it, mRNA.)

Long before companies leveraged mRNA to fight covid, they were developing mRNA vaccines to combat cancer. BioNTech delivered its first mRNA vaccines to people with treatment-resistant melanoma nearly a decade ago. But when the pandemic hit, development of mRNA vaccines jumped into warp drive. Now dozens of trials are underway to test whether these shots can transform cancer the way they did covid. 

Recent news has some experts cautiously optimistic. In December, Merck and Moderna announced results from an earlier trial that included 150 people with melanoma who had undergone surgery to have their cancer removed. Doctors administered nine doses of the vaccine over about six months, as well as what’s known as an immune checkpoint inhibitor. After three years of follow-up, the combination had cut the risk of recurrence or death by almost half compared with the checkpoint inhibitor alone.

The new results reported by BioNTech and Genentech, from a small trial of 16 patients with pancreatic cancer, are equally exciting. After surgery to remove the cancer, the participants received immunotherapy, followed by the cancer vaccine and a standard chemotherapy regimen. Half of them responded to the vaccine, and three years after treatment, six of those people still had not had a recurrence of their cancer. The other two had relapsed. Of the eight participants who did not respond to the vaccine, seven had relapsed. Some of these patients might not have responded because they lacked a spleen, which plays an important role in the immune system. The organ was removed as part of their cancer treatment.

The hope is that the strategy will work in many different kinds of cancer. In addition to pancreatic cancer, BioNTech’s personalized vaccine is being tested in colorectal cancer, melanoma, and metastatic cancers.

The purpose of a cancer vaccine is to train the immune system to better recognize malignant cells, so it can destroy them. The immune system has the capacity to clear cancer cells if it can find them. But tumors are slippery. They can hide in plain sight and employ all sorts of tricks to evade our immune defenses. And cancer cells often look like the body’s own cells because, well, they are the body’s own cells.

There are differences between cancer cells and healthy cells, however. Cancer cells acquire mutations that help them grow and survive, and some of those mutations give rise to proteins that stud the surface of the cell—so-called neoantigens.

Personalized cancer vaccines like the ones Moderna and BioNTech are developing are tailored to each patient’s particular cancer. The researchers collect a piece of the patient’s tumor and a sample of healthy cells. They sequence these two samples and compare them in order to identify mutations that are specific to the tumor. Those mutations are then fed into an AI algorithm that selects those most likely to elicit an immune response. Together these neoantigens form a kind of police sketch of the tumor, a rough picture that helps the immune system recognize cancerous cells. 
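The compare-and-rank step above can be sketched in a few lines of code. This is an illustrative sketch only, not any company's actual pipeline: the variant labels and the immunogenicity scores are hypothetical stand-ins, and the real ranking is done by a trained AI model rather than a lookup table.

```python
# Minimal sketch of neoantigen selection: keep mutations found only in
# the tumor, then rank them by a predicted immune-response score.
# All names and scores here are illustrative, not real pipeline data.

def select_neoantigens(tumor_variants, normal_variants, scores, top_n=34):
    """Compare tumor vs. healthy sequencing calls and return the
    top-ranked tumor-specific mutations."""
    # Mutations present in the tumor sample but not in healthy tissue
    tumor_specific = set(tumor_variants) - set(normal_variants)
    # Highest predicted immunogenicity first; the cutoff reflects how
    # many neoantigens fit on one mRNA strand (Moderna encodes 34,
    # BioNTech 20)
    ranked = sorted(tumor_specific, key=scores.get, reverse=True)
    return ranked[:top_n]

# Toy data with made-up variant identifiers and scores
tumor = ["KRAS_G12D", "TP53_R175H", "BRAF_V600E", "EGFR_WT"]
normal = ["EGFR_WT"]  # also present in healthy cells, so excluded
scores = {"KRAS_G12D": 0.9, "TP53_R175H": 0.7, "BRAF_V600E": 0.4}
picked = select_neoantigens(tumor, normal, scores, top_n=2)
print(picked)  # ['KRAS_G12D', 'TP53_R175H']
```

The set difference captures the key idea: anything shared with healthy tissue is filtered out, so only tumor-specific "police sketch" features remain.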

“A lot of immunotherapies stimulate the immune response in a nonspecific way—that is, not directly against the cancer,” said Patrick Ott, director of the Center for Personal Cancer Vaccines at the Dana-Farber Cancer Institute, in a 2022 interview.  “Personalized cancer vaccines can direct the immune response to exactly where it needs to be.”

How many neoantigens do you need to create that sketch?  “We don’t really know what the magical number is,” says Michelle Brown, vice president of individualized neoantigen therapy at Moderna. Moderna’s vaccine has 34. “It comes down to what we could fit on the mRNA strand, and it gives us multiple shots to ensure that the immune system is stimulated in the right way,” she says. BioNTech is using 20.

The neoantigens are put on an mRNA strand and injected into the patient. From there, they are taken up by cells and translated into proteins, and those proteins are expressed on the cell’s surface, raising an immune response.

mRNA isn’t the only way to teach the immune system to recognize neoantigens. Researchers are also delivering neoantigens as DNA, as peptides, or via immune cells or viral vectors. And many companies are working on “off the shelf” cancer vaccines that aren’t personalized, which would save time and expense. Out of about 400 ongoing clinical trials assessing cancer vaccines last fall, roughly 50 included personalized vaccines.

There’s no guarantee any of these strategies will pan out. Even if they do, success in one type of cancer doesn’t automatically mean success against all. Plenty of cancer therapies have shown enormous promise initially, only to fail when they’re moved into large clinical trials.

But the burst of renewed interest and activity around cancer vaccines is encouraging. And personalized vaccines might have a shot at succeeding where others have failed. The strategy makes sense for “a lot of different tumor types and a lot of different settings,” Brown says. “With this technology, we really have a lot of aspirations.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

mRNA vaccines transformed the pandemic. But they can do so much more. In this feature from 2023, Jessica Hamzelou covered the myriad other uses of these shots, including fighting cancer. 

This article from 2020 covers some of the background on BioNTech’s efforts to develop personalized cancer vaccines. Adam Piore had the story.

Years before the pandemic, Emily Mullin wrote about early efforts to develop personalized cancer vaccines—the promise and the pitfalls. 

From around the web

Yes, there’s bird flu in the nation’s milk supply. About one in five samples had evidence of the H5N1 virus. But new testing by the FDA suggests that the virus is unable to replicate. Pasteurization works! (NYT)

Studies in which volunteers are deliberately infected with covid—so-called challenge trials—have been floated as a way to test drugs and vaccines, and even to learn more about the virus. But it turns out it’s tougher to infect people than you might think. (Nature)

When should women get their first mammogram to screen for breast cancer? It’s a matter of hot debate. In 2009, an expert panel raised the age from 40 to 50. This week they lowered it to 40 again in response to rising cancer rates among younger women. Women with an average risk of breast cancer should get screened every two years, the panel says. (NYT)

Wastewater surveillance helped us track covid. Why not H5N1? A team of researchers from New York argues it might be our best tool for monitoring the spread of this virus. (Stat)

Long read: This story looks at how AI could help us better understand how babies learn language, and focuses on the lab I covered in this story about an AI model trained on the sights and sounds experienced by a single baby. (NYT)

The Download: Sam Altman on AI’s killer function, and the problem with ethanol

2 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Sam Altman says helpful agents are poised to become AI’s killer function

Sam Altman, CEO of OpenAI, has a vision for how AI tools will become enmeshed in our daily lives. 

During a sit-down chat with MIT Technology Review in Cambridge, Massachusetts, he described how he sees the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.”

In the new paradigm, as Altman sees it, AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. Read more about Altman’s thoughts on the future of AI hardware, where training data will come from next, and who is best poised to create AGI.

—James O’Donnell

A US push to use ethanol as aviation fuel raises major climate concerns

Eliminating carbon pollution from aviation is one of the most challenging parts of the climate puzzle, simply because large commercial airlines are too heavy and need too much power during takeoff for today’s batteries to do the job. 

But one way that companies and governments are striving to make progress is through the use of various types of sustainable aviation fuels (SAFs), which are derived from non-petroleum sources and promise to be less polluting than standard jet fuel.

This week, the US announced a push to help its biggest commercial crop, corn, become a major feedstock for SAFs. It could set the template for programs in the future that may help ethanol producers generate more and more SAFs. But that is already sounding alarm bells among some observers. Read the full story.

James Temple

Three takeaways about the current state of batteries

Batteries have been making headlines this week. First, there’s a new special report from the International Energy Agency all about how crucial batteries are for our future energy systems. The report calls batteries a “master key,” meaning they can unlock the potential of other technologies that will help cut emissions.

Second, we’re seeing early signs in California of how the technology might be earning that “master key” status already by helping renewables play an even bigger role on the grid. 

Our climate reporter Casey Crownhart has rounded up the three things you need to know about the current state of batteries—and what’s to come. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 These tech moguls are planning how to construct AI rules for Trump
They helped draft and promote TikTok ban legislation—and AI is next on their agenda. (WP $)
+ Ted Kaouk is the US markets regulator’s first AI officer. (WSJ $)
+ A new AI security bill would create a record of data breaches. (The Verge)
+ Here’s where AI regulation is heading. (MIT Technology Review)

2 Crypto’s grifters insist they’ve learned their lesson
But the state of the industry suggests they’ll make the same mistakes over again. (Bloomberg $)

3 Good luck tracking down these AI chips
South Korean chip supplier SK Hynix says it’s sold out for the year. (WSJ $)
+ It’s almost fully booked throughout 2025, too. (Bloomberg $)
+ Why it’s so hard for China’s chip industry to become self-sufficient. (MIT Technology Review)

4 Universal Music Group has struck a deal with TikTok 
The label’s music was pulled from the platform three months ago. (Variety $)
+ Taylor Swift, Olivia Rodrigo, and Drake are among its high-profile roster. (The Verge)

5 Ukraine is bootstrapping its own killer-drone industry
Effectively creating air-bound bombs in lieu of more sophisticated long-range missiles. (Wired $)
+ Mass-market military drones have changed the way wars are fought. (MIT Technology Review)

6 The US asylum border app is stranding vulnerable migrants
Its scarce appointments leave asylum seekers with little choice but to pay human trafficking groups. (The Guardian)
+ The new US border wall is an app. (MIT Technology Review)

7 Things aren’t looking good for Volocopter
The flying taxi startup is holding crisis talks with investors. (FT $)
+ These aircraft could change how we fly. (MIT Technology Review)

8 Describing quantum systems is a time-consuming process
A new algorithm could help to dramatically speed things up. (Quanta Magazine)

9 What Reddit’s ‘Am I the Asshole?’ forum can teach philosophers
It’s an undoubtedly brave endeavor. (Vox)

10 The web’s home page refuses to die
Social media is imploding, but the humble website prevails. (New Yorker $)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“Whomever they choose, they king-make.”

—Satya Nadella, Microsoft’s CEO, describes to Bloomberg the stranglehold Apple exercises over the companies vying to be the iPhone’s default search engine.

The big story

Can Afghanistan’s underground “sneakernet” survive the Taliban?

November 2021

When Afghanistan fell to the Taliban, Mohammad Yasin had to make some difficult decisions very quickly. He began erasing some of the sensitive data on his computer and moving the rest onto two of his largest hard drives, which he then wrapped in a layer of plastic and buried underground.

Yasin is what is locally referred to as a “computer kar”: someone who sells digital content by hand in a country where a steady internet connection can be hard to come by, offering everything from movies, music, and mobile applications to iOS updates. And despite the dangers of Taliban rule, the country’s extensive “sneakernet” isn’t planning on shutting down. Read the full story.

—Ruchi Kumar

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ There is nothing more terrifying than a ‘boy room.’
+ These chocolate limes look beyond delicious (and seriously convincing!) 🍋
+ Drake is beefing with everyone—but why?
+ Here’s how to calm that eternal to-do list in your head.

Three takeaways about the current state of batteries

2 May 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Batteries are on my mind this week. (Aren’t they always?) But I’ve got two extra reasons to be thinking about them today. 

First, there’s a new special report from the International Energy Agency all about how crucial batteries are for our future energy systems. The report calls batteries a “master key,” meaning they can unlock the potential of other technologies that will help cut emissions. Second, we’re seeing early signs in California of how the technology might be earning that “master key” status already by helping renewables play an even bigger role on the grid. So let’s dig into some battery data together. 

1) Battery storage in the power sector was the fastest-growing commercial energy technology on the planet in 2023

Deployment doubled over the previous year’s figures, hitting nearly 42 gigawatts. That includes utility-scale projects as well as projects installed “behind the meter,” meaning they’re somewhere like a home or business and don’t interact with the grid. 

Over half the additions in 2023 were in China, which has been the leading market in batteries for energy storage for the past two years. Growth is faster there than the global average, and installations tripled from 2022 to last year. 

One driving force of this quick growth in China is that some provincial policies require developers of new solar and wind power projects to pair them with a certain level of energy storage, according to the IEA report.

Intermittent renewables like wind and solar have grown rapidly in China and around the world, and the technologies are beginning to help clean up the grid. But these storage requirement policies reveal the next step: installing batteries to help unlock the potential of renewables even during times when the sun isn’t shining and the wind isn’t blowing. 

2) Batteries are starting to show exactly how they’ll play a crucial role on the grid.

When there are small amounts of renewables, it’s not all that important to have storage available, since the sun’s rising and setting will cause little more than blips in the overall energy mix. But as the share increases, some of the challenges with intermittent renewables become very clear. 

We’ve started to see this play out in California. Renewables are able to supply nearly all the grid’s energy demand during the day on sunny days. The problem is just how different the picture is at noon and just eight hours later, once the sun has gone down. 

In the middle of the day, there’s so much solar power available that gigawatts are basically getting thrown away. Electricity prices can actually go negative. Then, later on, renewables quickly fall off, and other sources like natural gas need to ramp up to meet demand. 

But energy storage is starting to catch up and make a dent in smoothing out that daily variation. On April 16, for the first time, batteries were the single greatest power source on the grid in California during part of the early evening, just as solar fell off for the day. (Look for the bump in the darkest line on the graph above—it happens right after 6 p.m.)

Batteries have reached this number-one status several more times over the past few weeks, a sign that the energy storage now installed—10 gigawatts’ worth—is beginning to play a part in a balanced grid. 

3) We need to build a lot more energy storage. Good news: batteries are getting cheaper.

While early signs show just how important batteries can be in our energy system, we still need gobs more to actually clean up the grid. If we’re going to be on track to cut greenhouse-gas emissions to zero by midcentury, we’ll need to increase battery deployment sevenfold. 

The good news is the technology is becoming increasingly economical. Battery costs have fallen drastically, dropping 90% since 2010, and they’re not done yet. According to the IEA report, battery costs could fall an additional 40% by the end of this decade. Those further cost declines would make solar projects with battery storage cheaper to build than new coal power plants in India and China, and cheaper than new gas plants in the US. 
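Those two percentages compound rather than add, which is worth making explicit. A quick back-of-the-envelope check, using a normalized starting cost:

```python
# The cost figures above compound multiplicatively: a 90% decline since
# 2010 followed by a further 40% decline leaves about 6% of the 2010 cost.
cost_2010 = 1.0                       # normalized 2010 battery cost
cost_today = cost_2010 * (1 - 0.90)   # 90% cheaper than 2010
cost_2030 = cost_today * (1 - 0.40)   # a further 40% off today's level
print(round(cost_2030, 2))            # 0.06, i.e. a ~94% total decline
```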

Batteries won’t be the magic miracle technology that cleans up the entire grid. Other sources of low-carbon energy that are more consistently available, like geothermal, or able to ramp up and down to meet demand, like hydropower, will be crucial parts of the energy system. But I’m interested to keep watching just how batteries contribute to the mix. 


Now read the rest of The Spark

Related reading

Some companies are looking beyond lithium for stationary energy storage. Dig into the prospects for sodium-based batteries in this story from last year.

Lithium-sulfur technology could unlock cheaper, better batteries for electric vehicles that can go farther on a single charge. I covered one company trying to make them a reality earlier this year.

[Illustration by Simon Landrein: two engineers in lab coats monitor a thermal battery powering a conveyor belt of bottles.]

Another thing

Thermal batteries are so hot right now. In fact, readers chose the technology as our 11th Breakthrough Technology of 2024.

To celebrate, we’re hosting an online event in a couple of weeks for subscribers. We’ll dig into why thermal batteries are so interesting and why this is a breakthrough moment for the technology. It’s going to be a lot of fun, so subscribe if you haven’t already and then register here to join us on May 16 at noon Eastern time.

You’ll be able to submit a question when you register—please do that so I know what you want to hear about! See you there! 

Keeping up with climate  

New rules that force US power plants to slash emissions could effectively spell the end of coal power in the country. Here are five things to know about the regulations. (New York Times)

Wind farms use less land than you might expect. Turbines really take up only a small fraction of the land where they’re sited, and co-locating projects with farms or other developments can help reduce environmental impact. (Washington Post)

The fourth reactor at Plant Vogtle in Georgia officially entered commercial operation this week. The new reactor will provide electricity for up to 500,000 homes and businesses. (Axios)

A new factory will be the first full-scale plant to produce sodium-ion batteries in the US. The chemistry could provide a cheaper alternative to the standard lithium-ion chemistry and avoid material constraints. (Bloomberg)

→ I wrote about the potential for sodium-based batteries last year. (MIT Technology Review)

Tesla has apparently laid off a huge portion of its charging team. The move comes as the company’s charging port has been adopted by most major automakers. (The Verge)

A vegan cheese was up for a major food award. Then, things got messy. (Washington Post)

→ For a look at how Climax Foods makes its plant-based cheese with AI, check out this story from our latest magazine issue. (MIT Technology Review)

Someday mining might be done with … seaweed? Early research is looking into using seaweed to capture and concentrate high-value metals. (Hakai)

The planet’s oceans contain enormous amounts of energy. Harnessing it is an early-stage industry, but some proponents argue there’s a role for wave and tidal power technologies. (Undark)

Why new ethanol aviation fuel tax subsidies aren’t a clear climate win

1 May 2024 at 17:27

Eliminating carbon pollution from aviation is one of the most challenging parts of the climate puzzle, simply because large commercial airlines are too heavy and need too much power during takeoff for today’s batteries to do the job. 

But one way that companies and governments are striving to make some progress is through the use of various types of sustainable aviation fuels (SAFs), which are derived from non-petroleum sources and promise to be less polluting than standard jet fuel.

This week, the US announced a push to help its biggest commercial crop, corn, become a major feedstock for SAFs. 

Federal guidelines announced on April 30 provide a pathway for ethanol producers to earn SAF tax credits within the Inflation Reduction Act, President Biden’s signature climate law, when the fuel is produced from corn or soy grown on farms that adopt certain sustainable agricultural practices.

It’s a limited pilot program, since the subsidy itself expires at the end of this year. But it could set the template for programs in the future that may help ethanol producers generate more and more SAFs, as the nation strives to produce billions of gallons of those fuels per year by 2030. 

Consequently, the so-called Climate Smart Agricultural program has already sounded alarm bells among some observers, who fear that the federal government is both overestimating the emissions benefits of ethanol and assigning too much credit to the agricultural practices in question. Those include cover crops, no-till techniques that minimize soil disturbances, and use of “enhanced-efficiency fertilizers,” which are designed to increase uptake by plants and thus reduce runoff into the environment.

The IRA offers a tax credit of $1.25 per gallon for SAFs that are 50% lower in emissions than standard jet fuel, and as much as 50 cents per gallon more for sustainable fuels that are cleaner still. The new program can help corn- or soy-based ethanol meet that threshold when the source crops are produced using some or all of those agricultural practices.

Since the vast majority of US ethanol is produced from corn, let’s focus on the issues around that crop. To get technical, the program allows ethanol producers to subtract 10 grams of carbon dioxide per megajoule of energy, a measure of carbon intensity, from the life-cycle emissions of the fuel when it’s generated from corn produced with all three of the practices mentioned. That’s about an eighth to a tenth of the carbon intensity of gasoline.
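To see how the 10 g/MJ deduction interacts with the 50% threshold, here is a sketch of the mechanics in code. The fossil-jet baseline (~89 gCO2e/MJ) and the sample fuel's carbon intensity are assumed values for illustration, and the bonus schedule is modeled to be consistent with the up-to-50-cents figure above, not taken from the statute.

```python
# Illustrative sketch of the SAF credit mechanics described above.
# Baseline, sample carbon intensities (CI), and the bonus schedule
# are assumptions, not official figures.

JET_BASELINE_CI = 89.0  # gCO2e/MJ, assumed fossil jet fuel reference
CSA_DEDUCTION = 10.0    # gCO2e/MJ credited for all three farming practices

def saf_tax_credit(fuel_ci, climate_smart=False):
    """Per-gallon IRA credit in dollars; 0 if the fuel misses the
    50% emissions-reduction threshold."""
    if climate_smart:
        fuel_ci -= CSA_DEDUCTION  # the new program's subtraction
    reduction = 1 - fuel_ci / JET_BASELINE_CI
    if reduction < 0.50:
        return 0.0
    # $1.25 base, plus an assumed 1 cent per percentage point beyond
    # 50%, capped so the bonus never exceeds 50 cents
    bonus = min(0.50, round((reduction - 0.50) * 100) * 0.01)
    return round(1.25 + bonus, 2)

# A hypothetical corn-ethanol SAF at 48 gCO2e/MJ misses the bar alone...
print(saf_tax_credit(48.0))                      # 0.0
# ...but the 10 g/MJ deduction pushes it just over the threshold
print(saf_tax_credit(48.0, climate_smart=True))  # 1.32
```

The example makes the stakes of the dispute concrete: a fuel sitting just below the 50% line gains or loses the entire credit depending on whether the climate-smart deduction is justified.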

Ethanol’s questionable climate footprint

Today, US-generated ethanol is mainly mixed with gasoline. But ethanol producers are eager to develop new markets for the product as electric vehicles make up a larger share of the cars and trucks on the road. Not surprisingly, then, industry trade groups applauded the announcement this week.

The first concern with the new program, however, is that the emissions benefits of corn-based ethanol have been hotly debated for decades.

Corn, like any plant that uses photosynthesis to produce food, sucks up carbon dioxide from the air. But using corn for fuel rather than food also creates pressure to clear more land for farming, a process that releases carbon dioxide from plants and soil. In addition, planting, fertilizing, and harvesting corn produce climate pollution as well, and the same is true of refining, distributing, and burning ethanol. 

For its analyses under the new program, the Treasury Department intends to use an updated version of the so-called GREET model, developed by the Department of Energy’s Argonne National Lab, to evaluate the life-cycle emissions of SAFs. A 2021 study from the lab, relying on that model, concluded that US corn ethanol produced as much as 52% less greenhouse gas than gasoline.

But some researchers and nonprofits have criticized the tool for accepting low estimates of the emissions impacts of land-use changes, among other issues. Other assessments of ethanol emissions have been far more damning.

A 2022 EPA analysis surveyed the findings from a variety of models that estimate the life-cycle emissions of corn-based ethanol and found that in seven of 20 cases, ethanol’s emissions exceeded 80% of the climate pollution from gasoline and diesel.

Moreover, the three most recent estimates from those models found ethanol emissions surpassed even the higher-end estimates for gasoline or diesel, Alison Cullen, chair of the EPA’s science advisory board, noted in a 2023 letter to the administrator of the agency.

“Thus, corn starch ethanol may not meet the definition of a renewable fuel” under the federal law that mandates the use of biofuels in the market, she wrote. If so, it also falls well short of the 50% threshold required by the IRA, and some say it’s not clear that the farming practices laid out this week could close the gap.

Agricultural practices

Nikita Pavlenko, who leads the fuels team at the International Council on Clean Transportation, a nonprofit research group, asserted in an email that the climate-smart agricultural provisions “are extremely sloppy” and “are not substantiated.” 

He said the Department of Energy and Department of Agriculture especially “put their thumbs on the scale” on the question of land-use changes, using estimates of soy and corn emissions that were 33% to 55% lower than those produced for a program associated with the UN’s International Civil Aviation Organization.

He finds that ethanol sourced from farms using these agricultural practices will still come up short of the IRA’s 50% threshold, and that producers may have to take additional steps to curtail emissions, potentially including adding carbon capture and storage to ethanol facilities or running operations on renewables like wind or solar.

Freya Chay, a program lead at CarbonPlan, which evaluates the scientific integrity of carbon removal methods and other climate actions, says that these sorts of agricultural practices can provide important benefits, including improving soil health, reducing erosion, and lowering the cost of farming. But she and others have stressed that confidently determining when certain practices actually and durably increase carbon in soil is “exceedingly complex” and varies widely depending on soil type, local climate conditions, past practices, and other variables.

One recent study of no-till practices found that the carbon benefits quickly fade away over time and reach nearly zero in 14 years. If so, this technique would do little to help counter carbon emissions from fuel combustion, which can persist in the atmosphere for centuries or more.

“US policy has a long history of asking how to continue justifying investment in ethanol rather than taking a clear-eyed approach to evaluating whether or not ethanol helps us reach our climate goals,” Chay wrote in an email. “In this case, I think scrutiny is warranted around the choice to lean on agricultural practices with uncertain and variable benefits in a way that could unlock the next tranche of public funding for corn ethanol.”

There are many other paths for producing SAFs that are or could be less polluting than ethanol. For example, they can be made from animal fats, agriculture waste, forest trimmings, or non-food plants that grow on land unsuitable for commercial crops. Other companies are developing various types of synthetic fuels, including electrofuels produced by capturing carbon from plants or the air and then combining it with cleanly sourced hydrogen. 

But all these methods are much more expensive than extracting and refining fossil fuels, and most of the alternative fuels will still produce more emissions when they’re used than the amount that was pulled out of the atmosphere by the plants or processes in the first place. 

The best way to think of these fuels is arguably as a stopgap, a possible way to make some climate progress while smart people strive to develop and build fully emissions-free ways of quickly, safely, and reliably moving things and people around the globe.

Sam Altman says helpful agents are poised to become AI’s killer function

1 May 2024 at 15:52

A number of moments from my brief sit-down with Sam Altman brought the OpenAI CEO’s worldview into clearer focus. The first was when he pointed to my iPhone SE (the one with the home button that’s mostly hated) and said, “That’s the best iPhone.” More revealing, though, was the vision he sketched for how AI tools will become even more enmeshed in our daily lives than the smartphone.

“What you really want,” he told MIT Technology Review, “is just this thing that is off helping you.” Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to. 

It’s a leap from OpenAI’s current offerings. Its leading applications, like DALL-E, Sora, and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next), have wowed us with their ability to generate convincing text and surreal videos and images. But they mostly remain tools we use for isolated tasks, and they have limited capacity to learn about us from our conversations with them. 

In the new paradigm, as Altman sees it, the AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. 

Altman on AI hardware’s future 

I asked Altman if we’ll need a new piece of hardware to get to this future. Though smartphones are extraordinarily capable, and their designers are already incorporating more AI-driven features, some entrepreneurs are betting that the AI of the future will require a device that’s more purpose-built. Some of these devices are already beginning to appear in his orbit. There is the (widely panned) wearable AI Pin from Humane, for example (Altman is an investor in the company but has not exactly been a booster of the device). He is also rumored to be working with former Apple designer Jony Ive on some new type of hardware. 

But Altman says there’s a chance we won’t necessarily need a device at all. “I don’t think it will require a new piece of hardware,” he told me, adding that the type of app envisioned could exist in the cloud. But he was quick to note that even if this AI paradigm shift doesn’t require consumers to buy new hardware, “I think you’ll be happy to have [a new device].” 

Though Altman says he thinks AI hardware devices are exciting, he also implied he might not be best suited to take on the challenge himself: “I’m very interested in consumer hardware for new technology. I’m an amateur who loves it, but this is so far from my expertise.”

On the hunt for training data

Upon hearing his vision for powerful AI-driven agents, I wondered how it would square with the industry’s current scarcity of training data. To build GPT-4 and other models, OpenAI has scoured internet archives, newspapers, and blogs for training data, since scaling laws have long shown that making models bigger also makes them better. But finding more data to train on is a growing problem. Much of the internet has already been slurped up, and access to private or copyrighted data is now mired in legal battles. 

Altman is optimistic this won’t be a problem for much longer, though he didn’t articulate the specifics. 

“I believe, but I’m not certain, that we’re going to figure out a way out of this thing of you always just need more and more training data,” he says. “Humans are existence proof that there is some other way to [train intelligence]. And I hope we find it.”

On who will be poised to create AGI

OpenAI’s central vision has long revolved around the pursuit of artificial general intelligence (AGI), or an AI that can reason as well as or better than humans. Its stated mission is to ensure such a technology “benefits all of humanity.” It is far from the only company pursuing AGI, however. So in the race for AGI, what are the most important tools? I asked Altman if he thought the entity that marshals the largest amount of chips and computing power will ultimately be the winner. 

Altman suspects there will be “several different versions [of AGI] that are better and worse at different things.” “You’ll have to be over some compute threshold, I would guess,” he says. “But even then I wouldn’t say I’m certain.”

On when we’ll see GPT-5

You thought he’d answer that? When another reporter in the room asked Altman if he knew when the next version of GPT is slated to be released, he gave a calm response. “Yes,” he replied, smiling, and said nothing more. 

The Download: mysterious radio energy from outer space, and banning TikTok

1 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the quest to map the universe with mysterious bursts of radio energy

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived in June 2022 and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. But despite the mystery, these radio waves are starting to prove extraordinarily useful. Read the full story.

—Anna Kramer

The depressing truth about TikTok’s impending ban

Trump’s 2020 executive order banning TikTok came to nothing in the end. Yet the idea—that the US government should ban TikTok in some way—never went away. It would repeatedly be suggested in different forms and shapes. And eventually, on April 24, 2024, things came full circle with the bill passed in Congress and signed into law.

A lot has changed in those four years. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms. But if the TikTok saga tells us anything, it’s that the US is increasingly inhospitable for Chinese companies. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter covering tech and policy in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Changpeng Zhao has been sentenced to just four months in prison
The crypto exchange founder got off pretty lightly after pleading guilty to a money-laundering violation. (The Verge)
+ The US Department of Justice had sought a three-year sentence. (The Guardian)

2 Tesla has gutted its charging team
Which is extremely bad news for those reliant on its massive charging network. (NYT $)
+ And more layoffs may be coming down the road. (The Information $)
+ Why getting more EVs on the road is all about charging. (MIT Technology Review)

3 A group of newspapers joined forces to sue OpenAI 
It comes just after the AI firm signed a deal with the Financial Times to use its articles as training data for its models. (WP $)
+ Meanwhile, Google is working with News Corp to fund new AI content. (The Information $)
+ OpenAI’s hunger for data is coming back to bite it. (MIT Technology Review)

4 Worldcoin is thriving in Argentina
The cash it offers in exchange for locals’ biometric data is a major incentive as unemployment in the country bites. (Rest of World)
+ Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users. (MIT Technology Review)

5 Bill Gates’ shadow looms large over Microsoft
The company’s AI revolution is no accident. (Insider $)

6 It’s incredibly difficult to turn off a car’s location tracking
Domestic abuse activists worry the technology plays into abusers’ hands. (The Markup)
+ Regulators are paying attention. (NYT $)

7 Brain monitors have a major privacy problem
Many of them sell your neural data without asking additional permission. (New Scientist $)
+ How your brain data could be used against you. (MIT Technology Review)

8 ECMO machines are a double-edged sword
They help keep critically ill patients alive. But at what cost? (New Yorker $)

9 How drones are helping protect wildlife from predators
So long as wolves stop trying to play with the drones, that is. (Undark Magazine)

10 This plastic contains bacteria that’ll break it down
It has the unusual side-effect of making the plastic even stronger, too. (Ars Technica)
+ Think that your plastic is being recycled? Think again. (MIT Technology Review)

Quote of the day

“I have constantly been looking ahead for the next thing that’s going to crush all my dreams and the stuff that I built.”

—Tony Northrup, a stock image photographer, explains to the Wall Street Journal how generative AI is finally killing an industry that weathered the advent of digital cameras and the internet.

The big story

A new tick-borne disease is killing cattle in the US

November 2021

In the spring of 2021, Cynthia and John Grano, who own a cattle operation in Culpeper County, Virginia, started noticing some of their cows slowing down and acting “spacey.” They figured the animals were suffering from a common infectious disease that causes anemia in cattle. But their veterinarian had warned them that another disease carried by a parasite was spreading rapidly in the area.

After a third cow died, the Granos decided to test its blood. Sure enough, the test came back positive for the disease: theileria. And with no treatment available, the cows kept dying.

Livestock producers around the US are confronting this new and unfamiliar disease without much information, and researchers still don’t know how theileria will unfold, even as it quickly spreads west across the country. Read the full story.

—Britta Lokting

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This Instagram account documenting the weird and wonderful world of Beanie Babies is the perfect midweek pick-me-up.
+ Challengers is great—but have you seen the rest of the best sports films?
+ This human fruit machine is killing me.
+ Evan Narcisse is a giant in the video games world.

The depressing truth about TikTok’s impending ban

By: Zeyi Yang
1 May 2024 at 06:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Allow me to indulge in a little reflection this week. Last week, the divest-or-ban TikTok bill was passed in Congress and signed into law. Four years ago, when I was just starting to report on the world of Chinese technologies, one of my first stories was about very similar news: President Donald Trump announcing he’d ban TikTok. 

That 2020 executive order came to nothing in the end—it was blocked in the courts, put aside after the presidency changed hands, and eventually withdrawn by the Biden administration. Yet the idea—that the US government should ban TikTok in some way—never went away. It would repeatedly be suggested in different forms and shapes. And eventually, on April 24, 2024, things came full circle.

A lot has changed in the four years between these two news cycles. Back then, TikTok was a rising sensation that many people didn’t understand; now, it’s one of the biggest social media platforms, the originator of a generation-defining content medium, and a music-industry juggernaut. 

What has also changed is my outlook on the issue. For a long time, I thought TikTok would find a way out of the political tensions, but I’m increasingly pessimistic about its future. And I have even less hope for other Chinese tech companies trying to go global. If the TikTok saga tells us anything, it’s that their Chinese roots will be scrutinized forever, no matter what they do.

I don’t believe TikTok has become a larger security threat now than it was in 2020. There have always been issues with the app, like potential operational influence by the Chinese government, the black-box algorithms that produce unpredictable results, and the fact that parent company ByteDance never managed to separate the US side and the China side cleanly, despite efforts (one called Project Texas) to store and process American data locally. 

But none of those problems got worse over the last four years. And interestingly, while discussions in 2020 still revolved around potential remedies like setting up data centers in the US to store American data or having an organization like Oracle audit operations, those kinds of fixes are not in the law passed this year. As long as it still has Chinese owners, the app is not permissible in the US. The only thing it can do to survive here is transfer ownership to a US entity. 

That’s the cold, hard truth not only for TikTok but for other Chinese companies too. In today’s political climate, any association with China and the Chinese government is seen as unacceptable. It’s a far cry from the 2010s, when Chinese companies could dream about developing a killer app and finding audiences and investors around the globe—something many did pull off. 

There’s something I wrote four years ago that still rings true today: TikTok is the bellwether for Chinese companies trying to go global. 

The majority of Chinese tech giants, like Alibaba, Tencent, and Baidu, operate primarily within China’s borders. TikTok was the first to gain mass popularity in lots of other countries across the world and become part of daily life for people outside China. To many Chinese startups, it showed that the hard work of trying to learn about foreign countries and users can eventually pay off, and it’s worth the time and investment to try.

On the other hand, if even TikTok can’t get itself out of trouble, with all the resources that ByteDance has, is there any hope for the smaller players?

When TikTok found itself in trouble, the initial reaction of these other Chinese companies was to conceal their roots, hoping they could avoid attention. During my reporting, I’ve encountered multiple companies that fret about being described as Chinese. “We are headquartered in Boston,” one would say, while everyone in China openly talked about its product as the overseas version of a Chinese app.

But with all the political back-and-forth about TikTok, I think these companies are also realizing that concealing their Chinese associations doesn’t work—and it may make them look even worse if it leaves users and regulators feeling deceived.

With the new divest-or-ban bill, I think these companies are getting a clear signal that it’s not the technical details that matter—only their national origin. The same worry is spreading to many other industries, as I wrote in this newsletter last week. Even in the climate and renewable power industries, the presence of Chinese companies is becoming increasingly politicized. They, too, are finding themselves scrutinized more for their Chinese roots than for the actual products they offer.

Obviously, none of this is good news to me. When they feel unwelcome in the US market, Chinese companies don’t feel the need to talk to international media anymore. Without these vital conversations, it’s even harder for people in other countries to figure out what’s going on with tech in China.

Instead of banning TikTok because it’s Chinese, maybe we should go back to focus on what TikTok did wrong: why certain sensitive political topics seem deprioritized on the platform; why Project Texas has stalled; how to make the algorithmic workings of the platform more transparent. These issues, instead of whether TikTok is still controlled by China, are the things that actually matter. It’s a harder path to take than just banning the app entirely, but I think it’s the right one.

Do you believe the TikTok ban will go through? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Facing the possibility of a total ban on TikTok, influencers and creators are making contingency plans. (Wired $)

2. TSMC has brought hundreds of Taiwanese employees to Arizona to build its new chip factory. But the company is struggling to bridge cultural and professional differences between American and Taiwanese workers. (Rest of World)

3. The US secretary of state, Antony Blinken, met with Chinese president Xi Jinping during a visit to China this week. (New York Times $)

  • Here’s the best way to describe these recent US-China diplomatic meetings: “The US and China talk past each other on most issues, but at least they’re still talking.” (Associated Press)

4. Half of Russian companies’ payments to China are made through middlemen in Hong Kong, Central Asia, or the Middle East to evade sanctions. (Reuters $)

5. A massive auto show is taking place in Beijing this week, with domestic electric vehicles unsurprisingly taking center stage. (Associated Press)

  • Meanwhile, Elon Musk squeezed in a quick trip to China and met with his “old friend” the Chinese premier Li Qiang, who was believed to have facilitated establishing the Gigafactory in Shanghai. (BBC)
  • Tesla may finally get a license to deploy its autopilot system, which it calls Full Self Driving, in China after agreeing to collaborate with Baidu. (Reuters $)

6. Beijing has hosted two rival Palestinian political groups, Hamas and Fatah, to talk about potential reconciliation. (Al Jazeera)

Lost in translation

The Chinese dubbing community is grappling with the impacts of new audio-generating AI tools. According to the Chinese publication ACGx, for a new audio drama, a music company licensed the voice of the famous dubbing actor Zhao Qianjing and used AI to transform it into multiple characters and voice the entire script. 

But online, this wasn’t really celebrated as an advancement for the industry. Beyond criticizing the quality of the audio drama (saying it still doesn’t sound like real humans), dubbers are worried about the replacement of human actors and increasingly limited opportunities for newcomers. Other than this new audio drama, there have been several examples in China where AI audio generation has been used to replace human dubbers in documentaries and games. E-book platforms have also allowed users to choose different audio-generated voices to read out the text. 

One more thing

While in Beijing, Antony Blinken visited a record store and bought two vinyl records—one by Taylor Swift and another by the Chinese rock star Dou Wei. Many Chinese (and American!) people learned for the first time that Blinken had previously been in a rock band.

Inside the quest to map the universe with mysterious bursts of radio energy

1 May 2024 at 05:00

When our universe was less than half as old as it is today, a burst of energy that could cook a sun’s worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. 

The signal, which arrived on June 10, 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them. This one was particularly special: nearly double the age of anything previously observed, and three and a half times more energetic. 

But like the others that came before, it was otherwise a mystery. No one knows what causes fast radio bursts. They flash in a seemingly random and unpredictable pattern from all over the sky. Some appear from within our galaxy, others from previously unexamined depths of the universe. Some repeat in cyclical patterns for days at a time and then vanish; others have been consistently repeating every few days since we first identified them. Most never repeat at all. 

Despite the mystery, these radio waves are starting to prove extraordinarily useful. By the time our telescopes detect them, they have passed through clouds of hot, rippling plasma, through gas so diffuse that particles barely touch each other, and through our own Milky Way. And every time they hit the free electrons floating in all that stuff, the waves shift a little bit. The ones that reach our telescopes carry with them a smeary fingerprint of all the ordinary matter they’ve encountered between wherever they came from and where we are now. 

This makes fast radio bursts, or FRBs, invaluable tools for scientific discovery—especially for astronomers interested in the very diffuse gas and dust floating between galaxies, which we know very little about. 
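That “smeary fingerprint” has a simple form: a burst’s low-frequency components arrive later than its high-frequency ones, in proportion to the column density of free electrons along the path, a quantity astronomers call the dispersion measure (DM). Here is a minimal sketch of that relation; the dispersion constant is physical, but the DM value and observing band are assumed examples:

```python
# Cold-plasma dispersion: arrival delay scales as DM / frequency^2.
# The constant is physical; the DM and band below are assumed examples.
K_DM_MS = 4.149  # ms * GHz^2 per (pc cm^-3)

def dispersion_delay_ms(dm_pc_cm3: float, freq_ghz: float) -> float:
    """Delay (ms) at freq_ghz relative to an infinitely high frequency."""
    return K_DM_MS * dm_pc_cm3 / freq_ghz**2

dm = 500.0          # a plausible extragalactic dispersion measure, pc cm^-3
lo, hi = 0.8, 1.0   # observing band edges in GHz
smear = dispersion_delay_ms(dm, lo) - dispersion_delay_ms(dm, hi)
print(f"Burst sweeps across the band over ~{smear:.0f} ms")
```

Because the delay depends only on the electrons the wave has encountered, measuring it for a burst with a known host-galaxy distance amounts to counting the ordinary matter along the line of sight.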

“We don’t know what they are, and we don’t know what causes them. But it doesn’t matter. This is the tool we would have constructed and developed if we had the chance to be playing God and create the universe,” says Stuart Ryder, an astronomer at Macquarie University in Sydney and the lead author of the Science paper that reported the record-breaking burst. 

Many astronomers now feel confident that finding more such distant FRBs will enable them to create the most detailed three-dimensional cosmological map ever made—what Ryder likens to a CT scan of the universe. Even just five years ago, making such a map might have seemed an intractable technical challenge: spotting an FRB and then recording enough data to determine where it came from is extraordinarily difficult, because most of that work must happen in the few milliseconds before the burst passes.

But that challenge is about to be obliterated. By the end of this decade, a new generation of radio telescopes and related technologies coming online in Australia, Canada, Chile, California, and elsewhere should transform the effort to find FRBs—and help unpack what they can tell us. What was once a series of serendipitous discoveries will become something that’s almost routine. Not only will astronomers be able to build out that new map of the universe, but they’ll have the chance to vastly improve our understanding of how galaxies are born and how they change over time. 

Where’s the matter?

In 1998, astronomers counted up the weight of all of the identified matter in the universe and got a puzzling result. 

We know that about 5% of the total weight of the universe is made up of baryons like protons and neutrons—the particles that make up atoms, or all the “stuff” in the universe. (The other 95% includes dark energy and dark matter.) But the astronomers managed to locate only about 2.5%, not 5%, of the universe’s total. “They counted the stars, black holes, white dwarfs, exotic objects, the atomic gas, the molecular gas in galaxies, the hot plasma, etc. They added it all up and wound up at least a factor of two short of what it should have been,” says Xavier Prochaska, an astrophysicist at the University of California, Santa Cruz, and an expert in analyzing the light in the early universe. “It’s embarrassing. We’re not actively observing half of the matter in the universe.” 

All those missing baryons were a serious problem for simulations of how galaxies form, how our universe is structured, and what happens as it continues to expand. 

Astronomers began to speculate that the missing matter exists in extremely diffuse clouds of what’s known as the warm–hot intergalactic medium, or WHIM. Theoretically, the WHIM would contain all that unobserved material. After the 1998 paper was published, Prochaska committed himself to finding it. 

But nearly 10 years of his life and about $50 million in taxpayer money later, the hunt was going very poorly.

That search had focused largely on picking apart the light from distant galactic nuclei and studying x-ray emissions from tendrils of gas connecting galaxies. The breakthrough came in 2007, when Prochaska was sitting on a couch in a meeting room at the University of California, Santa Cruz, reviewing new research papers with his colleagues. There, amid the stacks of research, sat the paper reporting the discovery of the first FRB.

Duncan Lorimer and David Narkevic, astronomers at West Virginia University, had discovered a recording of an energetic radio wave unlike anything previously observed. The wave lasted for less than five milliseconds, and its spectral lines were very smeared and distorted, unusual characteristics for a radio pulse that was also brighter and more energetic than other known transient phenomena. The researchers concluded that the wave could not have come from within our galaxy, meaning that it had traveled some unknown distance through the universe. 

Here was a signal that had traversed long distances of space, been shaped and affected by electrons along the way, and had enough energy to be clearly detectable despite all the stuff it had passed through. There are no other signals we can currently detect that commonly occur throughout the universe and have this exact set of traits.

“I saw that and I said, ‘Holy cow—that’s how we can solve the missing-baryons problem,’” Prochaska says. Astronomers had used a similar technique with the light from pulsars—spinning neutron stars that beam radiation from their poles—to count electrons in the Milky Way. But pulsars are too dim to illuminate more of the universe. FRBs were thousands of times brighter, offering a way to use that technique to study space well beyond our galaxy.

This visualization of large-scale structure in the universe shows galaxies (bright knots) and the filaments of material between them.
NASA/NCSA UNIVERSITY OF ILLINOIS VISUALIZATION BY FRANK SUMMERS, SPACE TELESCOPE SCIENCE INSTITUTE, SIMULATION BY MARTIN WHITE AND LARS HERNQUIST, HARVARD UNIVERSITY

There’s a catch, though: in order for an FRB to be an indicator of what lies in the seemingly empty space between galaxies, researchers have to know where it comes from. If you don’t know how far the FRB has traveled, you can’t make any definitive estimate of what space looks like between its origin point and Earth. 

Astronomers couldn’t even point to the direction that the first 2007 FRB came from, let alone calculate the distance it had traveled. It was detected by an enormous single-dish radio telescope at the Parkes Observatory (now called the Murriyang) in New South Wales, which is great at picking up incoming radio waves but can pinpoint FRBs only to an area of the sky as large as Earth’s full moon. For the next decade, telescopes continued to identify FRBs without providing a precise origin, making them a fascinating mystery but not practically useful.

Then, in 2015, one particular radio wave flashed—and then flashed again. Over the course of two months of observation from the Arecibo telescope in Puerto Rico, the radio waves came again and again, flashing 10 times. This was the first repeating fast radio burst ever observed (a mystery in its own right), and the repetition gave researchers a chance to home in on its location.

In 2017, that’s what happened. The researchers obtained an accurate position for the fast radio burst using the NRAO Very Large Array telescope in central New Mexico. Armed with that position, the researchers then used the Gemini optical telescope in Hawaii to take a picture of the location, revealing the galaxy where the FRB had begun and how far it had traveled. “That’s when it became clear that at least some of these we’d get the distance for. That’s when I got really involved and started writing telescope proposals,” Prochaska says. 

That same year, astronomers from across the globe gathered in Aspen, Colorado, to discuss the potential for studying FRBs. Researchers debated what caused them. Neutron stars? Magnetars, neutron stars with such powerful magnetic fields that they emit x-rays and gamma rays? Merging galaxies? Aliens? Did repeating FRBs and one-offs have different origins, or could there be some other explanation for why some bursts repeat and most do not? Did it even matter, since all the bursts could be used as probes regardless of what caused them? At that Aspen meeting, Prochaska met with a team of radio astronomers based in Australia, including Keith Bannister, a telescope expert involved in the early work to build a precursor facility for the Square Kilometer Array, an international collaboration to build the largest radio telescope arrays in the world. 

The construction of that precursor telescope, called ASKAP, was still underway during that meeting. But Bannister, a telescope expert at the Australian government’s scientific research agency, CSIRO, believed that it could be requisitioned and adapted to simultaneously locate and observe FRBs. 

Bannister and the other radio experts affiliated with ASKAP understood how to manipulate radio telescopes for the unique demands of FRB hunting; Prochaska was an expert in everything “not radio.” They agreed to work together to identify and locate one-off FRBs (because there are many more of these than there are repeating ones) and then use the data to address the problem of the missing baryons. 

And over the course of the next five years, that’s exactly what they did—with astonishing success.

Building a pipeline

To pinpoint a burst in the sky, you need a telescope with two things that have traditionally been at odds in radio astronomy: a very large field of view and high resolution. The large field of view gives you the greatest possible chance to detect a fleeting, unpredictable burst. High resolution lets you determine where that burst actually sits in your field of view.

ASKAP was the perfect candidate for the job. Located in the westernmost part of the Australian outback, where cattle and sheep graze on public land and people are few and far between, the telescope consists of 36 dishes, each with a large field of view. These dishes are separated by large distances, allowing observations to be combined through a technique called interferometry so that a small patch of the sky can be viewed with high precision.  
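The resolution payoff of interferometry follows from simple diffraction: a telescope’s angular resolution scales as the observing wavelength divided by its aperture, and an array of dishes behaves like a single dish as wide as its longest baseline. A rough back-of-the-envelope sketch (the 12 m dish and 6 km baseline are round numbers in the spirit of ASKAP’s layout, not exact specifications):

```python
import math

def angular_resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited resolution, theta ~ lambda / D, in arcseconds."""
    theta_rad = wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

# A single 12 m dish observing 21 cm radio waves resolves about a degree of sky:
single_dish = angular_resolution_arcsec(0.21, 12)

# The same dishes combined across a 6 km baseline resolve a few arcseconds:
interferometer = angular_resolution_arcsec(0.21, 6_000)

print(f'single dish: ~{single_dish:.0f}" vs interferometer: ~{interferometer:.1f}"')
```

That factor-of-500 improvement is why combining the dishes turns a blurry “somewhere in this patch” detection into a position precise enough to match against an individual galaxy.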

The dishes weren’t formally in use yet, but Bannister had an idea. He took them and jerry-rigged a “fly’s eye” telescope, pointing the dishes at different parts of the sky to maximize its ability to spot something that might flash anywhere. 

“Suddenly, it felt like we were living in paradise,” Bannister says. “There had only ever been three or four FRB detections at this point, and people weren’t entirely sure if [FRBs] were real or not, and we were finding them every two weeks.” 

When ASKAP’s interferometer went online in September 2018, the real work began. Bannister designed a piece of software that he likens to live-action replay of the FRB event. “This thing comes by and smacks into your telescope and disappears, and you’ve got a millisecond to get its phone number,” he says. To do so, the software detects the presence of an FRB within a hundredth of a second and then reaches upstream to create a recording of the telescope’s data before the system overwrites it. Data from all the dishes can be processed and combined to reconstruct a view of the sky and find a precise point of origin. 
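In software terms, Bannister’s “live-action replay” is a ring buffer with a trigger: the telescope stream is continuously written into a fixed-size buffer that wraps around and overwrites itself, and when a burst is detected, the buffer’s recent contents are frozen before they are lost. A minimal sketch of the idea (the threshold-based detector and buffer size are invented for illustration; the real pipeline works on voltage data at enormous rates):

```python
from collections import deque

class TriggeredRecorder:
    """Keep only the last `capacity` samples; dump them when a trigger fires."""

    def __init__(self, capacity: int, threshold: float):
        self.buffer = deque(maxlen=capacity)  # old samples are overwritten automatically
        self.threshold = threshold
        self.captures = []

    def ingest(self, sample: float) -> None:
        self.buffer.append(sample)
        if sample > self.threshold:  # crude stand-in for a real burst detector
            # "Reach upstream": snapshot the pre-trigger history before it is overwritten.
            self.captures.append(list(self.buffer))

recorder = TriggeredRecorder(capacity=5, threshold=10.0)
for sample in [1.0, 2.0, 1.5, 0.8, 42.0, 1.1]:  # 42.0 plays the role of a burst
    recorder.ingest(sample)

print(recorder.captures)  # one capture: the burst plus the samples just before it
```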

The team can then send the coordinates on to optical telescopes, which can take detailed pictures of the spot to confirm the presence of a galaxy—the likely origin point of the FRB. 

CSIRO's Australian Square Kilometre Array Pathfinder (ASKAP) telescope
These two dishes are part of CSIRO’s Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
CSIRO

Ryder’s team used data on the galaxy’s spectrum, gathered from the European Southern Observatory, to measure how much its light stretched as it traversed space to reach our telescopes. This “redshift” becomes a proxy for distance, allowing astronomers to estimate just how much space the FRB’s light has passed through. 
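Redshift itself is just a ratio: a spectral line with a known rest wavelength arrives stretched, and z is the fractional stretch. A quick sketch with round illustrative wavelengths:

```python
def redshift(observed_nm: float, rest_nm: float) -> float:
    """z = (observed - rest) / rest: the fractional stretch of the light's wavelength."""
    return (observed_nm - rest_nm) / rest_nm

# A line emitted at 373 nm and observed at 746 nm has been stretched by a factor of 2,
# which corresponds to a redshift of one:
z = redshift(observed_nm=746.0, rest_nm=373.0)
print(z)  # 1.0
```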

In 2018, the live-action replay worked for the first time, making Bannister, Ryder, Prochaska, and the rest of their research team the first to localize an FRB that was not repeating. By the following year, the team had localized about five of them. By 2020, they had published a paper in Nature declaring that the FRBs had let them count up the universe’s missing baryons. 

The centerpiece of the paper’s argument was something called the dispersion measure—a number that reflects how much an FRB’s light has been smeared by all the free electrons along our line of sight. In general, the farther an FRB travels, the higher the dispersion measure should be. Armed with both the travel distance (the redshift) and the dispersion measure for a number of FRBs, the researchers found they could extrapolate the total density of particles in the universe. J-P Macquart, the paper’s lead author, believed that the relationship between dispersion measure and FRB distance was predictable and could be applied to map the universe.

As a leader in the field and a key player in the advancement of FRB research, Macquart would have been interviewed for this piece. But he died of a heart attack one week after the paper was published, at the age of 45. FRB researchers began to call the relationship between dispersion and distance the “Macquart relation,” in honor of his memory and his push for the groundbreaking idea that FRBs could be used for cosmology. 

Proving that the Macquart relation would hold at greater distances became not just a scientific quest but also an emotional one. 

“I remember thinking that I know something about the universe that no one else knows.”

The researchers knew that the ASKAP telescope was capable of detecting bursts from very far away—they just needed to find one. Whenever the telescope detected an FRB, Ryder was tasked with helping to determine where it had originated. It took much longer than he would have liked. But one morning in July 2022, after many months of frustration, Ryder downloaded the newest data from the European Southern Observatory and began to scroll through the spectrum. Scrolling, scrolling, scrolling—and then there it was: light from 8 billion years ago, or a redshift of one, symbolized by two very close, bright lines on the computer screen, showing the optical emissions from oxygen. “I remember thinking that I know something about the universe that no one else knows,” he says. “I wanted to jump onto a Slack and tell everyone, but then I thought: No, just sit here and revel in this. It has taken a lot to get to this point.” 

With the October 2023 Science paper, the team had basically doubled the distance baseline for the Macquart relation, honoring Macquart’s memory in the best way they knew how. The distance jump was significant because Ryder and the others on his team wanted to confirm that their work would hold true even for FRBs whose light comes from so far away that it reflects a much younger universe. They also wanted to establish that it was possible to find FRBs at this redshift, because astronomers need to collect evidence about many more like this one in order to create the cosmological map that motivates so much FRB research.

“It’s encouraging that the Macquart relation does still seem to hold, and that we can still see fast radio bursts coming from those distances,” Ryder said. “We assume that there are many more out there.” 

Mapping the cosmic web

The missing stuff that lies between galaxies, which should contain the majority of the matter in the universe, is often called the cosmic web. The diffuse gases aren’t floating like random clouds; they’re strung together more like a spiderweb, a complex weaving of delicate filaments that stretches as the galaxies at their nodes grow and shift. This gas probably escaped from galaxies into the space beyond when the galaxies first formed, shoved outward by massive explosions.

“We don’t understand how gas is pushed in and out of galaxies. It’s fundamental for understanding how galaxies form and evolve,” says Kiyoshi Masui, the director of MIT’s Synoptic Radio Lab. “We only exist because stars exist, and yet this process of building up the building blocks of the universe is poorly understood … Our ability to model that is the gaping hole in our understanding of how the universe works.” 

Astronomers are also working to build large-scale maps of galaxies in order to precisely measure the expansion of the universe. But the cosmological modeling underway with FRBs should create a picture of invisible gases between galaxies, one that currently does not exist. To build a three-dimensional map of this cosmic web, astronomers will need precise data on thousands of FRBs from regions near Earth and from very far away, like the FRB at redshift one. “Ultimately, fast radio bursts will give you a very detailed picture of how gas gets pushed around,” Masui says. “To get to the cosmological data, samples have to get bigger, but not a lot bigger.” 

That’s the task at hand for Masui, who leads a team searching for FRBs much closer to our galaxy than the ones found by the Australian-led collaboration. Masui’s team conducts FRB research with the CHIME telescope in British Columbia, a nontraditional radio telescope with a very wide field of view and focusing reflectors that look like half-pipes instead of dishes. CHIME (short for “Canadian Hydrogen Intensity Mapping Experiment”) has no moving parts and is less reliant on mirrors than a traditional telescope (focusing light in only one direction rather than two), instead using digital techniques to process its data. CHIME can use its digital technology to focus on many places at once, creating a 200-square-degree field of view compared with ASKAP’s 30-degree one. Masui likens it to a mirror that can be focused on thousands of different places simultaneously. 

Because of this enormous field of view, CHIME has been able to gather data on thousands of bursts that are closer to the Milky Way. While CHIME cannot yet precisely locate where they are coming from the way that ASKAP can (the telescope is much more compact, providing lower resolution), Masui is leading the effort to change that by building three smaller versions of the same telescope in British Columbia; Green Bank, West Virginia; and Northern California. The additional data provided by these telescopes, the first of which will probably be collected sometime this year, can be combined with data from the original CHIME telescope to produce location information that is about 1,000 times more precise. That should be detailed enough for cosmological mapping.

The reflectors of the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, have been used to spot thousands of FRBs.
ANDRE RECNIK/CHIME

Telescope technology is improving so fast that the quest to gather enough FRB samples from different parts of the universe for a cosmological map could be finished within the next 10 years. In addition to CHIME, the BURSTT radio telescope in Taiwan should go online this year; the CHORD telescope in Canada, designed to surpass CHIME, should begin operations in 2025; and the Deep Synoptic Array in California could transform the field of radio astronomy when it’s finished, which is expected to happen sometime around the end of the decade. 

And at ASKAP, Bannister is building a new tool that will quintuple the sensitivity of the telescope, beginning this year. If you can imagine stuffing a million people simultaneously watching uncompressed YouTube videos into a box the size of a fridge, that’s probably the easiest way to visualize the data handling capabilities of this new processor, called a field-programmable gate array, which Bannister is almost finished programming. He expects the new device to allow the team to detect one new FRB each day.

With all the telescopes in competition, Bannister says, “in five or 10 years’ time, there will be 1,000 new FRBs detected before you can write a paper about the one you just found … We’re in a race to make them boring.” 

Prochaska is so confident FRBs will finally give us the cosmological map he’s been working toward his entire life that he’s started studying for a degree in oceanography. Once astronomers have measured distances for 1,000 of the bursts, he plans to give up the work entirely. 

“In a decade, we could have a pretty decent cosmological map that’s very precise,” he says. “That’s what the 1,000 FRBs are for—and I should be fired if we don’t.”

Unlike most scientists, Prochaska can define the end goal. He knows that all those FRBs should allow astronomers to paint a map of the invisible gases in the universe, creating a picture of how galaxies evolve as gases move outward and then fall back in. FRBs will grant us an understanding of the shape of the universe that we don’t have today—even if the mystery of what makes them endures. 

Anna Kramer is a science and climate journalist based in Washington, D.C.

Roundtables: Inside the Next Era of AI and Hardware

30 April 2024 at 12:56

Recorded on April 30, 2024


Speakers: James O’Donnell, AI reporter, and Charlotte Jee, News editor

Hear first-hand from our AI reporter, James O’Donnell, as he walks our news editor Charlotte Jee through the latest goings-on in his beat, from rapid advances in robotics to autonomous military drones, wearable devices, and tools for AI-powered surgeries.


The Download: robotics’ data bottleneck, and our AI afterlives

30 April 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The robot race is fueling a fight for training data

We’re interacting with AI tools more directly—and regularly—than ever before. Interacting with robots, by way of contrast, is still a rarity for most. But experts say that’s on the cusp of changing. 

Roboticists believe that, using new AI techniques, they can unlock more capable robots that can move freely through unfamiliar environments and tackle challenges they’ve never seen before.

But something is standing in the way: lack of access to the types of data used to train robots so they can interact with the physical world. It’s far harder to come by than the data used to train the most advanced AI models, and that scarcity is one of the main things currently holding progress in robotics back.

As a result, leading companies and labs are in fierce competition to find new and better ways to gather the data they need. It’s led them down strange paths, like using robotic arms to flip pancakes for hours on end. And they’re running into the same sorts of privacy, ethics, and copyright issues as their counterparts in the world of AI. Read the full story.

—James O’Donnell

My deepfake shows how valuable our data is in the age of AI

—Melissa Heikkilä

Deepfakes are getting good. Like, really good. Earlier this month I went to a studio in East London to get myself digitally cloned by the AI video startup Synthesia. They made a hyperrealistic deepfake that looked and sounded just like me, with realistic intonation. The end result was mind-blowing. It could easily fool someone who doesn’t know me well.

Synthesia has managed to create AI avatars that are remarkably humanlike after only one year of tinkering with the latest generation of generative AI. It’s equally exciting and daunting thinking about where this technology is going. But these avatars raise a big question: What happens to our data once we submit it to AI companies? Read the full story.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 AI startups without products can still raise millions
How some of them plan to make money is unclear, but that doesn’t deter investors. (WSJ $)
+ Those large AI models are wildly expensive to run. (Bloomberg $)
+ AI hype is built on high test scores. Those tests are flawed. (MIT Technology Review)

2 The EU says Meta isn’t doing enough to counter Russian disinformation
So it’s launching formal proceedings against the company ahead of EU elections. (The Guardian)
+ Three technology trends shaping 2024’s elections. (MIT Technology Review)

3 Meet the humans fighting back against algorithmic curation
The solution could, ironically, lie with different kinds of algorithms. (Wired $)

4 An AI blood test claims to diagnose postpartum depression
It says the presence of a gene that links moods more closely to hormonal changes is an indicator. (WP $)
+ An AI system helped to save lives in a hospital trial. (New Scientist $)

5 Tesla secretly tested its autonomous driving tech in San Francisco
Which hints that its previous ‘general solutions’ approach fell short. (The Information $)
+ Robotaxis are here. It’s time to decide what to do about them. (MIT Technology Review)

6 Why egg freezing has failed to live up to its hype
We’re finally getting a clearer picture of how effective the procedure is. (Vox)
+ I took an international trip with my frozen eggs to learn about the fertility industry. (MIT Technology Review)

7 NASA has finally solved a long-standing solar mystery 
The sun’s corona is far hotter than its surface. But why? (Quanta Magazine)

8 Do dating apps actually help you find your soulmate?
Chemistry and a great relationship are difficult to quantify. (The Guardian)
+ Here’s how the net’s newest matchmakers help you find love. (MIT Technology Review)

9 Online messaging has come a long way
BBS, anyone? (Ars Technica)

10 The three-year search for a synth-heavy pop song is over 
…But its origins are seedier than you’d expect. (404 Media)

Quote of the day

“This is the Oppenheimer Moment of our generation.”

—Alexander Schallenberg, Austria’s foreign minister, warns against granting AI too much autonomy on the battlefield during a summit in Vienna, Bloomberg reports.

The big story

Next slide, please: A brief history of the corporate presentation

August 2023

PowerPoint is everywhere. It’s used in religious sermons; by schoolchildren preparing book reports; at funerals and weddings. In 2010, Microsoft announced that PowerPoint was installed on more than a billion computers worldwide. 

But before PowerPoint, 35-millimeter film slides were king. They were the only medium for the kinds of high-impact presentations given by CEOs and top brass at annual meetings for stockholders, employees, and salespeople. 

Known in the business as “multi-image” shows, these presentations required a small army of producers, photographers, and live production staff to pull off. Read this story to delve into the fascinating, flashy history of corporate presentations

—Claire L. Evans

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This is some seriously committed egg flipping. 🍳
+ How to spend time and make precious memories with the people you love.
+ Gen Z is on the move: to the US Midwest apparently.
+ Cool: these novels were all inspired by the authors’ day jobs.

My deepfake shows how valuable our data is in the age of AI

30 April 2024 at 05:23

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Deepfakes are getting good. Like, really good. Earlier this month I went to a studio in East London to get myself digitally cloned by the AI video startup Synthesia. They made a hyperrealistic deepfake that looked and sounded just like me, with realistic intonation. It is a long way away from the glitchiness of earlier generations of AI avatars. The end result was mind-blowing. It could easily fool someone who doesn’t know me well.

Synthesia has managed to create AI avatars that are remarkably humanlike after only one year of tinkering with the latest generation of generative AI. It’s equally exciting and daunting thinking about where this technology is going. It will soon be very difficult to differentiate between what is real and what is not, and this is a particularly acute threat given the record number of elections happening around the world this year. 

We are not ready for what is coming. If people become too skeptical about the content they see, they might stop believing in anything at all, which could enable bad actors to take advantage of this trust vacuum and lie about the authenticity of real content. Researchers have called this the “liar’s dividend.” They warn that politicians, for example, could claim that genuinely incriminating information was fake or created using AI. 

I just published a story on my deepfake creation experience, and on the big questions about a world where we increasingly can’t tell what’s real. Read it here

But there is another big question: What happens to our data once we submit it to AI companies? Synthesia says it does not sell the data it collects from actors and customers, although it does release some of it for academic research purposes. The company uses avatars for three years, at which point actors are asked if they want to renew their contracts. If so, they come into the studio to make a new avatar. If not, the company deletes their data.

But other companies are not that transparent about their intentions. As my colleague Eileen Guo reported last year, companies such as Meta license actors’ data—including their faces and  expressions—in a way that allows the companies to do whatever they want with it. Actors are paid a small up-front fee, but their likeness can then be used to train AI models in perpetuity without their knowledge. 

Even if contracts for data are transparent, they don’t apply if you die, says Carl Öhman, an assistant professor at Uppsala University who has studied the online data left by deceased people and is the author of a new book, The Afterlife of Data. The data we input into social media platforms or AI models might end up benefiting companies and living on long after we’re gone. 

“Facebook is projected to host, within the next couple of decades, a couple of billion dead profiles,” Öhman says. “They’re not really commercially viable. Dead people don’t click on any ads, but they take up server space nevertheless,” he adds. This data could be used to train new AI models, or to make inferences about the descendants of those deceased users. The whole model of data and consent with AI presumes that both the data subject and the company will live on forever, Öhman says.

Our data is a hot commodity. AI language models are trained by indiscriminately scraping the web, and that also includes our personal data. A couple of years ago I tested GPT-3, the predecessor of the language model powering ChatGPT, to see if it had anything on me. It struggled, but I found that I was able to retrieve personal information about MIT Technology Review’s editor in chief, Mat Honan. 

High-quality, human-written data is crucial to training the next generation of powerful AI models, and we are on the verge of running out of free online training data. That’s why AI companies are racing to strike deals with news organizations and publishers to access their data treasure chests. 

Old social media sites are also a potential gold mine: when companies go out of business or platforms stop being popular, their assets, including users’ data, get sold to the highest bidder, says Öhman. 

“MySpace data has been bought and sold multiple times since MySpace crashed. And something similar may well happen to Synthesia, or X, or TikTok,” he says. 

Some people may not care much about what happens to their data, says Öhman. But securing exclusive access to high-quality data helps cement the monopoly position of large corporations, and that harms us all. This is something we need to grapple with as a society, he adds. 

Synthesia said it will delete my avatar after my experiment, but the whole experience did make me think of all the cringeworthy photos and posts that haunt me on Facebook and other social media platforms. I think it’s time for a purge.


Now read the rest of The Algorithm

Deeper Learning

Chatbot answers are all made up. This new tool helps you figure out which ones to trust.

Large language models are famous for their ability to make things up—in fact, it’s what they’re best at. But their inability to tell fact from fiction has left many businesses wondering if using them is worth the risk. A new tool created by Cleanlab, an AI startup spun out of MIT, is designed to provide a clearer sense of how trustworthy these models really are. 

A BS-o-meter for chatbots: Called the Trustworthy Language Model, it gives any output generated by a large language model a score between 0 and 1, according to its reliability. This lets people choose which responses to trust and which to throw out. Cleanlab hopes that its tool will make large language models more attractive to businesses worried about how much stuff they invent. Read more from Will Douglas Heaven.

Bits and Bytes

Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan
President Joe Biden signed a $95 billion aid package into law last week. The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. (MIT Technology Review)

Rishi Sunak promised to make AI safe. Big Tech’s not playing ball.
The UK’s prime minister thought he secured a political win when he got AI power players to agree to voluntary safety testing with the UK’s new AI Safety Institute. Six months on, it turns out pinkie promises don’t go very far. OpenAI and Meta have not granted access to the AI Safety Institute to do prerelease safety testing on their models. (Politico)

Inside the race to find AI’s killer app
The AI hype bubble is starting to deflate as companies try to find a way to make profits out of the eye-wateringly expensive process of developing and running this technology. Tech companies haven’t solved some of the fundamental problems slowing its wider adoption, such as the fact that generative models constantly make things up. (The Washington Post)  

Why the AI industry’s thirst for new data centers can’t be satisfied
The current boom in data-hungry AI means there is now a shortage of parts, property, and power to build data centers. (The Wall Street Journal)

The friends who became rivals in Big Tech’s AI race
This story is a fascinating look into one of the most famous and fractious relationships in AI. Demis Hassabis and Mustafa Suleyman are old friends who grew up in London and went on to cofound AI lab DeepMind. Suleyman was ousted following a bullying scandal, went on to start his own short-lived startup, and now heads rival Microsoft’s AI efforts, while Hassabis still runs DeepMind, which is now Google’s central AI research lab. (The New York Times)

This creamy vegan cheese was made with AI
Startups are using artificial intelligence to design plant-based foods. The companies train algorithms on data sets of ingredients with desirable traits like flavor, scent, or stretchability. Then they use AI to comb troves of data to develop new combinations of those ingredients that perform similarly. (MIT Technology Review)

The robot race is fueling a fight for training data

30 April 2024 at 05:00

Since ChatGPT was released, we have interacted with AI tools more directly—and regularly—than ever before. 

But interacting with robots, by way of contrast, is still a rarity for most. If you don’t undergo complex surgery or work in logistics, the most advanced robot you encounter in your daily life might still be a vacuum cleaner (if you’re feeling young, the first Roomba was released 22 years ago). 

But that’s on the cusp of changing. Roboticists believe that by using new AI techniques, they will achieve something the field has pined after for decades: more capable robots that can move freely through unfamiliar environments and tackle challenges they’ve never seen before. 

“It’s like being strapped to the front of a rocket,” Russ Tedrake, vice president of robotics research at the Toyota Research Institute, says of the field’s pace right now. Tedrake says he has seen plenty of hype cycles rise and fall, but none like this one. “I’ve been in the field for 20-some years. This is different,” he says. 

But something is slowing that rocket down: lack of access to the types of data used to train robots so they can interact more smoothly with the physical world. It’s far harder to come by than the data used to train the most advanced AI models like GPT—mostly text, images, and videos scraped off the internet. Simulation programs can help robots learn how to interact with places and objects, but the results still tend to fall prey to what’s known as the “sim-to-real gap,” or failures that arise when robots move from the simulation to the real world. 

For now, we still need access to physical, real-world data to train robots. That data is relatively scarce and tends to require a lot more time, effort, and expensive equipment to collect. That scarcity is one of the main things currently holding progress in robotics back. 

As a result, leading companies and labs are in fierce competition to find new and better ways to gather the data they need. It’s led them down strange paths, like using robotic arms to flip pancakes for hours on end, watching thousands of hours of graphic surgery videos pulled from YouTube, or deploying researchers to numerous Airbnbs in order to film every nook and cranny. Along the way, they’re running into the same sorts of privacy, ethics, and copyright issues as their counterparts in the world of chatbots. 

The new need for data

For decades, robots were trained on specific tasks, like picking up a tennis ball or doing a somersault. While humans learn about the physical world through observation and trial and error, many robots were learning through equations and code. This method was slow, but even worse, it meant that robots couldn’t transfer skills from one task to a new one. 

But now, AI advances are fast-tracking a shift that had already begun: letting robots teach themselves through data. Just as a language model can learn from a library’s worth of novels, robot models can be shown a few hundred demonstrations of a person washing ketchup off a plate using robotic grippers, for example, and then imitate the task without being taught explicitly what ketchup looks like or how to turn on the faucet. This approach is bringing faster progress and machines with much more general capabilities. 
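At its core, this kind of learning from demonstrations (often called behavior cloning) is supervised learning: the demonstrations provide observation–action pairs, and a model is fit to predict the demonstrator’s action from the observation. A deliberately tiny sketch with a one-parameter linear policy (real robot policies are large neural networks trained on images and joint states; the numbers here are invented):

```python
# Demonstrations: (observation, action) pairs recorded from teleoperation.
# Here the "expert" happens to move proportionally toward the target: action = 0.5 * obs.
demos = [(1.0, 0.5), (2.0, 1.0), (-1.0, -0.5), (4.0, 2.0)]

# Fit a linear policy action = k * obs by least squares (closed form in 1D).
k = sum(o * a for o, a in demos) / sum(o * o for o, _ in demos)

def policy(observation: float) -> float:
    """The cloned policy: imitates the demonstrator without being told its rule."""
    return k * observation

print(policy(10.0))  # generalizes the demonstrated behavior to an unseen input
```

The point of the sketch is that nothing about the expert’s rule is hand-coded; it is recovered entirely from the recorded pairs, which is why the quantity and diversity of that data matter so much.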

Now every leading company and lab is trying to enable robots to reason their way through new tasks using AI. Whether they succeed will hinge on whether researchers can find enough diverse types of data to fine-tune models for robots, as well as novel ways to use reinforcement learning to let them know when they’re right and when they’re wrong. 

“A lot of people are scrambling to figure out what’s the next big data source,” says Pras Velagapudi, chief technology officer of Agility Robotics, which makes a humanoid robot that operates in warehouses for customers including Amazon. The answers to Velagapudi’s question will help define what tomorrow’s machines will excel at, and what roles they may fill in our homes and workplaces. 

Prime training data

To understand how roboticists are shopping for data, picture a butcher shop. There are prime, expensive cuts ready to be cooked. There are the humble, everyday staples. And then there’s the case of trimmings and off-cuts lurking in the back, requiring a creative chef to make them into something delicious. They’re all usable, but they’re not all equal.

For a taste of what prime data looks like for robots, consider the methods adopted by the Toyota Research Institute (TRI). Amid a sprawling laboratory in Cambridge, Massachusetts, equipped with robotic arms, computers, and a random assortment of everyday objects like dustpans and egg whisks, researchers teach robots new tasks through teleoperation, creating what’s called demonstration data. A human might use a robotic arm to flip a pancake 300 times in an afternoon, for example.

The model processes that data overnight, and then often the robot can perform the task autonomously the next morning, TRI says. Since the demonstrations show many iterations of the same task, teleoperation creates rich, precisely labeled data that helps robots perform well in new tasks.

The trouble is, creating such data takes ages, and it’s also limited by the number of expensive robots you can afford. To create quality training data more cheaply and efficiently, Shuran Song, head of the Robotics and Embodied AI Lab at Stanford University, designed a device that can be used more nimbly with your hands and built at a fraction of the cost of a robot. Essentially a lightweight plastic gripper, it can collect data while you use it for everyday activities like cracking an egg or setting the table. The data can then be used to train robots to mimic those tasks. Using simpler devices like this could fast-track the data collection process.

Open-source efforts

Roboticists have recently alighted upon another method for getting more teleoperation data: sharing what they’ve collected with each other, thus saving them the laborious process of creating data sets alone. 

The Distributed Robot Interaction Dataset (DROID), published last month, was created by researchers at 13 institutions, including companies like Google DeepMind and top universities like Stanford and Carnegie Mellon. It contains 350 hours of data generated by humans doing tasks ranging from closing a waffle maker to cleaning up a desk. Since the data was collected using hardware that’s common in the robotics world, researchers can use it to create AI models and then test those models on equipment they already have. 

The effort builds on the success of the Open X-Embodiment Collaboration, a similar project from Google DeepMind that aggregated data on 527 skills, collected from a variety of different types of hardware. The data set helped build Google DeepMind’s RT-X model, which can turn text instructions (for example, “Move the apple to the left of the soda can”) into physical movements. 

Robotics models built on open-source data like this can be impressive, says Lerrel Pinto, a researcher who runs the General-purpose Robotics and AI Lab at New York University. But they can’t perform across a wide enough range of use cases to compete with proprietary models built by leading private companies. What is available via open source is simply not enough for labs to successfully build models at a scale that would produce the gold standard: robots that have general capabilities and can receive instructions through text, image, and video.

“The biggest limitation is the data,” he says. Only wealthy companies have enough. 

These companies’ data advantage is only getting more thoroughly cemented over time. In their pursuit of more training data, private robotics companies with large customer bases have a not-so-secret weapon: their robots themselves are perpetual data-collecting machines.

Covariant, a robotics company founded in 2017 by OpenAI researchers, deploys robots trained to identify and pick items in warehouses for companies like Crate & Barrel and Bonprix. These machines constantly collect footage, which is then sent back to Covariant. Every time the robot fails to pick up a bottle of shampoo, for example, it becomes a data point to learn from, and the model improves its shampoo-picking abilities for next time. The result is a massive, proprietary data set collected by the company’s own machines. 

This data set is part of why earlier this year Covariant was able to release a powerful foundation model, as AI models capable of a variety of uses are known. Customers can now communicate with its commercial robots much as you’d converse with a chatbot: you can ask questions, show photos, and instruct it to take a video of itself moving an item from one crate to another. These customer interactions with the model, which is called RFM-1, then produce even more data to help it improve.

Peter Chen, cofounder and CEO of Covariant, says exposing the robots to a number of different objects and environments is crucial to the model’s success. “We have robots handling apparel, pharmaceuticals, cosmetics, and fresh groceries,” he says. “It’s one of the unique strengths behind our data set.” Up next will be bringing its fleet into more sectors and even having the AI model power different types of robots, like humanoids, Chen says.

Learning from video

The scarcity of high-quality teleoperation and real-world data has led some roboticists to propose bypassing that collection method altogether. What if robots could just learn from videos of people?

Such video data is easier to produce, but unlike teleoperation data, it lacks “kinematic” data points, which plot the exact movements of a robotic arm as it moves through space. 

Researchers from the University of Washington and Nvidia have created a workaround, building a mobile app that lets people train robots using augmented reality. Users take videos of themselves completing simple tasks with their hands, like picking up a mug, and the AR program can translate the results into waypoints for the robotics software to learn from. 
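
The difference between raw video and robot-ready trajectories can be made concrete with a tiny sketch of waypoint extraction: given a sequence of 3D hand positions recovered from video frames, keep only the points where the hand has moved a meaningful distance. The function and threshold below are illustrative assumptions, not the actual app's algorithm.

```python
# Hypothetical waypoint extraction: compress a dense hand trajectory
# (one 3D point per video frame) into sparse waypoints a robot arm
# could follow. The distance-threshold rule is an illustrative choice.
def to_waypoints(trajectory, min_dist=0.045):
    """Keep a point only when the hand has moved at least min_dist
    meters since the last kept waypoint."""
    waypoints = [trajectory[0]]
    for point in trajectory[1:]:
        delta = [a - b for a, b in zip(point, waypoints[-1])]
        if sum(d * d for d in delta) ** 0.5 >= min_dist:
            waypoints.append(point)
    return waypoints

# A hand moving 1 cm per frame along the x-axis for 21 frames is
# reduced to five waypoints spaced roughly 5 cm apart.
traj = [(i * 0.01, 0.0, 0.0) for i in range(21)]
wps = to_waypoints(traj)
```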

Meta AI is pursuing a similar collection method on a larger scale through its Ego4D project, a data set of more than 3,700 hours of video taken by people around the world doing everything from laying bricks to playing basketball to kneading bread dough. The data set is broken down by task and contains thousands of annotations, which detail what’s happening in each scene, like when a weed has been removed from a garden or a piece of wood is fully sanded.

Learning from video data means that robots can encounter a much wider variety of tasks than they could if they relied solely on human teleoperation (imagine folding croissant dough with robot arms). That’s important, because just as powerful language models need complex and diverse data to learn, roboticists can create their own powerful models only if they expose robots to thousands of tasks.

To that end, some researchers are trying to wring useful insights from a vast source of abundant but low-quality data: YouTube. With thousands of hours of video uploaded every minute, there is no shortage of available content. The trouble is that most of it is pretty useless for a robot. That’s because it’s not labeled with the types of information robots need, like annotations or kinematic data. 

“You can say [to a robot], Oh, this is a person playing Frisbee with their dog,” says Chen, of Covariant, imagining a typical video that might be found on YouTube. “But it’s very difficult for you to say, Well, when this person throws a Frisbee, this is the acceleration and the rotation and that’s why it flies this way.”

Nonetheless, a few attempts have proved promising. When he was a postdoc at Stanford, AI researcher Emmett Goodman looked into how AI could be brought into the operating room to make surgeries safer and more predictable. Lack of data quickly became a roadblock. In laparoscopic surgeries, surgeons often use robotic arms to manipulate surgical tools inserted through very small incisions in the body. Those robotic arms have cameras capturing footage that can help train models, once personally identifying information has been removed from the data. In more traditional open surgeries, on the other hand, surgeons use their hands instead of robotic arms. That produces much less data to build AI models with. 

“That is the main barrier to why open-surgery AI is the slowest to develop,” he says. “How do you actually collect that data?”

To tackle that problem, Goodman trained an AI model on thousands of hours of open-surgery videos, taken by doctors with handheld or overhead cameras, that his team gathered from YouTube (with identifiable information removed). His model, as described in a paper in the medical journal JAMA in December 2023, could then identify segments of the operations from the videos. This laid the groundwork for creating useful training data, though Goodman admits that the barriers to doing so at scale, like patient privacy and informed consent, have not been overcome. 

Uncharted legal waters

Chances are that wherever roboticists turn for their new troves of training data, they’ll at some point have to wrestle with some major legal battles. 

The makers of large language models are already having to navigate questions of credit and copyright. A lawsuit filed by the New York Times alleges that ChatGPT copies the expressive style of its stories when generating text. The chief technology officer of OpenAI recently made headlines when she said the company’s video generation tool Sora was trained on publicly available data, sparking a critique from YouTube’s CEO, who said that if Sora learned from YouTube videos, it would violate the platform’s terms of service.

“It is an area where there’s a substantial amount of legal uncertainty,” says Frank Pasquale, a professor at Cornell Law School. If robotics companies want to join other AI companies in using copyrighted works in their training sets, it’s unclear whether that’s allowed under the fair-use doctrine, which permits copyrighted material to be used without permission in a narrow set of circumstances. An example often cited by tech companies and those sympathetic to their view is the 2015 case of Google Books, in which courts found that Google did not violate copyright laws in making a searchable database of millions of books. That legal precedent may tilt the scales slightly in tech companies’ favor, Pasquale says.

It’s far too soon to tell whether legal challenges will slow down the robotics rocket ship, since AI-related cases are sprawling and still undecided. But it’s safe to say that roboticists scouring YouTube or other internet video sources for training data will be wading in fairly uncharted waters.

The next era

Not every roboticist feels that data is the missing link for the next breakthrough. Some argue that if we build a good enough virtual world for robots to learn in, maybe we don’t need training data from the real world at all. Why go through the effort of training a pancake-flipping robot in a real kitchen, for example, if it could learn through a digital simulation of a Waffle House instead?

Roboticists have long used simulator programs, which digitally replicate the environments that robots navigate through, often down to details like the texture of the floorboards or the shadows cast by overhead lights. But as powerful as they are, roboticists using these programs to train machines have always had to work around the gap between simulation and reality: skills learned in a virtual environment often fail to transfer cleanly to the messier physical world, a problem known as the sim-to-real gap.

Now the gap might be shrinking. Advanced image generation techniques and faster processing are allowing simulations to look more like the real world. Nvidia, which leveraged its experience in video game graphics to build the leading robotics simulator, called Isaac Sim, announced last month that leading humanoid robotics companies like Figure and Agility are using its program to build foundation models. These companies build virtual replicas of their robots in the simulator and then unleash them to explore a range of new environments and tasks.

Deepu Talla, vice president of robotics and edge computing at Nvidia, doesn’t hold back in predicting that this way of training will nearly replace the act of training robots in the real world. It’s simply far cheaper, he says.

“It’s going to be a million to one, if not more, in terms of how much stuff is going to be done in simulation,” he says. “Because we can afford to do it.”

But even if models can solve some of the “cognitive” problems, like learning new tasks, a host of challenges remain in realizing that success in an effective and safe physical form, says Aaron Saunders, chief technology officer of Boston Dynamics. We’re a long way from building hardware that can sense different types of materials, scrub and clean, or apply a gentle amount of force.

“There’s still a massive piece of the equation around how we’re going to program robots to actually act on all that information to interact with that world,” he says.

If we solved that problem, what would the robotic future look like? We could see nimble robots that help people with physical disabilities move through their homes, autonomous drones that clean up pollution or hazardous waste, or surgical robots that make microscopic incisions, leading to operations with a reduced risk of complications. For all these optimistic visions, though, more controversial ones are already brewing. The use of AI by militaries worldwide is on the rise, and the emergence of autonomous weapons raises troubling questions.

The labs and companies poised to lead in the race for data include, at the moment, the humanoid-robot startups beloved by investors (Figure AI was recently boosted by a $675 million funding round), commercial companies with sizable fleets of robots collecting data, and drone companies buoyed by significant military investment. Meanwhile, smaller academic labs are doing more with less to create data sets that rival those available to Big Tech. 

But what’s clear to everyone I speak with is that we’re at the very beginning of the robot data race. Since the correct way forward is far from obvious, all roboticists worth their salt are pursuing any and all methods to see what sticks.

There “isn’t really a consensus” in the field, says Benjamin Burchfiel, a senior research scientist in robotics at TRI. “And that’s a healthy place to be.”

The Download: inside the US defense tech aid package, and how AI is improving vegan cheese

29 April 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan

After weeks of drawn-out congressional debate over how much the United States should spend on conflicts abroad, President Joe Biden signed a $95 billion aid package into law last week.

The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. It’s also sparked renewed calls for stronger crackdowns on Iranian-produced drones. 

James O’Donnell, our AI reporter, spoke to Andrew Metrick, a fellow with the defense program at the Center for a New American Security, a think tank, to discuss how the spending bill provides a window into US strategies around four key defense technologies with the power to reshape how today’s major conflicts are being fought. Read the full story.

This piece is part of MIT Technology Review Explains: a series delving into the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Hear more about how AI intersects with hardware

Hear first-hand from James in our latest subscribers-only Roundtables session, as he walks news editor Charlotte Jee through the latest goings-on in his beat, from rapid advances in robotics to autonomous military drones, wearable devices, and tools for AI-powered surgeries. Register now to join the discussion tomorrow at 11:30am ET.

Check out some more of James’ reporting:

+ Inside a Californian startup’s herculean efforts to bring a small slice of the chipmaking supply chain back to the US.

+ An OpenAI spinoff has built an AI model that helps robots learn tasks like humans. But can it graduate from the lab to the warehouse floor? Read the full story.

+ Watch this robot as it learns to stitch up wounds all on its own.

+ A new satellite will use Google’s AI to map methane leaks from space. It could help to form the most detailed portrait yet of methane emissions—but companies and countries will actually have to act on the data.

This creamy vegan cheese was made with AI

Most vegan cheese falls into an edible uncanny valley full of discomforting not-quite-right versions of the real thing. But machine learning is ushering in a new age of completely vegan cheese that’s much closer in taste and texture to traditional fromage.

Several startups are using AI to design plant-based foods including cheese, training algorithms on datasets of ingredients with desirable traits like flavor, scent, or stretchability. Then they use AI to comb troves of data to develop new combinations of those ingredients that perform similarly. But not everyone in the industry is bullish about AI-assisted ingredient discovery. Read the full story.

—Andrew Rosenblum

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Tesla has struck a deal to bring its self-driving tech to China 
It’ll use mapping and navigation functions from Chinese tech giant Baidu. (WSJ $)
+ Tesla is facing at least eight legal cases over the tech in the next year. (WP $)
+ It’s also struggling with a major union issue in Sweden. (Bloomberg $)
+ Baidu’s self-driving cars have been on Beijing’s streets for years. (MIT Technology Review)

2 OpenAI will train its models on a paywalled British newspaper’s articles
ChatGPT will include links to Financial Times articles in its future responses. (FT $)
+ We could run out of data to train AI language programs. (MIT Technology Review)

3 This summer could be our hottest yet
Extreme weather events are likely to be on the horizon across the globe. (Vox)
+ One of the biggest untapped resources of renewable energy? Tidal power. (Undark Magazine)
+ Here’s how much heat your body can take. (MIT Technology Review)

4 The UK institute that helped popularize effective altruism has shut down
The controversial philosophies it championed are extremely divisive. (The Guardian)
+ Inside effective altruism, where the far future counts a lot more than the present. (MIT Technology Review)

5 Human soldiers aren’t sure how to feel about their robot counterparts
Some teams get attached to their bots. Others hate them. (IEEE Spectrum)
+ Inside the messy ethics of making war with machines. (MIT Technology Review)

6 The US and China are locked in a race to build ultrafast submarines
But China’s claims that it’s made a laser breakthrough may be overblown. (Insider $)

7 Recruiters are fighting an influx of AI job applications
Tech roles are few and far between, and generative AI is making it easier to mass-apply for what’s available. (Wired $)
+ African universities aren’t preparing graduates for work in the age of AI. (Rest of World)

8 This firm uses a robotic arm to chisel marble sculptures
But it still needs a helping hand from humans. (Bloomberg $)

9 Our email accounts are modern day diaries
It’s an instantly searchable record of our lives. (NY Mag $)

10 TikTok has fallen in love with Super 8 cameras 🎥
Even though they’re prohibitively expensive. (WSJ $)
+ Gen Z is ditching smartphones in favor of simpler devices. (The Guardian)

Quote of the day

“I have little in common with people who take cold plunges and want to live forever.”

Ethan Mollick, a business school professor at the University of Pennsylvania who advises major companies and policymakers about AI, tells the Wall Street Journal that he is far from the Silicon Valley tech bro stereotype.

The big story

How big science failed to unlock the mysteries of the human brain

August 2021

In September 2011, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain.

That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury, and help answer one of the great questions of science: How does the brain bring about consciousness?

A decade on, the US project has wound down, and the EU project faces its deadline to build a digital brain. So have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? Read the full story.

—Emily Mullin

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I hope Fat Albert the polar bear is doing well.
+ Classic novels can’t please everyone—even if they’re classics for a reason.
+ Turns out we may have been mishearing Neil Armstrong’s famous first words as he set foot on the moon.
+ Hang onto those DVDs, you never know when Netflix is going to fail you. 📀

Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan

26 April 2024 at 09:55

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

After weeks of drawn-out congressional debate over how much the United States should spend on conflicts abroad, President Joe Biden signed a $95.3 billion aid package into law on Wednesday.

The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. It’s also sparked renewed calls for stronger crackdowns on Iranian-produced drones. 

Though much of the money will go toward replenishing fairly standard munitions and supplies, the spending bill provides a window into US strategies around four key defense technologies that continue to reshape how today’s major conflicts are being fought.

For a closer look at the military technology at the center of the aid package, I spoke with Andrew Metrick, a fellow with the defense program at the Center for a New American Security, a think tank.

Ukraine and the role of long-range missiles

Ukraine has long sought the Army Tactical Missile System (ATACMS), a long-range ballistic missile made by Lockheed Martin. First used in combat during Operation Desert Storm in Iraq in 1991, it’s 13 feet long, two feet wide, and over 3,600 pounds. It can use GPS to accurately hit targets 190 miles away.

Last year, President Biden was apprehensive about sending such missiles to Ukraine, as US stockpiles of the weapons were relatively low. In October, the administration changed tack. The US sent shipments of ATACMS, a move celebrated by President Volodymyr Zelensky of Ukraine, but they came with restrictions: the missiles were older models with a shorter range, and Ukraine was instructed not to fire them into Russian territory, only Ukrainian territory. 

This week, just hours before the new aid package was signed, multiple news outlets reported that the US had secretly sent more powerful long-range ATACMS to Ukraine several weeks before. They were used on Tuesday, April 23, to target a Russian airfield in Crimea and Russian troops in Berdiansk, 50 miles southwest of Mariupol.

The long range of the weapons has proved essential for Ukraine, says Metrick. “It allows the Ukrainians to strike Russian targets at ranges for which they have very few other options,” he says. That means being able to hit locations like supply depots, command centers, and airfields behind Russia’s front lines in Ukraine. This capacity has grown more important as Ukraine’s troop numbers have waned, Metrick says.

Replenishing Israel’s Iron Dome

On April 13, Iran launched its first-ever direct attack on Israeli soil. In the attack, which Iran says was retaliation for Israel’s airstrike on its embassy in Syria, hundreds of missiles were lobbed into Israeli airspace. Many of them were neutralized by the web of cutting-edge missile launchers dispersed throughout Israel that can automatically detonate incoming strikes before they hit land. 

One of those systems is Israel’s Iron Dome, in which radar systems detect projectiles and then signal units to launch defensive missiles that detonate the target high in the sky before it strikes populated areas. Israel’s other system, called David’s Sling, works in a similar way but can identify rockets coming from a greater distance, upwards of 180 miles.

Both systems are hugely costly to research and build, and the new US aid package allocates $4 billion to replenish their missile stockpiles. The missiles can cost anywhere from $100,000 to $10 million each, and a system like Iron Dome might fire them daily during intense periods of conflict.

The aid comes as funding for Israel has grown more contentious amid the dire conditions faced by displaced Palestinians in Gaza. While the spending bill worked its way through Congress, increasing numbers of Democrats sought to put conditions on the military aid to Israel, particularly after an Israeli air strike on April 1 killed seven aid workers from World Central Kitchen, an international food charity. The funding package does provide $9 billion in humanitarian assistance for the conflict, but the efforts to impose conditions for Israeli military aid failed. 

Taiwan and underwater defenses against China

A rising concern for the US defense community—and a subject of “wargaming” simulations that Metrick has carried out—is an amphibious invasion of Taiwan from China. The rising risk of that scenario has driven the US to build and deploy larger numbers of advanced submarines, Metrick says. A bigger fleet of these submarines would be more likely to keep attacks from China at bay, thereby protecting Taiwan.

The trouble is that the US shipbuilding effort, experts say, is too slow. It’s been hampered by budget cuts and labor shortages, but the new aid bill aims to jump-start it. It will provide $3.3 billion to do so, specifically for the production of Columbia-class submarines, which carry nuclear weapons, and Virginia-class submarines, which carry conventional weapons. 

Though these funds aim to support Taiwan by building up the US supply of submarines, the package also includes more direct support, like $2 billion to help it purchase weapons and defense equipment from the US. 

The US’s Iranian drone problem 

Shahed drones are used almost daily on the Russia-Ukraine battlefield, and Iran launched more than 100 against Israel earlier this month. Produced by Iran and resembling model planes, the drones are fast, cheap, and lightweight, capable of being launched from the back of a pickup truck. They’re used frequently for potent one-way attacks, where they detonate upon reaching their target. US experts say the technology is tipping the scales toward Russian and Iranian military groups and their allies. 

The trouble with combating them is partly one of cost. Shooting down the drones, which can be bought for as little as $40,000, can cost millions in ammunition.

“Shooting down Shaheds with an expensive missile is not, in the long term, a winning proposition,” Metrick says. “That’s what the Iranians, I think, are banking on. They can wear people down.”

This week’s aid package renewed White House calls for stronger sanctions aimed at curbing production of the drones. The United Nations previously passed rules restricting any drone-related material from entering or leaving Iran, but those expired in October. The US now wants them reinstated. 

Even if that happens, it’s unlikely the rules would do much to contain the Shahed’s dominance. The components of the drones are not all that complex or hard to obtain to begin with, but experts also say that Iran has built a sprawling global supply chain to acquire the materials needed to manufacture them and has worked with Russia to build factories. 

“Sanctions regimes are pretty dang leaky,” Metrick says. “They [Iran] have friends all around the world.”

The Download: how to tell when a chatbot is lying, and RIP my biotech plants

26 April 2024 at 08:10

Chatbot answers are all made up. This new tool helps you figure out which ones to trust.

The news: Large language models are famous for their ability to make things up—in fact, it’s what they’re best at. But their inability to tell fact from fiction has left many businesses wondering if using them is worth the risk. A new tool created by Cleanlab, an AI startup spun out of a quantum computing lab at MIT, is designed to give high-stakes users a clearer sense of how trustworthy these models really are. 

How it works: The Trustworthy Language Model gives any output generated by a large language model a score between 0 and 1, according to its reliability. This lets people choose which responses to trust and which to throw out. In other words: a BS-o-meter for chatbots.

Why it matters: Cleanlab hopes that its tool will make large language models more attractive to businesses worried about how much stuff they invent. But while the approach could be useful, it’s unlikely to be perfect. Read the full story.

—Will Douglas Heaven
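
The 0-to-1 score described above can be illustrated with a toy version of self-consistency checking: sample the model several times and measure how often its answers agree. This sketches the general idea only; it is not Cleanlab’s actual scoring method, and the `consistency_score` function here is hypothetical.

```python
from collections import Counter

# Toy trustworthiness score: the fraction of sampled answers that
# match the most common answer. A model that gives the same answer
# every time scores 1.0; one that flip-flops scores lower.
def consistency_score(sampled_answers):
    counts = Counter(a.strip().lower() for a in sampled_answers)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(sampled_answers)

print(consistency_score(["Paris", "Paris", "paris", "Paris"]))  # 1.0
print(consistency_score(["1912", "1915", "1912", "1921"]))      # 0.5
```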

My biotech plants are dead

Antonio Regalado, MIT Technology Review’s senior biotech editor

Six weeks ago, I pre-ordered the “Firefly Petunia,” a houseplant engineered with genes from bioluminescent fungi so that it glows in the dark. 

After years of writing about anti-GMO sentiment in the US and elsewhere, I felt it was time to have some fun with biotech. These plants are among the first direct-to-consumer GM organisms you can buy, and they certainly seem like the coolest.

But when I unboxed my two petunias this week, they were in bad shape, with rotted leaves. And in a day, they were dead crisps. My first attempt to do biotech at home is a total bust, and it cost me $84, shipping included. But, although my petunias have perished, others are having success right out of the box. Read the full story.

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 ByteDance insists it won’t sell its US TikTok business
It claims that reports it plans to sell the platform without its recommendation algorithm are untrue. (WSJ $)
+ In fact, it seems like ByteDance is doubling down on its ownership. (FT $)
+ The ban is extremely unpopular among prospective young voters. (Vox)

2 Big Tech needs to work out how to make money from AI
They’ve optimistically sunk billions into systems that aren’t yet money makers. (WP $)
+ But Google and Microsoft claim they’ve already figured out how to cash in. (Wired $)
+ Prominent tech leaders have joined the US government’s AI advisory board. (WSJ $)

3 China controls nearly all of the world’s EV graphite supply
Which makes it virtually impossible for automakers to qualify for US EV subsidies, according to South Korea. (FT $)
+ Singapore’s push into EVs isn’t resonating with car owners. (Rest of World)
+ How one mine could unlock billions in EV subsidies. (MIT Technology Review)

4 A Baltimore high school teacher created an audio deepfake to smear his boss
The fake clip of the school’s principal contained racist and antisemitic comments. (NYT $)
+ The teacher has been arrested. (NBC News)

5 The first personalized mRNA vaccine for melanoma is being trialed in the UK
Hundreds of patients will receive the vaccine in a bid to combat the cancer. (The Guardian)
+ The next generation of mRNA vaccines is on its way. (MIT Technology Review)

6 We could be closer than ever to curbing climate change
Clean energy sources are on the rise, and efficiency is growing. (Vox)
+ Want less mining? Switch to clean energy. (MIT Technology Review)

7 Russia vetoed a UN resolution on nuclear weapons in space
While China abstained from the vote. (Ars Technica)
+ How to fight a war in space (and get away with it) (MIT Technology Review)

8 Spyware developers could be barred from entering the US
The State Department wants to impose visa restrictions on them. (The Verge)

9 LinkedIn is full of weird AI images now
The junky pictures that first went viral on Facebook are seeping into the professional network. (404 Media)
+ LinkedIn is also home to a new wave of ghostwriters. (Insider $)

10 No Airbnb? No problem
New Yorkers are coming up with innovative ways to get around a crackdown. (The Guardian)

Quote of the day

“It’s a little corner of happy in a really, really tough world right now.”

—Kristie Carnevale, a BookTok creator, explains to the Washington Post why she’s so upset at the prospect of the US government banning TikTok.

The big story

Eight ways scientists are unwrapping the mysteries of the human brain

August 2021

There is no greater scientific mystery than the brain. It’s made mostly of water; much of the rest is largely fat. Yet this roughly three-pound blob of material produces our thoughts, memories, and emotions. It governs how we interact with the world, and it runs our body.

Increasingly, scientists are beginning to unravel the complexities of how it works and understand how the 86 billion neurons in the human brain form the connections that produce ideas and feelings, as well as the ability to communicate and react. 

Here’s our whistle-stop tour of some of the most cutting-edge research—and why it’s important. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Watch out, watch out, there’s aquatic spiders about. 🕷
+ I don’t know who needs to hear this, but your air fryer is a scam.
+ Insects are important. Here’s how to create a little haven for them, if you’re lucky enough to have a garden.
+ Check out these top tips for keeping your computer running as smoothly as possible.

My biotech plants are dead

26 April 2024 at 06:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Six weeks ago, I pre-ordered the “Firefly Petunia,” a houseplant engineered with genes from bioluminescent fungi so that it glows in the dark. 

After years of writing about anti-GMO sentiment in the US and elsewhere, I felt it was time to have some fun with biotech. These plants are among the first direct-to-consumer GM organisms you can buy, and they certainly seem like the coolest.

But when I unboxed my two petunias this week, they were in bad shape, with rotted leaves. And in a day, they were dead crisps. My first attempt to do biotech at home is a total bust, and it cost me $84, shipping included.

My plants did arrive in a handsome black box with neon lettering that alerted me to the living creature within. The petunias, about five inches tall, were each encased in a see-through plastic pod to keep them upright. Government warnings on the back of the box assured me they were free of Japanese beetles, sweet potato weevils, the snail Helix aspersa, and gypsy moths.

The problem was when I opened the box. As it turns out, I left for a week’s vacation in Florida the same day that Light Bio, the startup selling the petunia, sent me an email saying “Glowing plants headed your way,” with a UPS tracking number. I didn’t see the email, and even if I had, I wasn’t there to receive them. 

That meant my petunias sat in darkness for seven days. The box became their final sarcophagus.

My fault? Perhaps. But I had no idea when Light Bio would ship my order. And others have had similar experiences. Mat Honan, the editor in chief of MIT Technology Review, told me his petunia arrived the day his family flew to Japan. Luckily, a house sitter feeding his lizard eventually opened the box, and Mat reports the plant is still clinging to life in his yard.

One of the ill-fated petunia plants and its sarcophagus, whose packaging reads “The plant you will love the most. www.light.bio” Credit: Antonio Regalado

But what about the glow? How strong is it? 

Mat says so far, he doesn’t notice any light coming from the plant, even after carrying it into a pitch-dark bathroom. But buyers may have to wait a bit to see anything. It’s the flowers that glow most brightly, and you may need to tend your petunia for a couple of weeks before you get blooms and see the mysterious effect.  

“I had two flowers when I opened mine, but sadly they dropped and I haven’t got to see the brightness yet. Hoping they will bloom again soon,” says Kelsey Wood, a postdoctoral researcher at the University of California, Davis. 

She would like to use the plants in classes she teaches at the university. “It’s been a dream of synthetic biologists for so many years to make a bioluminescent plant,” she says. “But they couldn’t get it bright enough to see with the naked eye.”

Others are having success right out of the box. That’s the case with Tharin White, publisher of EYNTK.info, a website about theme parks. “It had a lot of protection around it and a booklet to explain what you needed to do to help it,” says White. “The glow is strong, if you are [in] total darkness. Just being in a dark room, you can’t really see it. That being said, I didn’t expect a crazy glow, so [it] meets my expectations.”

That’s no small recommendation coming from White, who has been a “cast member” at Disney parks and an operator of the park’s Avatar ride, named after the movie whose action takes place on a planet where the flora glows. “I feel we are leaps closer to Pandora—The World of Avatar being reality,” White posted to his X account.

Chronobiologist Brian Hodge also found success by resettling his petunia immediately into a larger eight-inch pot, giving it flower food and a good soaking, and putting it in the sunlight. “After a week or so it really started growing fast, and the buds started to show up around day 10. Their glow is about what I expected. It is nothing like a neon light but more of a soft gentle glow,” says Hodge, a staff scientist at the University of California, San Francisco.

In his daily work, Hodge has handled bioluminescent beings before—bacteria mostly—and says he always needed photomultiplier tubes to see anything. “My experience with bioluminescent cells is that the light they would produce was pretty hard to see with the naked eye,” he says. “So I was happy with the amount of light I was seeing from the plants. You really need to turn off all the lights for them to really pop out at you.”

Hodge posted a nifty snapshot of his petunia, but only after setting his iPhone for a two-second exposure.

Light Bio’s CEO Keith Wood didn’t respond to an email about how my plants died, but in an interview last month he told me sales of the biotech plant had been “viral” and that the company would probably run out of its initial supply. To generate new ones, it hires commercial greenhouses to place clippings in water, where they’ll sprout new roots after a couple of weeks. According to Wood, the plant is “a rare example where the benefits of GM technology are easily recognized and experienced by the public.”

Hodge says he got interested in the plants after reading an article about combating light pollution by using bioluminescent flora instead of streetlamps. As a biologist who studies how day and night affect life, he’s worried that city lights and computer screens are messing with natural cycles.

“I just couldn’t pass up being one of the first to own one,” says Hodge. “Once you flip the lights off, the glow is really beautiful … and it sorta feels like you are witnessing something out of a futuristic sci-fi movie!” 

It makes me tempted to try again. 


Now read the rest of The Checkup

From the archives 

We’re not sure if rows of glowing plants can ever replace streetlights, but there’s no doubt light pollution is growing. Artificial light emissions on Earth grew by about 50% between 1992 and 2017—and as much as 400% in some regions. That’s according to Shel Evergreen, in his story on the switch to bright LED streetlights.

It’s taken a while for scientists to figure out how to make plants glow brightly enough to interest consumers. In 2016, I looked at a failed Kickstarter that promised glow-in-the-dark roses but couldn’t deliver.  

Another thing 

Cassandra Willyard is updating us on the case of Lisa Pisano, a 54-year-old woman who is feeling “fantastic” two weeks after surgeons gave her a kidney from a genetically modified pig. It’s the latest in a series of extraordinary animal-to-human organ transplants—a technology, known as xenotransplantation, that may end the organ shortage.

From around the web

Taiwan’s government is considering steps to ease restrictions on the use of IVF. The country has an ultra-low birth rate, but it bans surrogacy, limiting options for male couples. One Taiwanese pair spent $160,000 to have a child in the United States.  (CNN)

Communities in Appalachia are starting to get settlement payments from synthetic-opioid makers like Johnson & Johnson, which along with other drug vendors will pay out $50 billion over several years. But the money, spread over thousands of jurisdictions, is “a feeble match for the scale of the problem.” (Wall Street Journal)

A startup called Climax Foods claims it has used artificial intelligence to formulate vegan cheese that tastes “smooth, rich, and velvety,” according to writer Andrew Rosenblum. He relates the results of his taste test in the new “Build” issue of MIT Technology Review. But one expert Rosenblum spoke to warns that computer-generated cheese is “significantly” overhyped.

AI hype continued this week in medicine when a startup claimed it has used “generative AI” to quickly discover new versions of CRISPR, the powerful gene-editing tool. But new gene-editing tricks won’t conquer the main obstacle, which is how to deliver these molecules where they’re needed in the bodies of patients. (New York Times)

Chatbot answers are all made up. This new tool helps you figure out which ones to trust.

25 April 2024 at 08:59

Large language models are famous for their ability to make things up—in fact, it’s what they’re best at. But their inability to tell fact from fiction has left many businesses wondering if using them is worth the risk.

A new tool created by Cleanlab, an AI startup spun out of a quantum computing lab at MIT, is designed to give high-stakes users a clearer sense of how trustworthy these models really are. Called the Trustworthy Language Model, it gives any output generated by a large language model a score between 0 and 1, according to its reliability. This lets people choose which responses to trust and which to throw out. In other words: a BS-o-meter for chatbots.

Cleanlab hopes that its tool will make large language models more attractive to businesses worried about how much stuff they invent. “I think people know LLMs will change the world, but they’ve just got hung up on the damn hallucinations,” says Cleanlab CEO Curtis Northcutt.

Chatbots are quickly becoming the dominant way people look up information on a computer. Search engines are being redesigned around the technology. Office software used by billions of people every day to create everything from school assignments to marketing copy to financial reports now comes with chatbots built in. And yet a study put out in November by Vectara, a startup founded by former Google employees, found that chatbots invent information at least 3% of the time. It might not sound like much, but it’s an error rate most businesses won’t stomach.

Cleanlab’s tool is already being used by a handful of companies, including Berkeley Research Group, a UK-based consultancy specializing in corporate disputes and investigations. Steven Gawthorpe, associate director at Berkeley Research Group, says the Trustworthy Language Model is the first viable solution to the hallucination problem that he has seen: “Cleanlab’s TLM gives us the power of thousands of data scientists.”

In 2021, Cleanlab developed technology that discovered errors in 10 popular data sets used to train machine-learning algorithms; it works by measuring the differences in output across a range of models trained on that data. That tech is now used by several large companies, including Google, Tesla, and the banking giant Chase. The Trustworthy Language Model takes the same basic idea—that disagreements between models can be used to measure the trustworthiness of the overall system—and applies it to chatbots.

In a demo Cleanlab gave to MIT Technology Review last week, Northcutt typed a simple question into ChatGPT: “How many times does the letter ‘n’ appear in ‘enter’?” ChatGPT answered: “The letter ‘n’ appears once in the word ‘enter.’” That correct answer promotes trust. But ask the question a few more times and ChatGPT answers: “The letter ‘n’ appears twice in the word ‘enter.’”

“Not only does it often get it wrong, but it’s also random, you never know what it’s going to output,” says Northcutt. “Why the hell can’t it just tell you that it outputs different answers all the time?”

Cleanlab’s aim is to make that randomness more explicit. Northcutt asks the Trustworthy Language Model the same question. “The letter ‘n’ appears once in the word ‘enter,’” it says—and scores its answer 0.63. Six out of 10 is not a great score, suggesting that the chatbot’s answer to this question should not be trusted.

It’s a basic example, but it makes the point. Without the score, you might think the chatbot knew what it was talking about, says Northcutt. The problem is that data scientists testing large language models in high-risk situations could be misled by a few correct answers and assume that future answers will be correct too: “They try things out, they try a few examples, and they think this works. And then they do things that result in really bad business decisions.”

The Trustworthy Language Model draws on multiple techniques to calculate its scores. First, each query submitted to the tool is sent to one or more large language models. The tech will work with any model, says Northcutt, including closed-source models like OpenAI’s GPT series, the models behind ChatGPT, and open-source models like DBRX, developed by San Francisco-based AI firm Databricks. If the responses from each of these models are the same or similar, it will contribute to a higher score.

At the same time, the Trustworthy Language Model also sends variations of the original query to each of the models, swapping in words that have the same meaning. Again, if the responses to synonymous queries are similar, it will contribute to a higher score. “We mess with them in different ways to get different outputs and see if they agree,” says Northcutt.

The tool can also get multiple models to bounce responses off one another: “It’s like, ‘Here’s my answer—what do you think?’ ‘Well, here’s mine—what do you think?’ And you let them talk.” These interactions are monitored and measured and fed into the score as well.
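The mechanics described above—querying multiple models, paraphrasing the prompt, and rewarding agreement—can be sketched in a few lines. This is an illustrative toy only, in the spirit of the approach the article describes: the function names, the string-similarity metric, and the pairwise-averaging scheme are all assumptions, not Cleanlab’s actual API or algorithm.

```python
# Toy agreement-based "trust score": collect answers from repeated,
# paraphrased, or multi-model queries, then score how much they agree.
# All names and the scoring scheme here are illustrative assumptions.
from difflib import SequenceMatcher
from statistics import mean

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two answers, from 0 to 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def trust_score(answers: list[str]) -> float:
    """Average pairwise similarity: consistent answers score near 1,
    contradictory ones score lower."""
    if len(answers) < 2:
        return 0.0
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return mean(similarity(a, b) for a, b in pairs)

# Consistent answers should outscore conflicting ones.
consistent = ["The letter 'n' appears once in 'enter'."] * 3
conflicting = ["It appears once.",
               "The letter 'n' appears twice in the word 'enter'."]
assert trust_score(consistent) > trust_score(conflicting)
```

A caller could then apply a threshold (say, discard anything below 0.7) to decide which responses to trust—mirroring how a 0.63 in the demo flags an answer as unreliable.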

Nick McKenna, a computer scientist at Microsoft Research in Cambridge, UK, who works on large language models for code generation, is optimistic that the approach could be useful. But he doubts it will be perfect. “One of the pitfalls we see in model hallucinations is that they can creep in very subtly,” he says.

In a range of tests across different large language models, Cleanlab shows that its trustworthiness scores correlate well with the accuracy of those models’ responses. In other words, scores close to 1 line up with correct responses, and scores close to 0 line up with incorrect ones. In another test, they also found that using the Trustworthy Language Model with GPT-4 produced more reliable responses than using GPT-4 by itself.

Large language models generate text by predicting the most likely next word in a sequence. In future versions of its tool, Cleanlab plans to make its scores even more accurate by drawing on the probabilities that a model used to make those predictions. It also wants to access the numerical values that models assign to each word in their vocabulary, which they use to calculate those probabilities. This level of detail is provided by certain platforms, such as Amazon’s Bedrock, that businesses can use to run large language models.

Cleanlab has tested its approach on data provided by Berkeley Research Group. The firm needed to search for references to health-care compliance problems in tens of thousands of corporate documents. Doing this by hand can take skilled staff weeks. By checking the documents using the Trustworthy Language Model, Berkeley Research Group was able to see which documents the chatbot was least confident about and check only those. It reduced the workload by around 80%, says Northcutt.

In another test, Cleanlab worked with a large bank (Northcutt would not name it but says it is a competitor to Goldman Sachs). Similar to Berkeley Research Group, the bank needed to search for references to insurance claims in around 100,000 documents. Again, the Trustworthy Language Model reduced the number of documents that needed to be hand-checked by more than half.

Running each query multiple times through multiple models takes longer and costs a lot more than the typical back-and-forth with a single chatbot. But Cleanlab is pitching the Trustworthy Language Model as a premium service to automate high-stakes tasks that would have been off limits to large language models in the past. The idea is not for it to replace existing chatbots but to do the work of human experts. If the tool can slash the amount of time that you need to employ skilled economists or lawyers at $2,000 an hour, the costs will be worth it, says Northcutt.

In the long run, Northcutt hopes that by reducing the uncertainty around chatbots’ responses, his tech will unlock the promise of large language models to a wider range of users. “The hallucination thing is not a large-language-model problem,” he says. “It’s an uncertainty problem.”

Correction: This article has been updated to clarify that the Trustworthy Language Model works with a range of different large language models.

The Download: hyperrealistic deepfakes, and clean energy’s implications for mining

25 April 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary

Until now, AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality.

For the past several years, AI video startup Synthesia has produced these kinds of AI-generated avatars. But today it launches a new generation, its first to take advantage of the latest advancements in generative AI, and they are more realistic and expressive than anything we’ve seen before.

While today’s release means almost anyone will now be able to make a digital double, before the technology went public, Synthesia agreed to make one of Melissa Heikkilä, our senior AI reporter.

This technological progress signals a much larger shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. And this threatens our trust in everything we see, which could have very dangerous consequences. Read the full story and check out the synthetic version of Melissa.

Want less mining? Switch to clean energy.

Political fights over mining and minerals are heating up, and there are growing concerns about how to source the materials the world needs to build new energy technologies. 

But low-emissions energy sources, including wind, solar, and nuclear power, have a smaller mining footprint than coal and natural gas, according to a new report from the Breakthrough Institute released today.

The report’s findings add to a growing body of evidence that technologies used to address climate change will likely lead to a future with less mining than a world powered by fossil fuels. Read the full story.

—Casey Crownhart

In the climate world, hydrogen is perhaps the ultimate multi-tool. It can be used in fuel cells or combustion engines and is sometimes called the Swiss Army knife for cleaning up emissions. But the reality today is that hydrogen is much more of a climate problem than a solution. To find out why, check out the latest edition of The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

A new kind of gene-edited pig kidney was just transplanted into a person

The news: A month ago, Richard Slayman became the first living person to receive a kidney transplant from a gene-edited pig. Now, a team of researchers from NYU Langone Health reports that Lisa Pisano, a 54-year-old woman from New Jersey, has become the second.

Why it matters: Pisano’s new kidney came from pigs that carry just a single genetic alteration—to eliminate a specific sugar called alpha-gal, which can trigger immediate organ rejection. In the coming weeks, doctors will be monitoring Pisano closely for signs of organ rejection. If it’s successful, researchers hope the approach could make scaling up the production of pig organs simpler. Read the full story.

—Cassandra Willyard

Almost every Chinese keyboard app has a security flaw that reveals what users type

In a nutshell: Almost all keyboard apps used by Chinese people around the world share a security loophole that makes it possible to spy on what users are typing.

Why it’s a big deal:
The vulnerability, which allows the keystroke data that these apps send to the cloud to be intercepted, has existed for years and could have been exploited by cybercriminals and state surveillance groups, according to researchers at the Citizen Lab, a technology and security research lab affiliated with the University of Toronto. Read the full story.

—Zeyi Yang

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta’s AI push is only just beginning
The company plans to sink $40 billion into its AI projects this year alone—but it hasn’t worked out how to make money from them yet. (Insider $)
+ The news didn’t go down well with Meta’s investors. (The Information $)
+ Mark Zuckerberg isn’t ready to give up on the metaverse just yet. (FT $)

2 US chipmaker Micron has been given a major boost
To the tune of $13.6 billion in government funding. (FT $)
+ It could be several months before the money arrives, though. (Bloomberg $)

3 A nuclear fusion experiment has overcome two major barriers
But we don’t know if the operative ‘sweet spot’ it identified could be replicated in larger reactors. (New Scientist $)
+ The next generation of nuclear reactors is getting more advanced. (MIT Technology Review)

4 The US wants Binance’s founder to spend three years in prison
However, lawyers for Changpeng Zhao argue he shouldn’t go to prison at all. (CoinDesk)
+ The cryptocurrency exchange is attempting to distance itself from its former CEO. (NYT $)

5 Nvidia is gobbling up promising-looking startups
It’s in the company’s interests to reduce the high costs of running AI models. (The Information $)

6 In Saudi Arabia, AI is the new oil
And US tech giants are scrambling to get involved. (NYT $)

7 The Earth is rotating more slowly than it used to
You can blame climate change for the gradual slowdown. (Economist $)
+ Three climate technologies breaking through in 2024. (MIT Technology Review)

8 These men are repatriating colonial artifacts in audacious digital heists
Their work raises urgent questions about cultural ownership and appropriation. (The Guardian)
+ AI is bringing the internet to submerged Roman ruins. (MIT Technology Review)

9 Robocalls are one of life’s nuisances
David Frankel has spent an impressive 12 years trying to stop them. (IEEE Spectrum)
+ Call centers’ days could be numbered, thanks to the rise of AI. (FT $)

10 Seaweed could be a rich resource of precious minerals 
A new project is hoping to get some answers. (Hakai Magazine)

Quote of the day

“No patient should be a guinea pig, and no nurse should be replaced by a robot.”

—Cathy Kennedy, co-president of the California Nurses Association, criticizes the creep of AI into healthcare without safeguards, 404 Media reports.

The big story

The rise of the tech ethics congregation

August 2023

Just before Christmas last year, a pastor preached a gospel of morals over money to several hundred members of his flock. But the leader in question was not an ordained minister, nor even a religious man.

David Ryan Polgar, 44, is the founder of All Tech Is Human, a nonprofit organization devoted to promoting ethics and responsibility in tech. His congregation is undergoing dramatic growth in an age when the life of the spirit often struggles to compete with cold, hard capitalism.

Its leaders believe there are large numbers of individuals in and around the technology world, often from marginalized backgrounds, who wish tech focused less on profits and more on being a force for ethics and justice. But attempts to stay above the fray can cause more problems than they solve. Read the full story.

—Greg M. Epstein

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Roger Federer sounds like an all-round nice guy.
+ This list of the year’s most anticipated tours is making me excited for summer.
+ If only all cakes were this artistic.
+ What are you waiting for—now’s the time to make reservations at the world’s hottest new restaurants.

Want less mining? Switch to clean energy.

25 April 2024 at 07:00

Political fights over mining and minerals are heating up, and there are growing environmental and sociological concerns about how to source the materials the world needs to build new energy technologies. 

But low-emissions energy sources, including wind, solar, and nuclear power, have a smaller mining footprint than coal and natural gas, according to a new report from the Breakthrough Institute released today.

The report’s findings add to a growing body of evidence that technologies used to address climate change will likely lead to a future with less mining than a world powered by fossil fuels. However, experts point out that oversight will be necessary to minimize harm from the mining needed to transition to lower-emission energy sources. 

“In many ways, we talk so much about the mining of clean energy technologies, and we forget about the dirtiness of our current system,” says Seaver Wang, an author of the report and co-director of Climate and Energy at the Breakthrough Institute, an environmental research center.  

In the new analysis, Wang and his colleagues considered the total mining footprint of different energy technologies, including the amount of material needed for these energy sources and the total amount of rock that needs to be moved to extract that material.

Many minerals appear in small concentrations in source rock, so the process of extracting them has a large footprint relative to the amount of final product. A mining operation would need to move about seven kilograms of rock to get one kilogram of aluminum, for instance. For copper, the ratio is much higher, at over 500 to one. Taking these ratios into account allows for a more direct comparison of the total mining required for different energy sources. 
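The rock-moved adjustment works as simple multiplication: mass of each material needed, times its ratio of rock moved per unit of finished material. The ratios below (roughly 7:1 for aluminum, over 500:1 for copper) come from the article; the per-gigawatt-hour bill of materials is a made-up placeholder, not a figure from the Breakthrough Institute report.

```python
# Minimal sketch of the "total rock moved" adjustment described above.
# Strip ratios are from the article; demand figures are placeholders.
ROCK_PER_KG = {"aluminum": 7.0, "copper": 500.0}  # kg rock per kg metal

def rock_moved(material_kg: dict[str, float]) -> float:
    """Total kg of rock moved to supply the given masses of each material."""
    return sum(ROCK_PER_KG[m] * kg for m, kg in material_kg.items())

# Hypothetical material demand for 1 GWh of generation (placeholder numbers):
demand = {"aluminum": 100.0, "copper": 10.0}
print(rock_moved(demand))  # 100*7 + 10*500 = 5700.0 kg of rock
```

The point of the exercise is that a technology using modest amounts of a low-concentration metal like copper can still move more rock than one using far more of an abundant one—which is why the ratios, not just the raw tonnages, drive the comparison.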

With this adjustment, it becomes clear that the energy source with the highest mining burden is coal. Generating one gigawatt-hour of electricity with coal requires 20 times the mining footprint of generating the same electricity with low-carbon power sources like wind and solar. Producing the same electricity with natural gas requires moving about twice as much rock.

Tallying up the amount of rock moved is an imperfect approximation of the potential environmental and sociological impact of mining related to different technologies, Wang says, but the report’s results allow researchers to draw some broad conclusions. One is that we’re on track for less mining in the future. 

Other researchers have projected a decrease in mining accompanying a move to low-emissions energy sources. “We mine so many fossil fuels today that the sum of mining activities decreases even when we assume an incredibly rapid expansion of clean energy technologies,” Joey Nijnens, a consultant at Monitor Deloitte and author of another recent study on mining demand, said in an email.

That being said, potentially moving less rock around in the future “hardly means that society shouldn’t look for further opportunities to reduce mining impacts throughout the energy transition,” Wang says.

There’s already been progress in cutting down on the material required for technologies like wind and solar. Solar modules have gotten more efficient, so the same amount of material can yield more electricity generation. Recycling can help further cut material demand in the future, and it will be especially crucial to reduce the mining needed to build batteries.  

Resource extraction may decrease overall, but it’s also likely to increase in some places as our demands change, researchers pointed out in a 2021 study. Between 32% and 40% of the mining increase in the future could occur in countries with weak, poor, or failing resource governance, where mining is more likely to harm the environment and may fail to benefit people living near the mining projects. 

“We need to ensure that the energy transition is accompanied by responsible mining that benefits local communities,” Takuma Watari, a researcher at the National Institute for Environmental Studies and an author of the study, said via email. Otherwise, the shift to lower-emissions energy sources could lead to a reduction of carbon emissions in the Global North “at the expense of increasing socio-environmental risks in local mining areas, often in the Global South.” 

Strong oversight and accountability are crucial to make sure that we can source minerals in a responsible way, Wang says: “We want a rapid energy transition, but we also want an energy transition that’s equitable.”

Hydrogen could be used for nearly everything. It probably shouldn’t be. 

25 April 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

From toaster ovens that work as air fryers to hair dryers that can also curl your hair, single tools that do multiple jobs have an undeniable appeal. 

In the climate world, hydrogen is perhaps the ultimate multi-tool. It can be used in fuel cells or combustion engines and is sometimes called the Swiss Army knife for cleaning up emissions. I’ve written about efforts to use hydrogen in steelmaking, cars, and aviation, just to name a few. And a new story for our latest print issue explores the potential of hydrogen trains. 

Hydrogen might be a million tools in one, but some experts argue that it can’t do it all, and some uses could actually be distractions from real progress on emissions. So let’s dig into where we might see hydrogen used and where it might make the biggest emissions cuts. 

Hydrogen could play a role in cleaning up nearly every sector of the economy—in theory. The reality today is that hydrogen is much more of a climate problem than a solution.

Most hydrogen is used in oil refining, chemical production, and heavy industry, and it is almost exclusively generated using fossil fuels. In total, hydrogen production and use accounted for around 900 million metric tons of carbon dioxide emissions in 2022.

There are technologies on the table to clean up hydrogen production. But global hydrogen demand hit 95 million metric tons in 2022, and only about 0.7% of that was met with low-emissions hydrogen. (For more on various hydrogen sources and why the details matter, check out this newsletter from last year.) 

Transforming the global hydrogen economy won’t be fast or cheap, but it is happening. Annual production of low-emissions hydrogen is on track to hit 38 million metric tons by 2030, according to the International Energy Agency. The pipeline of new projects is growing quickly, but so is hydrogen demand, which could hit 150 million metric tons by the end of the decade. 
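To put those IEA figures in perspective, here's a quick back-of-the-envelope calculation using only the numbers quoted above:

```python
# Supply vs. demand figures quoted above (million metric tons of hydrogen).
demand_2022 = 95.0
low_emissions_share_2022 = 0.007  # only about 0.7% of 2022 demand was low-emissions

low_emissions_2030 = 38.0  # IEA: annual low-emissions production on track for 2030
demand_2030 = 150.0        # demand could hit this by the end of the decade

low_emissions_2022 = demand_2022 * low_emissions_share_2022
share_2030 = low_emissions_2030 / demand_2030

print(f"Low-emissions hydrogen in 2022: roughly {low_emissions_2022:.1f} Mt")
print(f"Projected low-emissions share of 2030 demand: {share_2030:.0%}")
```

Even on the optimistic trajectory, low-emissions hydrogen would cover only about a quarter of projected demand by 2030.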

Basically every time I report on hydrogen, whether in transportation or energy or industry, experts tell me it’s crucial to be smart about where that low-emissions hydrogen is going. There are, of course, disagreements about what exactly the order of priorities should be, but I’ve seen a few patterns.

First, the focus should probably be on cleaning up production of the hydrogen we’re already using for things like fertilizer. “The main thing is replacing existing uses,” as Geert de Cock, electricity and energy manager at the European Federation for Transport and Environment, put it when I spoke with him earlier this year for a story about hydrogen cars.  

Beyond that, though, hydrogen will probably be most useful in industries where there aren’t other practical options already on the table. 

That’s a central idea behind an infographic I think about a lot: the Hydrogen Ladder, conceptualized and updated frequently by Michael Liebreich, founder of BloombergNEF. In this graphic, he basically ranks just about every use of hydrogen, from “unavoidable” uses at the top to “uncompetitive” ones at the bottom. His metrics include cost, convenience, and economics. 

At the top of this ladder are existing uses and industries where there's no alternative to hydrogen. There, Liebreich agrees with most experts I've spoken with about hydrogen. 

On the next few rungs come sectors where there’s still no dominant technical solution for cleaning up emissions, like shipping, aviation, and steel production. You might recognize these as famously “hard to solve” sectors. 

Heavy industry often requires high temperatures, which have historically been expensive to achieve with electricity. Cost and technical challenges have pushed companies to explore using hydrogen in processes like steelmaking. For shipping and aviation, there are strict limitations on the mass and size of the fueling system, and batteries can’t make the cut just yet, leaving hydrogen a potential opening. 

Toward the bottom of Liebreich’s ladder are applications where we already have clear decarbonization options available today, making hydrogen a long shot. Take domestic heating, for example. Heat pumps are breaking through in a massive way (we put them on our list of 10 Breakthrough Technologies this year), so hydrogen has some stiff competition there. 

Cars also rank right at the bottom of the ladder, alongside two- and three-wheeled vehicles, since battery-powered transit is becoming increasingly popular and charging infrastructure is growing. That leaves little room for hydrogen vehicles to make a dent, at least in the near future.
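One way to see the ladder's logic is as an ordered ranking. The sketch below groups the uses mentioned above into rungs; the labels and groupings are my paraphrase, not Liebreich's actual chart:

```python
# Illustrative sketch of the Hydrogen Ladder's logic, ordered from the
# strongest to the weakest case for hydrogen. Groupings paraphrase the
# uses discussed above; this is not Liebreich's actual chart.
HYDROGEN_LADDER = [
    ("unavoidable / existing uses", ["fertilizer", "oil refining"]),
    ("no dominant clean option yet", ["shipping", "aviation", "steelmaking"]),
    ("clear alternatives exist", ["domestic heating (heat pumps)"]),
    ("uncompetitive", ["cars", "two- and three-wheelers"]),
]

def priority(use):
    """Lower number = stronger case for spending low-emissions hydrogen here."""
    for rank, (_, uses) in enumerate(HYDROGen_LADDER if False else HYDROGEN_LADDER):
        if any(use in u for u in uses):
            return rank
    raise ValueError(f"unranked use: {use}")

# Steelmaking outranks cars: no good alternative vs. batteries already winning.
assert priority("steelmaking") < priority("cars")
```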

I’m not counting hydrogen out as a fuel for any one use, and there’s plenty of room to disagree on particular uses and their particular rungs. But given that we have a growing number of options in our arsenal to fight climate change, I’m betting that as a general rule, hydrogen will find its niches rather than emerge as the magic multi-tool that saves us all.


Now read the rest of The Spark

Related reading

A fight over hydrogen trains reveals that cleaning up transportation is a political problem as much as it is a technical one. Read more in this story from Benjamin Schneider, featured in our latest magazine issue. 

Where hydrogen comes from matters immensely when it comes to climate impacts. Read more in this newsletter from last year.

Hydrogen is losing the race to cut emissions from cars, and I explored why for a story earlier this year. 


Another thing

It’s here! The Build issue of our print magazine just dropped, and it’s a good one. 

Dive into this story about how artificial snowdrifts could help protect seal pups from climate change. Volunteers in Finland brave freezing temperatures to help create an environment for endangered seals to thrive. 

Or if you’re feeling hungry, I’d recommend this look at how Climax Foods is using machine learning to create vegan cheeses that can stand up to discerning palates. (I have tasted these and can attest that some of them are truly uncanny.) 

Find the full issue here. Happy reading! 

Keeping up with climate  

A solar giant is moving manufacturing to the US. Tariffs and tax incentives are reshaping the solar market, but things could get challenging fast, as my colleague Zeyi Yang reported this week. (MIT Technology Review)

In a new op-ed, Daniele Visioni makes the case that proposals to crack down on geoengineering are misguided. He calls for more research, including outdoor experiments, to make better decisions about climate interventions. (MIT Technology Review)

Americans have some surprising feelings about EVs. In a recent survey, fewer than half of US adults said they think EVs are better for the climate than gas-powered ones. (Sustainability by numbers)

An Australian supplier of fast charging equipment for EVs is in financial trouble. Tritium told regulators that it’s insolvent, and it’s unclear whether the company will be able to fill orders or service existing chargers. (Canary Media)

Offshore wind has faced its fair share of challenges, but the death of a mega-turbine may have played a major role. GE Vernova canceled plans for an 18-megawatt machine, causing ripples that ended in New York’s move to cancel contracts for three massive projects last week. (E&E News)

The UK’s final coal power station is set to close within the year. Here’s a look at the last site generating what used to be the country’s main source of energy. (The Guardian)

Is it time to retire the term “clean energy”? The term is a convenient way to roll up energy sources that cut emissions, like renewables and nuclear power, but some argue that it glosses over environmental harms. (Inside Climate News)

California saw batteries become the single largest source of power on the grid one evening last week—a major moment for energy storage. (Heatmap News)

An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary

25 April 2024 at 01:00

I’m stressed and running late, because what do you wear for the rest of eternity? 

This makes it sound like I’m dying, but it’s the opposite. I am, in a way, about to live forever, thanks to the AI video startup Synthesia. For the past several years, the company has produced AI-generated avatars, but today it launches a new generation, its first to take advantage of the latest advancements in generative AI, and they are more realistic and expressive than anything I’ve ever seen. While today’s release means almost anyone will now be able to make a digital double, on this early April afternoon, before the technology goes public, they’ve agreed to make one of me. 

When I finally arrive at the company’s stylish studio in East London, I am greeted by Tosin Oshinyemi, the company’s production lead. He is going to guide and direct me through the data collection process—and by “data collection,” I mean the capture of my facial features, mannerisms, and more—much like he normally does for actors and Synthesia’s customers. 

In this AI-generated footage, synthetic “Melissa” gives a performance of Hamlet’s famous soliloquy. (The magazine had no role in producing this video.)
SYNTHESIA

He introduces me to a waiting stylist and a makeup artist, and I curse myself for wasting so much time getting ready. Their job is to ensure that people have the kind of clothes that look good on camera and that they look consistent from one shot to the next. The stylist tells me my outfit is fine (phew), and the makeup artist touches up my face and tidies my baby hairs. The dressing room is decorated with hundreds of smiling Polaroids of people who have been digitally cloned before me. 

Apart from the small supercomputer whirring in the corridor, which processes the data generated at the studio, this feels more like going into a news studio than entering a deepfake factory. 

I joke that Oshinyemi has what MIT Technology Review might call a job title of the future: “deepfake creation director.” 

“We like the term ‘synthetic media’ as opposed to ‘deepfake,’” he says. 

It’s a subtle but, some would argue, notable difference in semantics. Both mean AI-generated videos or audio recordings of people doing or saying something that didn’t necessarily happen in real life. But deepfakes have a bad reputation. Since deepfakes emerged nearly a decade ago, the term has come to signal something unethical, says Alexandru Voica, Synthesia’s head of corporate affairs and policy. Think of sexual content produced without consent, or political campaigns that spread disinformation or propaganda.

“Synthetic media is the more benign, productive version of that,” he argues. And Synthesia wants to offer the best version of that version.  

Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley. 

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words. 

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences. 

“I think we might just have to say goodbye to finding out about the truth in a quick way,” says Sandra Wachter, a professor at the Oxford Internet Institute, who researches the legal and ethical implications of AI. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore.” 

Tosin Oshinyemi, the company’s production lead, guides and directs actors and customers through the data collection process.
DAVID VINTINER

So while I was excited for Synthesia to make my digital double, I also wondered if the distinction between synthetic media and deepfakes is fundamentally meaningless. Even if the former centers a creator’s intent and, critically, a subject’s consent, is there really a way to make AI avatars safely if the end result is the same? And do we really want to get out of the uncanny valley if it means we can no longer grasp the truth?

But more urgently, it was time to find out what it’s like to see a post-truth version of yourself.

Almost the real thing

A month before my trip to the studio, I visited Synthesia CEO Victor Riparbelli at his office near Oxford Circus. As Riparbelli tells it, Synthesia’s origin story stems from his experiences exploring avant-garde, geeky techno music while growing up in Denmark. The internet allowed him to download software and produce his own songs without buying expensive synthesizers. 

“I’m a huge believer in giving people the ability to express themselves in the way that they can, because I think that that provides for a more meritocratic world,” he tells me. 

He saw the possibility of doing something similar with video when he came across research on using deep learning to transfer expressions from one human face to another on screen. 

“What that showcased was the first time a deep-learning network could produce video frames that looked and felt real,” he says. 

That research was conducted by Matthias Niessner, a professor at the Technical University of Munich, who cofounded Synthesia with Riparbelli in 2017, alongside University College London professor Lourdes Agapito and Steffen Tjerrild, whom Riparbelli had previously worked with on a cryptocurrency project. 

Initially the company built lip-synching and dubbing tools for the entertainment industry, but it found that the bar for this technology’s quality was very high and there wasn’t much demand for it. Synthesia changed direction in 2020 and launched its first generation of AI avatars for corporate clients. That pivot paid off. In 2023, Synthesia achieved unicorn status, meaning it was valued at over $1 billion—making it one of the relatively few European AI companies to do so. 

That first generation of avatars looked clunky, with looped movements and little variation. Subsequent iterations started looking more human, but they still struggled to say complicated words, and things were slightly out of sync. 

The challenge is that people are used to looking at other people’s faces. “We as humans know what real humans do,” says Jonathan Starck, Synthesia’s CTO. Since infancy, “you’re really tuned in to people and faces. You know what’s right, so anything that’s not quite right really jumps out a mile.” 

These earlier AI-generated videos, like deepfakes more broadly, were made using generative adversarial networks, or GANs—an older technique for generating images and videos that uses two neural networks that play off one another. It was a laborious and complicated process, and the technology was unstable. 

But in the generative AI boom of the last year or so, the company has found it can create much better avatars using generative neural networks that produce higher-quality output more consistently. The more data these models are fed, the better they learn. Synthesia uses both large language models and diffusion models to do this; the former help the avatars react to the script, and the latter generate the pixels. 
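Conceptually, that division of labor could be sketched like this (the function names are hypothetical; Synthesia hasn't published its architecture):

```python
# Conceptual sketch only: a language model turns the script into
# per-phrase expression cues, and a diffusion model renders pixels
# conditioned on those cues. Names are illustrative, not Synthesia's API.
def render_avatar(script, language_model, diffusion_model):
    cues = language_model(script)  # e.g. sentiment and intonation per phrase
    return [diffusion_model(cue) for cue in cues]
```

In the real system the models are trained on studio footage of actors; this only illustrates which model handles which job.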

Despite the leap in quality, the company is still not pitching itself to the entertainment industry. Synthesia continues to see itself as a platform for businesses. Its bet is this: As people spend more time watching videos on YouTube and TikTok, there will be more demand for video content. Young people are already skipping traditional search and defaulting to TikTok for information presented in video form. Riparbelli argues that Synthesia’s tech could help companies convert their boring corporate comms and reports and training materials into content people will actually watch and engage with. He also suggests it could be used to make marketing materials. 

He claims Synthesia’s technology is used by 56% of the Fortune 100, with the vast majority of those companies using it for internal communication. The company lists Zoom, Xerox, Microsoft, and Reuters as clients. Services start at $22 a month.

This, the company hopes, will be a cheaper and more efficient alternative to video from a professional production company—and one that may be nearly indistinguishable from it. Riparbelli tells me its newest avatars could easily fool a person into thinking they are real. 

“I think we’re 98% there,” he says. 

For better or worse, I am about to see it for myself. 

Don’t be garbage

In AI research, there is a saying: Garbage in, garbage out. If the data that went into training an AI model is trash, that will be reflected in the outputs of the model. The more data points the AI model has captured of my facial movements, microexpressions, head tilts, blinks, shrugs, and hand waves, the more realistic the avatar will be. 

Back in the studio, I’m trying really hard not to be garbage. 

I am standing in front of a green screen, and Oshinyemi guides me through the initial calibration process, where I have to move my head and then eyes in a circular motion. Apparently, this will allow the system to understand my natural colors and facial features. I am then asked to say the sentence “All the boys ate a fish,” which will capture all the mouth movements needed to form vowels and consonants. We also film footage of me “idling” in silence.

The more data points the AI system has on facial movements, microexpressions, head tilts, blinks, shrugs, and hand waves, the more realistic the avatar will be.
DAVID VINTINER

He then asks me to read a script for a fictitious YouTuber in different tones, directing me on the spectrum of emotions I should convey. First I’m supposed to read it in a neutral, informative way, then in an encouraging way, an annoyed and complain-y way, and finally an excited, convincing way. 

“Hey, everyone—welcome back to Elevate Her with your host, Jess Mars. It’s great to have you here. We’re about to take on a topic that’s pretty delicate and honestly hits close to home—dealing with criticism in our spiritual journey,” I read off the teleprompter, simultaneously trying to visualize ranting about something to my partner during the complain-y version. “No matter where you look, it feels like there’s always a critical voice ready to chime in, doesn’t it?” 

Don’t be garbage, don’t be garbage, don’t be garbage. 

“That was really good. I was watching it and I was like, ‘Well, this is true. She’s definitely complaining,’” Oshinyemi says, encouragingly. Next time, maybe add some judgment, he suggests.   

We film several takes featuring different variations of the script. In some versions I’m allowed to move my hands around. In others, Oshinyemi asks me to hold a metal pin between my fingers as I do. This is to test the “edges” of the technology’s capabilities when it comes to communicating with hands, Oshinyemi says. 

Historically, making AI avatars look natural and matching mouth movements to speech has been a very difficult challenge, says David Barber, a professor of machine learning at University College London who is not involved in Synthesia’s work. That is because the problem goes far beyond mouth movements; you have to think about eyebrows, all the muscles in the face, shoulder shrugs, and the numerous different small movements that humans use to express themselves. 

The motion capture process uses reference patterns to help align footage captured from multiple angles around the subject.
DAVID VINTINER

Synthesia has worked with actors to train its models since 2020, and their doubles make up the 225 stock avatars that are available for customers to animate with their own scripts. But to train its latest generation of avatars, Synthesia needed more data; it has spent the past year working with around 1,000 professional actors in London and New York. (Synthesia says it does not sell the data it collects, although it does release some of it for academic research purposes.)

The actors previously got paid each time their avatar was used, but now the company pays them an up-front fee to train the AI model. Synthesia uses their avatars for three years, at which point actors are asked if they want to renew their contracts. If so, they come into the studio to make a new avatar. If not, the company will delete their data. Synthesia’s enterprise customers can also generate their own custom avatars by sending someone into the studio to do much of what I’m doing.

The initial calibration process allows the system to understand the subject’s natural colors and facial features.
Synthesia also collects voice samples. In the studio, I read a passage indicating that I explicitly consent to having my voice cloned.

Between takes, the makeup artist comes in and does some touch-ups to make sure I look the same in every shot. I can feel myself blushing because of the lights in the studio, but also because of the acting. After the team has collected all the shots it needs to capture my facial expressions, I go downstairs to read more text aloud for voice samples. 

This process requires me to read a passage indicating that I explicitly consent to having my voice cloned, and that it can be used on Voica’s account on the Synthesia platform to generate videos and speech. 

Consent is key

This process is very different from the way many AI avatars, deepfakes, or synthetic media—whatever you want to call them—are created. 

Most deepfakes aren’t created in a studio. Studies have shown that the vast majority of deepfakes online are nonconsensual sexual content, usually using images stolen from social media. Generative AI has made the creation of these deepfakes easy and cheap, and there have been several high-profile cases in the US and Europe of children and women being abused in this way. Experts have also raised alarms that the technology can be used to spread political disinformation, a particularly acute threat given the record number of elections happening around the world this year. 

Synthesia’s policy is to not create avatars of people without their explicit consent. But it hasn’t been immune from abuse. Last year, researchers found pro-China misinformation that was created using Synthesia’s avatars and packaged as news, which the company said violated its terms of service. 

Since then, the company has put more rigorous verification and content moderation systems in place. It applies a watermark with information on where and how the AI avatar videos were created. Where it once had four in-house content moderators, people doing this work now make up 10% of its 300-person staff. It also hired an engineer to build better AI-powered content moderation systems. These filters help Synthesia vet every single thing its customers try to generate. Anything suspicious or ambiguous, such as content about cryptocurrencies or sexual health, gets forwarded to the human content moderators. Synthesia also keeps a record of all the videos its system creates.

And while anyone can join the platform, many features aren’t available until people go through an extensive vetting system similar to that used by the banking industry, which includes talking to the sales team, signing legal contracts, and submitting to security auditing, says Voica. Entry-level customers are limited to producing strictly factual content, and only enterprise customers using custom avatars can generate content that contains opinions. On top of this, only accredited news organizations are allowed to create content on current affairs.
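Taken together, the tiers described above amount to a simple rules table. Here's a hypothetical sketch of that logic (the tier names, topic labels, and function are my invention, not Synthesia's actual system):

```python
# Hypothetical sketch of the tiered permission rules described above.
# Tier names and topic labels are illustrative, not Synthesia's code.
SENSITIVE_TOPICS = {"cryptocurrency", "sexual_health"}  # escalated to human moderators

def allowed(tier, content_type, topic=None):
    """Return 'allow', 'review', or 'deny' for a generation request."""
    if topic in SENSITIVE_TOPICS:
        return "review"  # suspicious or ambiguous content goes to human moderators
    if content_type == "current_affairs":
        return "allow" if tier == "accredited_news" else "deny"
    if content_type == "opinion":
        return "allow" if tier in {"enterprise", "accredited_news"} else "deny"
    if content_type == "factual":
        return "allow"  # entry-level customers: strictly factual content only
    return "deny"
```

This is exactly the kind of check my EU-sanctions script tripped: a news topic from a non-news account.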

“We can’t claim to be perfect. If people report things to us, we take quick action, [such as] banning or limiting individuals or organizations,” Voica says. But he believes these measures work as a deterrent, which means most bad actors will turn to freely available open-source tools instead. 

I put some of these limits to the test when I head to Synthesia’s office for the next step in my avatar generation process. In order to create the videos that will feature my avatar, I have to write a script. Using Voica’s account, I decide to use passages from Hamlet, as well as previous articles I have written. I also use a new feature on the Synthesia platform, which is an AI assistant that transforms any web link or document into a ready-made script. I try to get my avatar to read news about the European Union’s new sanctions against Iran. 

Voica immediately texts me: “You got me in trouble!” 

The system has flagged his account for trying to generate content that is restricted.

A screenshot from the Synthesia platform shows the moderation notice: “Your video was moderated for violating our Disinformation & Misinformation: Media Reporting (News) guidelines. If you believe this was an error please submit an appeal here.”
AI-powered content filters help Synthesia vet every single thing its customers try to generate. Only accredited news organizations are allowed to create content on current affairs.
COURTESY OF SYNTHESIA

Offering services without these restrictions would be “a great growth strategy,” Riparbelli grumbles. But “ultimately, we have very strict rules on what you can create and what you cannot create. We think the right way to roll out these technologies in society is to be a little bit over-restrictive at the beginning.” 

Still, even if these guardrails operated perfectly, the ultimate result would nevertheless be an internet where everything is fake. And my experiment makes me wonder how we could possibly prepare ourselves. 

Our information landscape already feels very murky. On the one hand, there is heightened public awareness that AI-generated content is flourishing and could be a powerful tool for misinformation. But on the other, it is still unclear whether deepfakes are used for misinformation at scale and whether they’re broadly moving the needle to change people’s beliefs and behaviors. 

If people become too skeptical about the content they see, they might stop believing in anything at all, which could enable bad actors to take advantage of this trust vacuum and lie about the authenticity of real content. Researchers have called this the “liar’s dividend.” They warn that politicians, for example, could claim that genuinely incriminating information was fake or created using AI. 

Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI, says she worries that growing awareness of this gap will make it easier to “plausibly deny and cast doubt on real material or media as evidence in many different contexts, not only in the news, [but] also in the courts, in the financial services industry, and in many of our institutions.” She tells me she’s heartened by the resources Synthesia has devoted to content moderation and consent but says that process is never flawless.

Even Riparbelli admits that in the short term, the proliferation of AI-generated content will probably cause trouble. While people have been trained not to believe everything they read, they still tend to trust images and videos, he adds. He says people now need to test AI products for themselves to see what is possible, and should not trust anything they see online unless they have verified it. 

Never mind that AI regulation is still patchy, and the tech sector’s efforts to verify content provenance are still in their early stages. Can consumers, with their varying degrees of media literacy, really fight the growing wave of harmful AI-generated content through individual action? 

Watch out, PowerPoint

The day after my final visit, Voica emails me the videos with my avatar. When the first one starts playing, I am taken aback. It’s as painful as seeing yourself on camera or hearing a recording of your voice. Then I catch myself. At first I thought the avatar was me. 

The more I watch videos of “myself,” the more I spiral. Do I really squint that much? Blink that much? And move my jaw like that? Jesus. 

It’s good. It’s really good. But it’s not perfect. “Weirdly good animation,” my partner texts me. 

“But the voice sometimes sounds exactly like you, and at other times like a generic American and with a weird tone,” he adds. “Weird AF.” 

He’s right. The voice is sometimes me, but in real life I umm and ahh more. What’s remarkable is that it picked up on an irregularity in the way I talk. My accent is a transatlantic mess, confused by years spent living in the UK, watching American TV, and attending international school. My avatar sometimes says the word “robot” in a British accent and other times in an American accent. It’s something that probably nobody else would notice. But the AI did. 

My avatar’s range of emotions is also limited. It delivers Shakespeare’s “To be or not to be” speech very matter-of-factly. I had guided it to be furious when reading a story I wrote about Taylor Swift’s nonconsensual nude deepfakes; the avatar is complain-y and judgy, for sure, but not angry. 

This isn’t the first time I’ve made myself a test subject for new AI. Not too long ago, I tried generating AI avatar images of myself, only to get a bunch of nudes. That experience was a jarring example of just how biased AI systems can be. But this experience—and this particular way of being immortalized—was definitely on a different level.

Carl Öhman, an assistant professor at Uppsala University who has studied digital remains and is the author of a new book, The Afterlife of Data, calls avatars like the ones I made “digital corpses.” 

“It looks exactly like you, but no one’s home,” he says. “It would be the equivalent of cloning you, but your clone is dead. And then you’re animating the corpse, so that it moves and talks, with electrical impulses.” 

That’s kind of how it feels. The little, nuanced ways I don’t recognize myself are enough to put me off. Then again, the avatar could quite possibly fool anyone who doesn’t know me very well. It really shines when presenting a story I wrote about how the field of robotics could be getting its own ChatGPT moment; the virtual AI assistant summarizes the long read into a decent short video, which my avatar narrates. It is not Shakespeare, but it’s better than many of the corporate presentations I’ve had to sit through. I think if I were using this to deliver an end-of-year report to my colleagues, maybe that level of authenticity would be enough. 

And that is the sell, according to Riparbelli: “What we’re doing is more like PowerPoint than it is like Hollywood.”

Once a likeness has been generated, Synthesia is able to generate video presentations quickly from a script. In this video, synthetic “Melissa” summarizes an article real Melissa wrote about Taylor Swift deepfakes.
SYNTHESIA

The newest generation of avatars certainly aren’t ready for the silver screen. They’re still stuck in portrait mode, only showing the avatar front-facing and from the waist up. But in the not-too-distant future, Riparbelli says, the company hopes to create avatars that can communicate with their hands and have conversations with one another. It is also planning for full-body avatars that can walk and move around in a space that a person has generated. (The rig to enable this technology already exists; in fact it’s where I am in the image at the top of this piece.)

But do we really want that? It feels like a bleak future where humans are consuming AI-generated content presented to them by AI-generated avatars and using AI to repackage that into more content, which will likely be scraped to generate more AI. If nothing else, this experiment made clear to me that the technology sector urgently needs to step up its content moderation practices and ensure that content provenance techniques such as watermarking are robust. 

Even if Synthesia’s technology and content moderation aren’t yet perfect, they’re significantly better than anything I have seen in the field before, and this is after only a year or so of the current boom in generative AI. AI development moves at breakneck speed, and it is both exciting and daunting to consider what AI avatars will look like in just a few years. Maybe in the future we will have to adopt safewords to indicate that you are in fact communicating with a real human, not an AI. 

But that day is not today. 

I found it weirdly comforting that in one of the videos, my avatar rants about nonconsensual deepfakes and says, in a sociopathically happy voice, “The tech giants? Oh! They’re making a killing!” 

I would never. 

A new kind of gene-edited pig kidney was just transplanted into a person

24 April 2024 at 13:47

A month ago, Richard Slayman became the first living person to receive a kidney transplant from a gene-edited pig. Now, a team of researchers from NYU Langone Health reports that Lisa Pisano, a 54-year-old woman from New Jersey, has become the second. Her new kidney has just a single genetic modification—an approach that researchers hope could make scaling up the production of pig organs simpler. 

Pisano, who had heart failure and end-stage kidney disease, underwent two operations, one to fit her with a heart pump to improve her circulation and the second to perform the kidney transplant. She is still in the hospital, but doing well. “Her kidney function 12 days out from the transplant is perfect, and she has no signs of rejection,” said Robert Montgomery, director of the NYU Langone Transplant Institute, who led the transplant surgery, at a press conference on Wednesday.

“I feel fantastic,” said Pisano, who joined the press conference by video from her hospital bed.

Pisano is the fourth living person to receive a pig organ. Two men who received heart transplants at the University of Maryland Medical Center in 2022 and 2023 both died within a couple of months after receiving the organ. Slayman, the first pig kidney recipient, is still doing well, says Leonardo Riella, medical director for kidney transplantation at Massachusetts General Hospital, where Slayman received the transplant.  

“It’s an awfully exciting time,” says Andrew Cameron, a transplant surgeon at Johns Hopkins Medicine in Baltimore. “There is a bright future in which all 100,000 patients on the kidney transplant wait list, and maybe even the 500,000 Americans on dialysis, are more routinely offered a pig kidney as one of their options,” Cameron adds.

All the living patients who have received pig hearts and kidneys have accessed the organs under the FDA’s expanded access program, which allows patients with life-threatening conditions to receive investigational therapies outside of clinical trials. But patients may soon have another option. Both Johns Hopkins and NYU are aiming to start clinical trials in 2025. 

In the coming weeks, doctors will be monitoring Pisano closely for signs of organ rejection, which occurs when the recipient’s immune system identifies the new tissue as foreign and begins to attack it. That’s a concern even with human kidney transplants, but it’s an even greater risk when the tissue comes from another species, a procedure known as xenotransplantation.

To prevent rejection, the companies that produce these pigs have introduced genetic modifications to make their tissue appear less foreign and reduce the chance that it will spark an immune attack. But it’s not yet clear just how many genetic alterations are necessary to prevent rejection. Slayman’s kidney came from a pig developed by eGenesis, a company based in Cambridge, Massachusetts; it has 69 modifications. The vast majority of those modifications focus on inactivating viral DNA in the pig’s genome to make sure those viruses can’t be transmitted to the patient. But 10 were employed to help prevent the immune system from rejecting the organ.

Pisano’s kidney came from pigs that carry just a single genetic alteration—to eliminate a specific sugar called alpha-gal, which can trigger immediate organ rejection, from the surface of its cells. “We believe that less is more, and that the main gene edit that has been introduced into the pigs and the organs that we’ve been using is the fundamental problem,” Montgomery says. “Most of those other edits can be replaced by medications that are available to humans.”

A container reading "Porcine organ for transplant. Keep Upright. Xenokidney. Handle with Care" being lifted from the cold transport box
JOE CARROTTA/NYU LANGONE HEALTH

The kidney is implanted along with a piece of the pig’s thymus gland, which plays a key role in educating white blood cells to distinguish between friend and foe.  The idea is that the thymus will help Pisano’s immune system learn to accept the foreign tissue. The so-called UThymoKidney is being developed by United Therapeutics Corporation, but the company has also created pigs with 10 genetic alterations. The company “wanted to take multiple shots on goal,” says Leigh Peterson, executive vice president of product development and xenotransplantation at United Therapeutics.

There’s one major advantage to using a pig with a single genetic modification. “The simpler it is, in theory, the easier it’s going to be to breed and raise these animals,” says Jayme Locke, a transplant surgeon at the University of Alabama at Birmingham. Pigs with a single genetic change can be bred, but pigs with many alterations require cloning, Montgomery says. “These pigs could be rapidly expanded, and more quickly and completely solve the organ supply crisis.”

But Cameron isn’t sure that a single alteration will be enough to prevent rejection. “I think most people are worried that one knockout might not be enough, but we’re hopeful,” he says.

So is Pisano, who is working to get strong enough to leave the hospital. “I just want to spend time with my grandkids and play with them and be able to go shopping,” she says.

Almost every Chinese keyboard app has a security flaw that reveals what users type

By: Zeyi Yang
24 April 2024 at 12:32

Almost all keyboard apps used by Chinese people around the world share a security loophole that makes it possible to spy on what users are typing. 

The vulnerability, which allows the keystroke data that these apps send to the cloud to be intercepted, has existed for years and could have been exploited by cybercriminals and state surveillance groups, according to researchers at the Citizen Lab, a technology and security research lab affiliated with the University of Toronto.

These apps help users type Chinese characters more efficiently and are ubiquitous on devices used by Chinese people. The four most popular apps—built by major internet companies like Baidu, Tencent, and iFlytek—basically account for all the typing methods that Chinese people use. Researchers also looked into the keyboard apps that come preinstalled on Android phones sold in China. 

What they discovered was shocking. Almost every third-party app and every Android phone with preinstalled keyboards failed to protect users by properly encrypting the content they typed. A smartphone made by Huawei was the only device where no such security vulnerability was found.

In August 2023, the same researchers found that Sogou, one of the most popular keyboard apps, did not use Transport Layer Security (TLS) when transmitting keystroke data to its cloud server for better typing predictions. Without TLS, the widely adopted cryptographic protocol that protects data in transit, keystrokes can be collected and then decrypted by third parties.
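To make the gap concrete: this is a minimal illustrative sketch, not code from Sogou or any other keyboard app. It contrasts sending data over a TLS-wrapped socket, where an on-path eavesdropper sees only ciphertext, with sending it over a bare TCP socket, where anyone on the same network can read it. The host, port, and function names are invented for illustration.

```python
import socket
import ssl

def send_keystrokes_tls(host: str, port: int, payload: bytes) -> None:
    """Send data over TLS: an eavesdropper on the network sees only ciphertext."""
    context = ssl.create_default_context()  # verifies the server's certificate
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(payload)

def send_keystrokes_plaintext(host: str, port: int, payload: bytes) -> None:
    """Send data in the clear: anyone on the same Wi-Fi can capture and read it."""
    with socket.create_connection((host, port)) as raw_sock:
        raw_sock.sendall(payload)
```

The difference is a few lines of code, which is part of why researchers describe the missing protection as far behind modern best practices.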

“Because we had so much luck looking at this one, we figured maybe this generalizes to the others, and they suffer from the same kinds of problems for the same reason that the one did,” says Jeffrey Knockel, a senior research associate at the Citizen Lab, “and as it turns out, we were unfortunately right.”

Even though Sogou fixed the issue after it was made public last year, some Sogou keyboards preinstalled on phones are not updated to the latest version, so they are still subject to eavesdropping. 

This new finding shows that the vulnerability is far more widespread than previously believed. 

“As someone who also has used these keyboards, this was absolutely horrifying,” says Mona Wang, a PhD student in computer science at Princeton University and a coauthor of the report. 

“The scale of this was really shocking to us,” says Wang. “And also, these are completely different manufacturers making very similar mistakes independently of one another, which is just absolutely shocking as well.”

The massive scale of the problem is compounded by the fact that these vulnerabilities aren’t hard to exploit. “You don’t need huge supercomputers crunching numbers to crack this. You don’t need to collect terabytes of data to crack it,” says Knockel. “If you’re just a person who wants to target another person on your Wi-Fi, you could do that once you understand the vulnerability.” 

The ease of exploiting the vulnerabilities and the huge payoff—knowing everything a person types, potentially including bank account passwords or confidential materials—suggest that it’s likely they have already been taken advantage of by hackers, the researchers say. But there’s no evidence of this, though state hackers working for Western governments targeted a similar loophole in a Chinese browser app in 2011.

Most of the loopholes found in this report are “so far behind modern best practices” that it’s very easy to decrypt what people are typing, says Jedidiah Crandall, an associate professor of security and cryptography at Arizona State University, who was consulted in the writing of this report. Because it doesn’t take much effort to decrypt the messages, this type of loophole can be a great target for large-scale surveillance of massive groups, he says.

After the researchers got in contact with companies that developed these keyboard apps, the majority of the loopholes were fixed. Samsung, whose self-developed app was also found to lack sufficient encryption, sent MIT Technology Review an emailed statement: “We were made aware of potential vulnerabilities and have issued patches to address these issues. As always, we recommend that all users keep their devices updated with the latest software to ensure the highest level of protection possible.”

But a few companies have been unresponsive, and the vulnerability still exists in some apps and phones, including QQ Pinyin and Baidu, as well as in any keyboard app that hasn’t been updated to the latest version. Baidu, Tencent, and iFlytek did not reply to press inquiries sent by MIT Technology Review.

One potential cause of the loopholes’ ubiquity is that most of these keyboard apps were developed in the 2000s, before the TLS protocol was commonly adopted in software development. Even though the apps have been through numerous rounds of updates since then, inertia could have prevented developers from adopting a safer alternative.

The report points out that language barriers and different tech ecosystems prevent English- and Chinese-speaking security researchers from sharing information that could fix issues like this more quickly. For example, because Google’s Play store is blocked in China, most Chinese apps are not available in Google Play, where Western researchers often go for apps to analyze. 

Sometimes all it takes is a little additional effort. After two emails about the issue to iFlytek were met with silence, the Citizen Lab researchers changed the email title to Chinese and added a one-line summary in Chinese to the English text. Just three days later, they received an email from iFlytek, saying that the problem had been resolved.

Update: The story has been updated to include Samsung’s statement.

The Download: introducing the Build issue

24 April 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the Build issue

Building is a popular tech industry motif—especially in Silicon Valley, where “Time to build” has become something of a call to arms. Yet the future is built brick by brick from the imperfect decisions we make in the present. 

We don’t often recognize that the seeming steps forward we are taking today could be seen as steps back in the years to come. Sometimes the things we don’t do, or the steps we skip, have bigger implications than the actions we do take.

These are the themes we delve into in our Build issue. Check out these stories from the magazine:

+ Our cover story from Melissa Heikkilä investigates whether the AI boom is going to usher in robotics’ very own ChatGPT moment.

+ Louisiana’s homes are sinking. Can a government-led project build the area up and out of crisis?

+ Axiom Space and other commercial companies are betting they can build private structures to replace the International Space Station.

+ A fascinating look at the seriously weird history of brainwashing, and how America became obsessed with waging psychic war against China.

+ Why the rise of generative AI means we need a new term to replace ‘user.’

+ AI was supposed to make police bodycams better. What happened?

+ How we transform to a fully decarbonized world. A world powered by electricity from abundant, renewable resources is now within reach.

This is just a small selection of what’s on offer. Subscribe if you don’t already to check out the whole thing. Enjoy!

This solar giant is moving manufacturing back to the US

Whenever you see a solar panel, most of its parts probably come from China. The US invented the technology and once dominated its production, but over the past two decades, government subsidies and low costs in China have led most of the solar manufacturing supply chain to be concentrated there.

But the US government is trying to change that. Through high tariffs on imports and hefty domestic tax credits, it is trying to make the cost of manufacturing solar panels in the US competitive enough for companies to want to come back and set up factories.

To understand its chances of success, MIT Technology Review spoke to Shawn Qu, founder and chairman of long-standing solar firm Canadian Solar. After decades of mostly manufacturing in Asia, Canadian Solar is pivoting back to the US. He told Zeyi Yang, our China reporter, why he sees a real chance for a solar industry revival.

To learn more about the state of Chinese tech in the US, including climate tech stars, check out the latest edition of China Report, our weekly newsletter covering tech, policy and power in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US Senate has passed the bill that could ban TikTok 
It would force parent company ByteDance to sell TikTok, or the app will face a national ban. (WP $)
+ Senators insist that TikTok’s ownership poses a real threat to the US. (FT $)
+ But ByteDance is highly unlikely to complete a sale within the narrow timeframe. (Reuters)
+ Here’s what’s likely to happen next. (NYT $)

2 The AI industry is desperate for more data centers
Demand is so high, it’s causing a shortage of essential components. (WSJ $)
+ Energy-hungry data centers are quietly moving into cities. (MIT Technology Review)

3 Hackers are testing cyberattacks in developing nations
Hackers are targeting Africa, Asia, and South America before moving on to richer countries. (FT $)
+ Australia is worried that AI is supercharging online extremist activity. (Bloomberg $)

4 Google has pushed back its plan to phase out cookies—again
It’s the third time the company has delayed the project. (Bloomberg $)

5 How General Motors spied on its customers
It tracked driving data and sold it to the insurance industry. (NYT $)
+ The advertising industry is kicking its heels as it waits. (WSJ $)
+ China’s car companies are turning into tech companies. (MIT Technology Review)

6 How AI could help to make sense of complicated theories
String theory, anyone? (Quanta Magazine)
+ Is it possible to really understand someone else’s mind? (MIT Technology Review)

7 The NFL is diving into big data
When it comes to optimizing sporting performance, knowledge is power. (Knowable Magazine)

8 A new industry is trying to game Reddit with AI-generated product promo
It’s the kind of sneaky approach the Reddit community famously hates. (404 Media)
+ A GPT-3 bot posted comments on Reddit for a week and no one noticed. (MIT Technology Review)

9 AI beauty pageants are a thing now 💄
Which surely undermines the point of beauty contests. (The Guardian)

10 X’s latest trend is infuriating
Look down at my keyboard? Absolutely not. (Insider $)

Quote of the day

“If the Chinese government wants data on Americans, they don’t need TikTok to get it.”

—Alan Z. Rozenshtein, an associate professor of law at the University of Minnesota, reflects on the US Senate’s decision to pressure ByteDance into selling TikTok or face a national ban, Platformer reports.

The big story

The lucky break behind the first CRISPR treatment

December 2023

The world’s first commercial gene-editing treatment is set to start changing the lives of people with sickle-cell disease. It’s called Casgevy, and it was approved last November in the UK.

The treatment, which will be sold in the US by Vertex Pharmaceuticals, employs CRISPR, which can be easily programmed by scientists to cut DNA at precise locations they choose.

But where do you aim CRISPR, and how did the researchers know what DNA to change? That’s the lesser-known story of the sickle-cell breakthrough. Read more about it.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The Monument Valley games are lovely, if you’ve never played them, and their music is particularly poignant.
+ There’s nothing more satisfying than a good pressure washer video.
+ Have you ever found your doppelganger in an art gallery? These people have.
+ Replacing beef with fish in classic recipes—with surprisingly tasty results.

Three takeaways about the state of Chinese tech in the US

By: Zeyi Yang
24 April 2024 at 06:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

I’ve wanted to learn more about the world of solar panels ever since I realized just how dominant Chinese companies have become in this field. Although much of the technology involved was invented in the US, today about 80% of the world’s solar manufacturing takes place in China. For some parts of the process, it’s responsible for even more: 97% of wafer manufacturing, for example. 

So I jumped at the opportunity to interview Shawn Qu, the founder and chairman of Canadian Solar, one of the largest and longest-standing solar manufacturing companies in the world, last week.

Qu’s company provides a useful lens on wider efforts by the US to reshape the global solar supply chain and bring more of it back to American shores. Although most of its production is still in China and Southeast Asia, it’s now building two factories in the US, spurred on by incentives in the Inflation Reduction Act. You can read my story here.

I met Qu in Cambridge, Massachusetts, where he was attending the Harvard College China Forum, a two-day annual conference that often draws a fair number of Chinese entrepreneurs. I also attended, hoping to meet representatives of Chinese tech companies there.

At the conference, I noticed three interesting things.

One, there was a glaring absence of Chinese consumer tech companies. With the exception of one US-based manager from TikTok, I didn’t see anyone from Alibaba, Baidu, Tencent, or ByteDance. 

These companies, with their large influence on Chinese people’s everyday lives, used to be the stars of discussions around China’s tech sector. If you had come to the Harvard conference before covid-19, you would have met plenty of people representing them, as well as the venture capitalists that funded their successes. You can get a sense just by reading past speaker lists: executives from Xiaomi, Ant Financial, Sogou, Sequoia China, and Hillhouse Capital. These are the equivalents of Mark Zuckerberg and Peter Thiel in China’s tech world.

But these companies have become much more low profile since then, for a couple of main reasons. First, they underwent a harsh domestic crackdown after the government decided to tame them. (I recently talked to Angela Zhang, a law professor studying Chinese tech regulations, to understand these crackdowns.) And second, they have become the subject of national security scrutiny in the US, making it politically unwise for them to engage too much on the public stage here.

The second thing I noticed at the conference is what stood in their place: a batch of new Chinese companies, mostly in climate tech. William Li, the CEO of China’s EV startup NIO, was one of the most popular guest speakers during the conference’s opening ceremony this year. There were at least three solar panel companies present—two (JA Solar and Canadian Solar) among the top-tier manufacturers in the world, and a third that sells solar panels to Latin America. There were also many academics, investors, and even influencers working in the field of electric vehicles and other electrified transportation methods.

It’s clear that amid the increasingly urgent task of addressing climate change, China’s climate technology companies have become the new stars of the show. And they are very much willing to appear on the global stage, both bragging about their technological lead and seeking new markets. 

“The Chinese entrepreneurs are very eager,” says Jinhua Zhao, a professor studying urban transportation at MIT, who also spoke on one of the panels at the conference. “They want to come out. I think the Chinese government side also started to send signals, inviting foreign leadership and financial industries to visit China. I see a lot of gestures.” 

The problem, however, is they are also becoming subject to a lot of political animosity in the US. The Biden administration has started an investigation into Chinese-made cars, mostly electric vehicles; Chinese battery companies have been navigating a minefield of politicians’ resistance to their setting up plants in North America; and Chinese solar panel companies have been subject to sky-high tariffs. 

Back in the mid-2010s, when Chinese consumer tech companies emerged onto the global stage, the US and China had a warm relationship, creating a welcoming environment. Unfortunately, that’s not something climate tech companies can enjoy today. Even though climate change is a global issue that requires countries to collaborate, political tensions stand in the way when companies and investors on opposite sides try to work together.

On that note, the last thing I noticed at the conference is a rising geopolitical force in tech: the Middle East. A few speakers at the conference are working in Saudi Arabia and the United Arab Emirates, and they represent other deep-pocketed players who are betting on technologies like EVs and AI in both the United States and China.

But can they navigate the tensions and benefit from the technological advantages on both sides? It’ll be interesting to watch how that unfolds. 

What do you think of the role of the Middle East in the future of climate technologies? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. A batch of documents mistakenly unsealed by a Pennsylvania court reveals the origin story of TikTok’s parent company, ByteDance. Who knew it started out as a real estate venture? (New York Times $)

2. Vladimir Potanin, Russia’s richest man, said he would move some of his copper smelting factories to China to reduce the impact of Western sanctions, which block Russian companies from using international payment systems. (Financial Times $)

3. Chinese universities have found a way to circumvent the US export ban on high-end Nvidia chips: by buying resold server products made by Dell, Super Micro Computer, and Taiwan’s Gigabyte Technology. (Reuters $)

4. TikTok is testing “TikTok Notes,” a rival product to Instagram, in Australia and Canada. (The Verge)

5. Since there’s no route for personal bankruptcy in China, those who are unable to pay their debts are being penalized in novel ways: they can’t take high-speed trains, fly on planes, stay in nice hotels, or buy expensive insurance policies. (Wall Street Journal $)

6. The hunt for the origins of covid-19 has stalled in China, as Chinese politicians worry about being blamed for the findings. (Associated Press)

7. Because of pressure from the US government, Mexico will not hand out tax cuts and other incentives to Chinese EV companies. (Reuters $)

Lost in translation

Until last year, it was normal for Chinese hotels to require facial recognition to check guests in, but the city of Shanghai is now turning against the practice, according to the Chinese publication 21st Century Business Herald. The police bureau of Shanghai recently published a notice that says “scanning faces” is required only if guests don’t have any identity documents. Otherwise, they have the right to refuse it. Most hotel chains in Shanghai, and some in other cities, have updated their policies in response. 

China has a national facial recognition database tied to the government ID system, and businesses such as hotels can access it to verify customers’ identities. However, Chinese people are increasingly pushing back on the necessity of facial recognition in scenarios like this, and questioning whether hotels are handling such sensitive biometric data properly. 

One more thing

The latest queer icon in Asia is Nymphia Wind, the drag persona of a 28-year-old Taiwanese-American named Leo Tsao, who just won the latest season of RuPaul’s Drag Race. Fully embracing the color yellow as part of her identity, Nymphia Wind is also called the “Banana Buddha” by her fans. She’s hosting shows in Taoist temples in Taiwan, attracting audiences old and young.

What tech learned from Daedalus

24 April 2024 at 05:10

Today’s climate-change kraken may have been unleashed by human activity—which has discharged greenhouse-gas emissions into Earth’s atmosphere for centuries—but reversing course and taming nature’s growing fury seems beyond human means, a quest only mythical heroes could fulfill. Yet the dream of human-powered flight—of rising over the Mediterranean fueled merely by the strength of mortal limbs—was also the stuff of myths for thousands of years. Until 1988.

That year, in October, MIT Technology Review published the aeronautical engineer John Langford’s account of his mission to retrace the legendary flight of Daedalus, described in an ancient Greek myth recorded by the Roman poet Ovid in Metamorphoses. Imprisoned on the island of Crete with his son Icarus, Daedalus, a skilled inventor, crafts wings of feathers and wax to escape. In his exuberance, Icarus defies Daedalus’s warning not to fly too close to the sun. His wings melt and he plummets to his death. With heavy heart, Daedalus completes the flight, landing in Sicily. 

“Daedalus became a quest to build a perfect airplane,” says Langford, reflecting on his project team’s mission. By some measures, they succeeded. Their plane, Daedalus 88, still holds the record for absolute distance (71.5 miles, or 115 kilometers) and duration (nearly four hours) of a human-powered flight. 

Of course, Langford’s team modified some of the mythical parameters. The aircraft replaced feathers and wax with carbon-fiber wings, and the pilot, the Greek cyclist Kanellos Kanellopoulos, didn’t flap his way into history—he pedaled. Plus, the 500-mile journey to Sicily seemed beyond mortal capacity, so Langford and his team set their sights on Santorini.

The problem with the Daedalus project, and human-powered aircraft of any kind, is the grueling effort to remain aloft, the risk of crashing, and the expense—none of which was lost on Langford. “In itself, our Daedalus project could never answer the question ‘So what?’” he admits.

At the time, unseen clouds of human-generated chlorofluorocarbons, gathering in Earth’s stratosphere for half a century, had blasted a seasonal hole in the protective ozone layer over Antarctica, signifying a disaster unfolding across Earth’s atmosphere. As the global community rallied, the “So what?” he was looking for emerged.

To Langford, an entrepreneur whose twin passions are climate research and sustainable aeronautics, the perfect plane is an unmanned aerial vehicle able to ply the stratosphere, collect climate data such as ozone readings, and harness the sun for its energy needs. Aurora Flight Sciences, his first company, unveiled such a plane, Odysseus, in 2018. His latest company, Electra, wants to decarbonize all aviation.

That a human-powered plane able to fly mere meters above the sea for a handful of hours managed to inspire solar-powered robotic planes that continuously comb Earth’s stratosphere could make sense only in the context of our climate challenges. Such novel aircraft symbolize the ability of human beings to achieve mythic feats when joined in a common quest, however daunting.

Bill Gourgey is a science writer based in Washington, DC, and teaches science writing at Johns Hopkins University.

This creamy vegan cheese was made with AI

24 April 2024 at 05:00

As Climax Foods CEO Oliver Zahn serves up a plate of vegan brie, feta, and blue cheese in his offices in Emeryville, California, I’m keeping my expectations modest. Most vegan cheese falls into an edible uncanny valley full of discomforting not-quite-right versions of the real thing. But the brie I taste today is smooth, rich, and velvety—and delicious. I could easily believe it was made from cow’s milk, but it is made entirely from plants. And it couldn’t have come into existence, says Zahn, without the use of machine learning.

Climax Foods is one of several startups, also including Shiru of Alameda, California, and NotCo of Chile, that have used artificial intelligence to design plant-based foods. The companies train algorithms on datasets of ingredients with desirable traits like flavor, scent, or stretchability. Then they use AI to comb troves of data to develop new combinations of those ingredients that perform similarly.

“Traditional ingredient discovery can take years and tens of millions of dollars, and what results are ingredients only incrementally better than the previous generation,” says Shiru CEO Jasmin Hume, who wrote her PhD thesis on protein engineering. “[Now] we can go from scratch, meaning what nature has to offer; pick out the proteins that will function best; and prototype and test them in about three months.”

Not everyone in the industry is bullish about AI-assisted ingredient discovery. Jonathan McIntyre, a food consultant who formerly headed R&D teams in both beverages and snacks at Pepsi, thinks the technology is “significantly” overhyped as a tool for his field. “AI is only as good as the data you feed it,” he says. And given how jealously food companies guard formulas and proprietary information, he adds, there won’t necessarily be sufficient data to yield productive results. McIntyre has a cautionary tale: during his stint at Pepsi, the company attempted to use IBM’s Watson to create a better soda. “It formulated the worst-tasting thing ever,” he says.

Climax Foods circumvented the data scarcity problem by creating its own training sets to essentially reverse-engineer why cheese tastes so good. “When we started, there was very little data on why an animal product tastes the way it does—animal cheddar, blue, brie, mozzarella—because it is what it is,” says Zahn, who previously headed data science for Google’s massive ads business. “There [was] no commercial reason to understand it.”  

In the food science lab on the ground floor of the Climax offices, on the site of an old chocolate factory, Zahn shows off some of the instruments his team used to build its data trove. There’s a machine that uses ion chromatography to show the precise balance of different acids after bacterial strains break down lactose. A mass spectrometer acts like an “electronic nose” to reveal which volatile compounds generate our olfactory response to food. A device called a rheometer tracks how a cheese responds to physical deformation; part of our response to cheese is based on how it reacts to slicing or chewing. The cheese data creates target baselines of performance that an AI can try to reach with different combinations of plant ingredients.

Using educated guesswork about which plants might perform well as substitutes, Climax food scientists have created more than 5,000 cheese prototypes in the past four years. With the same lab instruments employed on animal cheese, the Climax team performs an analysis that includes roughly 50 different assays for texture and flavor, generating millions of data points in the process. The AI is trained on these prototypes, and the algorithm then suggests mixtures that might perform even better. The team tries them out and keeps iterating. “You vary all the input knobs, you measure the outputs, and then you try to squeeze the difference between the output and your animal target to be as small as possible,” Zahn says. Including small-scale “micro-prototypes,” he estimates, Climax has analyzed roughly 100,000 plant ingredient combinations.
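
The loop Zahn describes—vary the input knobs, measure the outputs, and squeeze the gap to the animal target—is, in essence, an iterative optimization. Here is a deliberately toy sketch of that idea: the target profile, the "assay" model, and the ingredient names are all invented for illustration and bear no relation to Climax's actual data or methods.

```python
import random

# Hypothetical target profile for an animal cheese (invented numbers).
TARGET = {"acidity": 0.42, "firmness": 0.67, "aroma": 0.55}

def assay(blend):
    """Toy stand-in for lab measurements (chromatography, rheometry, etc.).
    Maps a three-ingredient blend to a measured flavor/texture profile."""
    pumpkin, coconut, lima = blend
    return {
        "acidity": 0.6 * pumpkin + 0.1 * coconut,
        "firmness": 0.8 * coconut + 0.2 * lima,
        "aroma": 0.5 * pumpkin + 0.4 * lima,
    }

def loss(blend):
    """Total squared gap between a blend's measured profile and the target."""
    measured = assay(blend)
    return sum((measured[k] - TARGET[k]) ** 2 for k in TARGET)

def optimize(steps=2000, seed=0):
    """Random hill climbing: nudge the input knobs, keep any change that
    moves the measured profile closer to the animal target."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(3)]
    for _ in range(steps):
        candidate = [max(0.0, x + rng.uniform(-0.05, 0.05)) for x in best]
        if loss(candidate) < loss(best):
            best = candidate
    return best

blend = optimize()
print(loss(blend))  # the gap shrinks toward zero as iterations proceed
```

In the real workflow the "assay" step is months of lab work on physical prototypes rather than a function call, which is why narrowing the search with a model trained on past prototypes matters so much.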

Tasting and subtly adjusting the ingredient blends in so many prototypes by hand would take several thousand years, Zahn says. But starting from zero in early 2020, he and his AI-aided team were able to formulate their first cheese and bring it to market in April 2023.  

The plant constituents of that product, a vegan blue cheese, are hardly exotic. The top four ingredients are pumpkin seeds, coconut oil, lima beans, and hemp protein powder. And yet Dominique Crenn, a Michelin-starred chef, described it as “soft, buttery, and surprisingly rich—beyond imagination for a vegan cheese.”  

Bel Group, the maker of Laughing Cow, has an agreement to license the company’s products, and a second large producer that Zahn cannot yet publicly name has also signed on. He is currently beating the venture capital bushes for a funding round and hopes to begin selling the brie and feta later this year. 

Unlike Watson’s ill-fated attempt to formulate a better Pepsi, the Climax algorithms can pull together ingredients in new ways that seem like alchemy. “There is an interaction of one component with another component that triggers a flavor or sensation that you didn’t expect,” Zahn says. “It’s not like just the sum of the two components—it’s something completely different.”  

One reason to develop alternatives to dairy-based cheese is its environmental cost: by weight, cheese has a higher carbon footprint than either chicken or pork, and humans eat roughly 22 million tons of it each year. For Zahn, the answer is not asking consumers to settle for a rubbery, bland substitute—but offering a plant-based version that tastes as good or better and could cost much less to make.

Andrew Rosenblum’s writing has appeared in New Scientist, Popular Science, Wired, and many other places.

Job titles of the future: AI prompt engineer

24 April 2024 at 05:00

The role of AI prompt engineer attracted attention for its high-six-figure salaries when it emerged in early 2023. Companies define it in different ways, but its principal aim is to help a company integrate AI into its operations. 

Danai Myrtzani of Sleed, a digital marketing agency in Greece, describes herself as more prompter than engineer. She joined the company in March 2023 as one of two experts on its new experimental-AI team.

Go-to AI experts: Since joining Sleed, Myrtzani has helped develop a tool that generates personalized LinkedIn posts for clients. The tool works with OpenAI’s ChatGPT platform, which automates the writing process using sets of built-in prompts. Myrtzani’s job is to ensure that users get the results they are looking for. She also teaches other employees how to use generative AI tools, hosts workshops, and writes an internal newsletter dedicated to AI. Her employers “want pretty much everyone to be able to use AI,” she says, because these tools have the potential to automate trivial tasks, making more time for work that requires creative thinking. She refers to her department as “the support team for AI.”
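
A "built-in prompt" of the kind described is essentially a template the prompt engineer writes once, so end users only supply client details. This is a minimal, hypothetical sketch; the template text and field names are invented, and Sleed's actual tool is not public.

```python
# Invented template for illustration; a real tool would send the filled
# prompt to a language model rather than just returning the string.
BUILTIN_PROMPT = (
    "You are a social media copywriter. Write a LinkedIn post for {client}, "
    "a company in the {industry} industry. Tone: {tone}. Topic: {topic}. "
    "Keep it under 120 words and end with a call to action."
)

def build_prompt(client, industry, tone, topic):
    """Fill the built-in template so users never write raw prompts."""
    return BUILTIN_PROMPT.format(
        client=client, industry=industry, tone=tone, topic=topic
    )

print(build_prompt("Acme Shoes", "retail", "upbeat", "a spring sale"))
```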

An education in language: Myrtzani came to Sleed with experience experimenting with generative AI tools as well as a university education in social anthropology. Her studies gave her expertise in human language systems that the company thought would be especially valuable in the job. “The more qualified you are at using language, the easier it is to create prompts,” she says.

More than prompt writers: Many writers have been concerned that generative AI could make their jobs obsolete. Prompt engineers are especially vulnerable: demand for their services could disappear if the software becomes better at understanding users’ prompts. But Myrtzani says her own position demands much more than just prompt writing, including identifying and integrating AI-based solutions for business challenges. “The higher tiers of prompt engineering are where the enduring and evolving aspects of the role lie,” she says. 

How we transform to a fully decarbonized world

24 April 2024 at 05:00

In 1856, Napoleon III commissioned a baby rattle for his newborn son, to be made from one of the most precious metals known at the time: light, silvery, and corrosion-resistant aluminum. Despite its abundance—it’s the third most common element in Earth’s crust—the metal wasn’t isolated until 1824, and the complexity and cost of the process made the rattle a gift fit for a prince. It wasn’t until 1886 that two young researchers, on opposite sides of the Atlantic, developed the method that is still used for refining aluminum commercially.

The Hall-Héroult process is extraordinarily energy intensive: the chemically modified ore is dissolved into a high-temperature bath of molten minerals, and an electrical current is passed through it to separate the metallic aluminum. It’s also intrinsically energy intensive: part of the reason the metal was isolated only relatively recently is because aluminum atoms bind so tightly to oxygen. No amount of clever engineering will change that physical reality.

The astronomical growth in worldwide aluminum production over the last century was made possible by the build-out of the energy infrastructure necessary to power commercial refineries, and to do so in a way that was economically viable. In the US, that was facilitated by the massive hydroelectricity projects built by the federal government as part of Franklin D. Roosevelt’s New Deal, closely followed by World War II and the immense mobilization of resources it entailed: aluminum was the material of choice for the thousands and thousands of aircraft that rolled off wartime assembly lines as fast as others were shot down. Within a century, the metal went from precious and rare to ubiquitous and literally disposable.

Just as much as technological breakthroughs, it’s that availability of energy that has shaped our material world. The exponential rise in fossil-fuel usage over the past century and a half has powered novel, energy-intensive modes of extracting, processing, and consuming matter, at unprecedented scale. But now, the cumulative environmental, health, and social impacts—in economics terms, the negative externalities—of this approach have become unignorable. We can see them nearly everywhere we look, from the health effects of living near highways or oil refineries to the ever-growing issue of plastic, textile, and electronic waste. 

We’re accustomed to thinking about the energy transition as a way of solving the environmental problem of climate change. We need energy to meet human needs—for protection from the elements (whether as warmth or cooling), fuel for cooking, artificial light, social needs like mobility and communication, and more. Decarbonizing our energy systems means meeting these needs without burning fossil fuels and releasing greenhouse gases into the atmosphere. Largely as a result of public investment in clean-energy research and development, a world powered by electricity from abundant, renewable, nonpolluting sources is now within reach.

Just as much as technological breakthroughs, it’s the availability of energy that has shaped our material world

What is much less appreciated is that this shift also has the potential to power a transformation in our relationship with matter and materials, enabling us to address the environmental problem of pollution and waste. That won’t happen by accident, any more than the growth of these industries in the 20th century was an accident. In order to reach this future, we need to understand, research, invest in, and build it. Every joule of electricity that comes from fossil fuels means paying for what’s burned to produce it. In fact, because of the inefficiency of thermal generation, it means paying for many more joules of heat. 

Energy generation from renewable sources has capital and operating costs, of course, but minimal, incremental ones. That’s because the input energy arrives as wind or sunlight, not as boxcars of coal. In the big picture, this means that in a fully decarbonized world, all energy will be closer to hydroelectricity in its economics: while it may never quite be “too cheap to meter,” it may indeed be too cheap to reliably generate a profit on an open energy market. This is a problem for investor-owned energy infrastructure, but it’s potentially transformative for community-owned systems (including public utilities, nonprofit electricity cooperatives, or local microgrids), where cheaper and more abundant energy can power a just transition and a new economy.

Twentieth-century investments in energy infrastructure, like the New Deal’s Rural Electrification Act of 1936 and its counterparts worldwide, formed the basis for the global industrial economy. If we can achieve a similar scale of commitment to renewable energy—prioritizing abundance and access over profit—it will lead to another jump in what’s possible in the material world, where what was previously unthinkably expensive becomes quotidian reality. For example, just like refining aluminum, desalinating seawater is intrinsically energy intensive. But in a world with cheap, clean electricity, residents of coastal cities could get a reliable supply of drinking water from oceanside water treatment plants instead of contested freshwater sources. 

Desalination is not the only energy-intensive process that would become viable. Aluminum, glass, and steel are among the most recycled materials in part because so much energy is needed to make them from their raw precursors that recovery is economically worthwhile. In contrast, plastics—in their near infinite variety—don’t lend themselves to mechanical recycling except in a handful of cases. Effectively recycling plastics means breaking them down into their chemical building blocks, ready to be put together into new forms. And since most plastics will burn to produce heat, going in the opposite direction—reassembling those carbon atoms into new plastics—requires a significant input of energy. It’s always been easier, cheaper, and more profitable to just dump the waste into landfills and make new plastics out of freshly extracted oil and gas. But if the energy came from inexpensive renewables, the whole economic equation of making plastics could change. Carbon dioxide could be pulled from the air and transformed into useful polymers using energy from the sun, with the waste plastic decomposed into raw materials so the process could begin again. 

If this sounds familiar, it’s because it’s how plants work. But, just like Hall and Héroult’s breakthrough for aluminum, new processes would require both energy and technological innovation. Decades of research have gone into creating new kinds of plastics from fossil fuels, and only a proportionally tiny amount into what happens to those plastics at the end of their lives. But now numerous companies, including Twelve, are building on new research to do just this kind of transformation, using renewably sourced energy to turn water and atmospheric carbon dioxide back into hydrocarbons, in the form of fuel and materials.

Prioritizing abundance and access over profit will lead to another jump in what’s possible.

Finally, it’s not just about plastic. If we succeed in building a world of even cheaper and more abundant energy but we again use it to supercharge extraction, consumption, and disposal, then we might “solve” the pressing crisis around energy while worsening the multiple environmental crises posed by pollution. Instead, we can think about community-led investments in energy infrastructure as spinning up a new industrial system in which clean, inexpensive renewable energy makes it possible to recover a broad range of materials. That would cut out the enormous costs of primary extraction and disposal, including environmental depredation and geopolitical conflict. 

Building momentum as fast as we can will limit the materials bill for the huge changes that decarbonization will entail, like replacing combustion-powered vehicles with their electric equivalents. This is already happening with companies like Ascend Elements, currently building a facility in Hopkinsville, Kentucky, to produce materials for new batteries from recycled lithium batteries. It’s financed by more than half a billion dollars of recent private investment that builds on $480 million in Department of Energy grants, and the work is based on fundamental research that was supported by the National Science Foundation. As more and more clean, renewable energy comes online, we need to continue with policies that support research and development on the new technologies required to recover all kinds of materials—together with regulations that account for the true costs of extraction and disposal. This will facilitate not just an energy transition but also a matter transition, ensuring that the industrial sector aligns with the health of our planet.

Deb Chachra is a professor of engineering at Olin College of Engineering in Needham, Massachusetts, and the author of How Infrastructure Works: Inside the Systems That Shape Our World (Riverhead, 2023).

Quartz, cobalt, and the waste we leave behind

24 April 2024 at 05:00

Some time before the first dinosaurs, two supercontinents, Laurasia and Gondwana, collided, forcing molten rock out from the depths of the Earth. As eons passed, the liquid rock cooled and geological forces carved this rocky fault line into Pico Sacro, a strange conical peak that sits like a wizard’s hat near the northwestern corner of Spain.

Today, Pico Sacro is venerated as a holy site and rumored, in the local mythology, to be a portal to hell. But this magic mountain has also become valued in modern times for a very different reason: the quartz deposits that resulted from these geological processes are some of the purest on the planet, making it a rich source of the silicon used to build computer chips. From this dusty ground, the mineral is plucked and transformed into an inscrutable black void of pure inorganic technology, something that an art director could have dreamed up to stand in for aliens or the mirror image of earthly nature.

Ed Conway, a columnist for the Times of London, catches up with this rock’s “epic odyssey” in his new book, Material World: The Six Raw Materials That Shape Modern Civilization.

In a warehouse just a few miles from the peak, he finds a dazzling pile of fist-size quartz chunks ready to be shoveled into a smoking coal-fired furnace running at 1,800 °C, where they are enveloped in a powerful electrical field. The process is not what he expected—more Lord of the Rings than Bay Area startup—but he relishes every near-mystical step that follows as quartz is coaxed into liquid silicon, drawn into crystals, and shipped to the cleanest rooms in the world.

Conway’s quest to understand how chips are made confronts the reality that no one person, “even those working on the supply chain itself,” can really explain the entire process. Conway soon discovers that even an industrial furnace can be a scene of sorcery and wonder, partly because of the electrical current that passes through the quartz and coal. “Even after more than a hundred years of production, there are still things people don’t understand about what’s happening in this reaction,” he is told by Håvard Moe, an executive at the Norwegian company Elkem, one of Europe’s biggest silicon producers.

Conway explains that the silicon “wafers” used to make the brains of our digital economy are up to 99.99999999% pure: “for every impure atom there are essentially 10 billion pure silicon atoms.” The silicon extracted from around Pico Sacro leaves Spain already almost 99% pure. After that, it is distilled in Germany and then sent to a plant outside Portland, Oregon, where it undergoes what is perhaps its most entrancing transformation. In the Czochralski or “CZ” process, a chamber is filled with argon gas and a rod is dipped repeatedly into molten refined silicon to grow a perfect crystal. It’s much like conjuring a stalactite at warp speed or “pulling candy floss onto a stick,” in Conway’s words. From this we get “one of the purest crystalline structures in the universe,” which can begin to be shaped into chips.

Material World is one of a spate of recent books that aim to reconnect readers with the physical reality that underpins the global economy. Conway’s mission is shared by Wasteland: The Secret World of Waste and the Urgent Search for a Cleaner Future, by Oliver Franklin-Wallis, and Cobalt Red: How the Blood of the Congo Powers Our Lives, by Siddharth Kara. Each one fills in dark secrets about the places, processes, and lived realities that make the economy tick.

Conway aims to disprove “perhaps the most dangerous of all the myths” that guide our lives today: “the idea that we humans are weaning ourselves off physical materials.” It is easy to convince ourselves that we now live in a dematerialized “ethereal world,” he says, ruled by digital startups, artificial intelligence, and financial services. Yet there is little evidence that we have decoupled our economy from its churning hunger for resources. “For every ton of fossil fuels,” he writes, “we exploit six tons of other materials—mostly sand and stone, but also metals, salts, and chemicals. Even as we citizens of the ethereal world pare back our consumption of fossil fuels, we have redoubled our consumption of everything else. But, somehow, we have deluded ourselves into believing precisely the opposite.”

Quartz
Cobalt

Conway delivers rich life stories of the resources without which our world would be unrecognizable, covering sand, salt, iron, copper, oil, and lithium. He buzzes with excitement at every stage, with a correspondent’s gift for quick-fire storytelling, revealing the world’s material supply chains in an avalanche of anecdote and trivia. The supply chain of silicon, he shows, is both otherworldly and incredibly fragile, encompassing massive, anonymous industrial giants as well as terrifyingly narrow bottlenecks. Nearly the entire global supply of specialized containers for the CZ dipping process, for example, is produced by two mines in the town of Spruce Pine, North Carolina. “What if something happened to those mines? What if, say, the single road that winds down from them to the rest of the world was destroyed in a landslide?” asks Conway. “Short answer: it would not be pretty. ‘Here’s something scary,’ says one veteran of the sector. ‘If you flew over the two mines in Spruce Pine with a crop duster loaded with a very particular powder, you could end the world’s production of semiconductors and solar panels within six months.’” (Conway declines to print the name of the substance.)

Yet after such an impressive journey through deep time and the world economy, how long will any electronic gadget last? The useful life of our electronics and many other products is likely to be a short blip before they return to the earth. As Oliver Franklin-Wallis writes in Wasteland, electronic waste is one stubborn part of the 2 billion tons of solid waste we produce globally each year, with the average American discarding more than four pounds of trash each day.

Wasteland begins with a trip to Ghazipur, India, the “largest of three mega-landfills that ring Delhi.” There, amid an aromatic fug of sticky-sweet vapors, Franklin-Wallis stomps through a swamp-like morass of trash, following his guide, a local waste picker named Anwar, who helps him recognize solid stepping-stones of trash so that he may safely navigate above the perilous system of subterranean rivers that rush somewhere unseen below his feet. Like the hidden icy currents that carve through glaciers, these rivers make the trash mountain prone to cleaving and crumbling, leading to around 100 deaths a year. “Over time, [Anwar] explains, you learn to read the waste the way sailors can read a river’s current; he can intuit what is likely to be solid, what isn’t. But collapses are unpredictable,” Franklin-Wallis writes. For all its aura of decay, this is also a living landscape: there are tomato plants that grow from the refuse. Waste pickers eat the fruits off the vine.

Wasteland is best when excavating the stories buried in the dump. In 1973, academics at the University of Arizona, led by the archaeologist William Rathje, turned the study of landfills into a science, labeling themselves the “garbologists.” “Trash, Rathje found, could tell you more about a neighborhood—what people eat, what their favorite brands are—than cutting-edge consumer research, and predict the population more accurately than a census,” Franklin-Wallis writes. “Unlike people,” he adds, “garbage doesn’t lie.”

Wasteland leaves a lasting impression of the trash-worlds that we make. Most horrifying of all, the contents of landfills don’t decompose the way we expect. By taking geological cores from landfills, Rathje found that even decades later, our waste remains a morbid museum: “onion parings were onion parings, carrot tops were carrot tops. Grass clippings that might have been thrown away the day before yesterday spilled from bulky black lawn and leaf bags, still tied with twisted wire.”

Simply shifting to “sustainable” or “cleaner” technologies doesn’t eliminate the industrial fallout from our consumption.

Franklin-Wallis’s histories help tell us where we as a civilization began to go wrong. In ancient Rome, waste from public latrines was washed away with wastewater from the city’s fountains and bathhouses, requiring a “complex underground sewer system crowned by the Cloaca Maxima, a sewer so great that it had its own goddess, Cloacina.” But by the Victorian age, the mostly circular economy of waste was coming to an end. The grim but eco-friendly job of turning human effluent into farm fertilizer (so-called “nightsoil”) was made obsolete by the adoption of the home flushing toilet, which pumped effluent out into rivers, often killing them. Karl Marx identified this as the beginning of a “metabolic rift” that—later turbocharged by the development of disposable plastics—turned a sustainable cycle of waste reuse into a conveyor between city and dump.

This meditation on trash can be fascinating, but the book never quite lands on a big idea to draw its story forward. While trash piles can be places of discovery, our propensity to make waste is no revelation; it’s an ever-present nightmare. Many readers will arrive in search of answers that Wasteland isn’t offering. Its recommendations are ultimately modest: the author resolves to buy less, learns to sew, appreciates the Japanese art of kintsugi (mending pottery with precious metals to highlight the act of repair). A handful of other lifestyle decisions follow.

As Franklin-Wallis is quick to acknowledge, a journey through our own waste can feel hopeless and overwhelming. What we’re lacking are viable ways to steer our societies off the incredibly resource-intensive paths they are on. Designers and activists driving the Green New Deal have taken up this thought, aiming to turn our attention away from dwelling on our personal “footprint”—a murky idea that Franklin-Wallis traces to industry groups lobbying to deflect blame from themselves.

Reframing both waste and supply chains as matters that are political and international, rather than personal, could guide us away from guilt and move us toward solutions. Instead of looking at production and waste as separate problems, we can think of them as two aspects of one great challenge: How do we build homes, design transport systems, develop technology, and feed the world’s billions without creating factory waste upstream or trash downstream?

The Shabara artisanal cobalt mine near Kolwezi, Democratic Republic of Congo.
ARLETTE BASHIZI/FOR THE WASHINGTON POST VIA GETTY IMAGES

Simply shifting to “sustainable” or “cleaner” technologies doesn’t eliminate the industrial fallout from our consumption, as Siddharth Kara reveals in Cobalt Red. Cobalt is a part of just about every rechargeable device—it is used to make the positively charged end of lithium batteries, for example, and each electric vehicle requires 10 kilograms (22 pounds) of cobalt, 1,000 times the quantity in a smartphone.

Half the world’s reserves of the element are found in Katanga, in the south of the Democratic Republic of Congo (DRC), which puts this resource-rich region at the center of the global energy transition. In Kara’s telling, the cobalt rush is another chapter in an age-old story of exploitation. In the last two centuries, the DRC has been a center not only for the bloody trade in enslaved humans but also for the colonial extraction of rubber, copper, nickel, diamonds, palm oil, and much more. Barely a modern catastrophe has unfolded without resources stolen from this soil: copper from the DRC made the bullets for two world wars; uranium made the bombs dropped on Hiroshima and Nagasaki; vast quantities of tin, zinc, silver, and nickel fueled Western industrialization and global environmental crises. In return, the DRC’s 100 million people have been left with little by way of lasting benefits. The country still languishes at the foot of the United Nations development index and now faces disproportionate impacts from climate change.

In Cobalt Red, Congo’s history plays out in vignettes of barbarous theft perpetrated by powerful Western-backed elites. Kara, an author and activist on modern slavery, structures the book as a journey, drawing frequent parallels to Joseph Conrad’s 1899 Heart of Darkness, with the city of Kolwezi substituting for Kurtz’s ivory-trading station, the destination in the novella. Kolwezi is the center of Katanga’s cobalt trade. It is “the new heart of darkness, a tormented heir to those Congolese atrocities that came before—colonization, wars, and generations of slavery,” Kara writes. The book provides a speedy summary of the nation’s history starting with the colonial vampirism of the Belgian king Leopold’s “Free State,” described by Conrad as the “vilest scramble for loot that ever disfigured the history of human conscience.” The king’s private colony forced its subjects to collect rubber under a system of quotas enforced by systematic execution and disfigurement; forced labor continued well into the 20th century in palm oil plantations that supplied the multinational Unilever company.        

These three books offer to connect the reader to the feel and smell and rasping reality of a world where materials still matter.

Kara’s multiyear investigation finds the patterns of the past repeating themselves in today’s green boom. “As of 2022, there is no such thing as a clean supply chain of cobalt from the Congo,” he writes. “All cobalt sourced from the DRC is tainted by various degrees of abuse, including slavery, child labor, forced labor, debt bondage, human trafficking, hazardous and toxic working conditions, pathetic wages, injury and death, and incalculable environmental harm.” Step by step, Kara’s narrative moves from the fringes of Katanga’s mining region toward Kolwezi, documenting the free flow of minerals between two parallel systems supposedly divided by a firewall: the formal industrial system, under the auspices of mining giants that are signatories to sustainability pacts and human rights conventions, and the “artisanal” one, in which miners with no formal employer toil with shovels and sieves to produce a few sacks of cobalt ore a day.

We learn of the system of creuseurs and négociants—diggers and traders—who move the ore from denuded fields into the formal supply chain, revealing that an unknown percentage of cobalt sold as ethical comes from unregulated toil. If Material World tells a neat story of capitalism’s invisible hand, the force that whisks resources around the planet, Cobalt Red documents a more brutal and opaque model of extraction. In Kara’s telling, the artisanal system is grueling and inefficient, involving countless middlemen between diggers and refineries who serve no purpose except to launder ore too low-grade for industrial miners and obscure its origins (while skimming off most of the earnings).

Everywhere Kara finds artisanal mining, he finds children, including girls, some with babies on backs, who huddle together to guard against the threat of sexual assault. There is no shortage of haunting stories from the frontlines. Cobalt ore binds with nickel, lead, arsenic, and uranium, and exposure to this metal mixture raises the risk of breast, kidney, and lung cancers. Lead poisoning leads to neurological damage, reduced fertility, and seizures. Everywhere he sees rashes on the skin and respiratory ailments including “hard metal lung disease,” caused by chronic and potentially fatal inhalation of cobalt dust.

One woman, who works crushing 12-hour days just to fill one sack that she can trade for the equivalent of about 80 cents, tells how her husband recently died from respiratory illness, and the two times she had conceived both resulted in miscarriage. “I thank God for taking my babies,” she says. “Here it is better not to be born.” The book’s handful of genuinely devastating moments arrive like this—from the insights of Congolese miners, who are too rarely given the chance to speak.

All of which leaves you to question Kara’s strange decision to mold the narrative around the 125-year-old Heart of Darkness. It has been half a century since the Nigerian novelist Chinua Achebe condemned Conrad’s novella as a “deplorable book” that dehumanized its subjects even as it aimed to inspire sympathy for them. Yet Kara doubles down by mirroring Conrad’s storytelling device and style, from the first sentence (featuring “wild and wide-eyed” soldiers wielding weapons). When Kara describes how the “filth-caked children of the Katanga region scrounge at the earth for cobalt,” who is the object of disgust: the forces of exploitation or the miners and their families, often reduced to abstract figures of suffering?

Following Conrad, Cobalt Red becomes, essentially, a story of morality—an “unholy tale” about the “malevolent force” of capital—and reaches a similarly moralistic conclusion: that we must all begin to treat artisanal miners “with equal humanity as any other employee.” If this seems like an airy response after the hard work of detailing the intricacies of cobalt’s broken supply chain, it is doubly so after Kara documents both the past waves of injustice and the moral crusades that have brought the Free State and old colonial structures to an end. Such calls for humanistic fairness toward Congo have echoed down the ages.

Material World: The Six Raw Materials That Shape Modern Civilization, by Ed Conway
Cobalt Red: How the Blood of the Congo Powers Our Lives, by Siddharth Kara
Wasteland: The Secret World of Waste and the Urgent Search for a Cleaner Future, by Oliver Franklin-Wallis

All three books offer to connect the reader to the feel and smell and rasping reality of a world where materials still matter. But in Kara’s case, such a strong focus on documenting firsthand experience edges out a deeper understanding. There is little space given to the numerous scholars from across the African continent who have made sense of how politics, commerce, and armed groups together rule the DRC’s deadly mines. The Cameroonian historian Achille Mbembe has described sites like Katanga not only as places where Western-style rule of law is absent but as “death-worlds” constructed and maintained by rich actors to extract resources at low cost. More than simply making sense of the current crisis, these thinkers address the big questions that Kara asks but struggles to answer: Why do the resources and actors change but exploitation remains? How does this pattern end?

Matthew Ponsford is a freelance reporter based in London.

Building momentum

By: Mat Honan
24 April 2024 at 05:00

One of the formative memories of my youth took place on a camping trip at an Alabama state park. My dad’s friend brought an at-the-time gee-whiz gadget, a portable television, and we used it to watch the very first space shuttle launch from under the loblolly pines. It was thrilling. And it was hard not to believe, watching that shuttle go up (and, a few days later, land), that we were entering an era when travel into the near reaches of space would become common. 

But as it turns out, that’s not the future we built.

This is our Build issue, and although it’s certainly about creating the future we want, in many ways this issue is also about a future that never arrived. Interplanetary space stations. Friendly robots. Even (if you squint and accept a generous definition) terraforming an increasingly uninhabitable Earth. 

Building is a popular tech industry motif—especially in Silicon Valley, where “Time to build” has become something of a call to arms following an influential essay by Marc Andreessen that lamented America’s seeming inability to build just about anything. That essay was published four years ago, at the apex of the country’s disastrous response to covid-19, when masks, PPE, and even hospital beds were in short supply. (As were basic necessities of day-to-day life like eggs, flour, and toilet paper.) It’s an alluring argument. 

Yet the future is built brick by brick from the imperfect decisions we make in the present. We don’t often recognize that the seeming steps forward we are taking today could be seen as steps back in the years to come. This could very well be how we come to view some of the efforts we are making in terms of climate remediation. Xander Peters (accompanied by some incredible photography from Virginia Hanusik) writes about Louisiana’s attempts to protect communities against increased flooding—and wonders whether a managed retreat might be the better course of action.

Sometimes the things we don’t do, or the steps we skip, have bigger implications than the actions we do take. For the space program, the decision to race to the moon rather than to first build a way station—as was originally envisioned by some of the pioneers of space travel—may have had the long-term effect of keeping us more earthbound than we might otherwise be. David W. Brown looks at the fallout of those skipped steps and recounts the race to build a new, privately operated space station before the International Space Station comes plummeting back to Earth around 2030. 

Other times, we’re just held back because we haven’t figured out how to do things yet. Simply put: the tech just isn’t quite there. For our cover story on home robots, Melissa Heikkilä looks at how the intersection of robotics and artificial intelligence, and especially large language models, could at last be ushering in the era of helper robots that we’ve been dreaming of since the days of The Jetsons. It’s such a fertile area of development, with action from both big industry incumbents like Google and highly specialized, sometimes secretive startups, that there is far more than we could get into in a single story.

“There was an entire interview with Meta that I didn’t end up using,” Melissa told me. “They have a team working on ‘embodied AI,’ which believes that true general intelligence needs a physical element to it, such as robots or glasses. They’ve built an entire mock apartment in one of their offices, including a full-size living room, kitchen, dining room, and so on, in which they conduct experiments with robots and virtual reality. It’s pretty cool!”

Look for us to keep that reporting going at technologyreview.com.

And there’s much more, too—including a zinger of a story from Annalee Newitz that takes on the history of brainwashing, a feature on building accountability into police body cameras, and a wild report on designing vegan cheese with generative AI. We hope you find something to take away and build on. 

Thanks for reading,

Mat Honan

“I wanted to work on something that didn’t exist”

23 April 2024 at 17:00

In 2017 Polina Anikeeva, PhD ’09, was invited to a conference in the Netherlands to give a talk about magnetic technologies that she and her team had developed at MIT and how they might be used for deep brain stimulation to treat Parkinson’s disease. After sitting through a long day of lectures, she was struck by one talk in particular, in which a researcher brought up the idea that Parkinson’s might be linked to pathogens in the digestive system. And suddenly Anikeeva, who had pioneered the development of flexible, multifunction brain probes, found herself thinking about how she might use these probes to study the gut.

While the idea of switching gears might give some researchers pause, Anikeeva thrives on venturing beyond her academic comfort zones. In fact, the path that led to her becoming the Matoula S. Salapatas Professor in Materials Science and Engineering—as well as a professor of brain and cognitive sciences, associate director of MIT’s Research Laboratory of Electronics, and an associate investigator at MIT’s McGovern Institute for Brain Research—was rarely a clear or obvious one. There is, however, one constant in everything she does: an indefatigable curiosity that pushes her toward the edge of risk—or, as she likes to call it, “the intellectual abyss.”

After the conference in the Netherlands, she soon dove into studying the human gut, a system that doesn’t simply move nutrients through the body but also has the capacity to interpret or send information. In fact, she has come to think of it as a largely uncharted “distributed nervous system.” In 2022, she became the director of the newly launched K. Lisa Yang Brain-Body Center at MIT, where she’s directing research into the neural pathways beyond the brain—work that could shed light on the processes implicated in aging and pain, the mechanisms behind acupuncture, and the ways digestive issues might be linked not just to Parkinson’s but to autism and other conditions.

Although she hadn’t heard of it before that conference in the Netherlands, the hypothesis that piqued Anikeeva’s interest in studying the brain-body connection was first posed by the German anatomist Heiko Braak in 2003. He and colleagues posited that a type of Parkinson’s disease has environmental origins: a pathogen that enters the body through the mouth or the nasal cavity and ends up in the digestive tract, where it triggers the formation of abnormal, possibly toxic clumps of protein within neurons. That condition, known as Lewy pathology, is the hallmark of the disease.

“The reason the environmental hypothesis came about is because those Lewy bodies actually have been found in the GI tract of patients with Parkinson’s,” Anikeeva explains. “But what’s more striking is that if you go back in the medical history, Parkinson’s patients—many of them, like 80% or so—have been diagnosed with GI dysfunction, most commonly constipation, years before they get a Parkinson’s diagnosis.”

Functions, behaviors, and diseases long thought to originate in the brain might be influenced by signals from other parts of the body.

Researchers have debated the hypothesis and have yet to make definite causal connections between the ingestion of pathogens and the progression of Parkinson’s disease. But Anikeeva was intrigued. 

“It’s quite controversial and it has seen some attempts at testing, but nothing conclusive,” she says. “I thought that my lab had a unique tool kit to start testing this hypothesis.”  

Anikeeva examines the microscopic gut-brain interfaces her team developed.
GRETCHEN ERTL

At the time, Anikeeva’s lab was focused on flexible polymer-fiber probes that can interface with the brain and spinal cord. Having developed these fibers, she and her team were testing them in mice, both to stimulate neurons and to record their signals so they could study the ways in which those signals underlie behavior. The lab had also been working on using magnetic nanomaterials to stimulate neurons so their activity could be regulated remotely—without needing to run fibers to a mouse’s brain at all.  

Braak’s hypothesis made Anikeeva wonder: Could similar multifunctional probes be used to explore the digestive system? Could she and her team engineer gut-specific tools to study the neurons that make up what’s known as the enteric nervous system, which regulates sensing, moving, absorbing, and secreting—the tasks that the gastrointestinal tract must perform to digest food? And for that matter, could they study any of the body’s peripheral systems?

“I started thinking about interfacing not only with the central nervous system, but also with other organ systems in the body, and about how those organ systems contribute to brain function,” she explains.

Ultimately, this interface could help researchers understand the way the body communicates with the brain and vice versa, and to pinpoint where diseases, including Parkinson’s, originate.

“For many years neuroscience has essentially considered the brain in a vacuum. It was this beautiful thing floating, disconnected,” Anikeeva says. “Now we know that it’s not in a vacuum … The body talks back to the brain. It’s not a strictly downward information flow. Whatever we think—our personality, our emotions—may not only come from what we perceive as the conscious brain.” In other words, functions, behaviors, and neurodegenerative diseases long thought to originate in the brain—perhaps even the act of thinking itself—might be influenced by signals from other parts of the body. “Those are the signals that I’m very excited about studying,” she says. “And now we have the tools to do that.”

“It’s opened technological floodgates into these neuroscience questions,” she adds. “This is a new frontier.”


Anikeeva grew up in Saint Petersburg, Russia, the child of engineers, and showed brilliance from an early age. She was admitted to a selective science magnet school, but she briefly considered pursuing a career in art.

“I was about 15 years old when I was choosing between professional art and professional physics, and I didn’t want to be poor,” she says with a laugh. “Being good at watercolor doesn’t help with leaving Russia, which was my objective. I grew up in a very unstable political environment, a very unstable economic environment. Nobody becomes an artist if they can do something else that’s more practical.” She chose science and earned her undergraduate degree in biophysics at Saint Petersburg State Polytechnic. 

But Anikeeva says her artistic brain, along with the mind-clearing avocations of climbing and long-distance running, helps her with her work today: “I use that way of thinking, the imagination, to think conceptually about how a device might come together. The idea comes first as an image.”

After graduating, Anikeeva got an internship in physical chemistry with the Los Alamos National Laboratory in New Mexico and worked on solar cells using quantum dots. In 2004, she arrived at MIT to begin her PhD in materials science and engineering.

Duke University postdoc Laura Rupprecht, MIT graduate student Atharva Sahasrabudhe (holding a fiber gut probe), and MIT postdoc Sirma Orguc, SM ’16, PhD ’21, in the lab.
PHOTO COURTESY OF THE RESEARCHERS

As a graduate student, Anikeeva helped develop quantum-dot ­LED display technology that’s now used by television manufacturers and sold in stores around the world. She has coauthored two papers on that research with her primary advisor, Vladimir Bulović, the Fariborz Maseeh (1990) Professor of Emerging Technology, associate dean for innovation at the School of Engineering, and director of MIT.nano, and seven with Bulović and Nobel Prize winner Moungi Bawendi, MIT’s Lester Wolfe Professor of Chemistry.

But after earning her PhD in 2009, Anikeeva says, she got bored—as she frequently does. “I wanted to work on something that didn’t exist,” she says.

That led her to seek out a postdoctoral fellowship in neuroscience at Stanford University in the lab of Karl Deisseroth, one of the inventors of optogenetics, which uses laser light to activate proteins in genetically modified brain cells. Optogenetic tools make it possible to trigger or inhibit neurons in test rodents, creating an on/off switch that lets researchers study how the neurons work. 

“I was really fortunate to be hired into that lab, despite the fact that my PhD, ultimately, was not in neuroscience but in optical electronics,” she says. “I saw all these animals running around with optical cables coming out of their heads, and it was amazing. I wanted to learn how to do that. That’s how I came to neuroscience.”

Realizing that the tools neuroscientists used to study complex biological phenomena in the brain were inadequate, she started to develop new ones. In Deisseroth’s lab, she found a way to improve upon the fiber-optic probes they were using. Her version incorporated multiple electrodes, allowing them to better capture neuronal signals. 

Probing the brain is challenging because it’s very soft—“like pudding,” as she puts it—and the tools researchers used then were rigid and potentially damaging. So when Anikeeva returned to MIT as an assistant professor, her lab collaborated with Yoel Fink, PhD ’00, a professor of materials science and engineering as well as electrical engineering and computer science and director of MIT’s Research Laboratory of Electronics, to create very thin, highly flexible fibers that can enter the brain and the spinal cord without doing any harm (see “A Better Way to Probe the Brain,” MIT News, May/June 2015). Unlike the bulky hardware that Deisseroth was using to deliver light for optogenetics, Anikeeva’s fibers are multifunctional. They’re made of an optical core surrounded by polycarbonate and lined with electrodes and microfluidic channels, all of which are heated and then stretched in production. “You pull, pull, pull and you get kilometers of fiber that are pretty tiny,” Anikeeva explains. “Ultimately it gets drawn down to about a hair-thin structure.”

Using these ultrathin fibers, researchers can record neuronal signals and send their own signals to neurons in the brain and spinal cord of genetically engineered mice to turn them on and off. The fibers offered a new way to investigate neural responses—and earned Anikeeva a spot on our 2015 list of 35 Innovators Under 35. They also proved to be a useful therapeutic tool for drug delivery using the fibers’ microfluidic channels.

As this work hummed along, Anikeeva heard about Braak’s hypothesis in 2017 and set out to find resources to investigate the gut-brain connection. “I promptly wrote an NIH grant, and I promptly got rejected,” she says. 

But the idea persisted.

Later that year, neural engineers studying brain interfaces at Duke invited Anikeeva to give a talk. As she had gotten in the habit of doing during her travels to other universities, she looked up researchers working on GI systems there. She found the gut-brain neuroscientist Diego Bohórquez.

While the brain is extraordinarily complex, from an engineering and research standpoint it’s much more convenient to study than the digestive tract.

“I told him that I’m really interested in the gut, and he told me that they were … studying nutrient absorption in the gut and how it affects brain function,” Anikeeva recalls. “They wanted to use optogenetics for that.”

But the glass fibers he’d been trying to use for optogenetics in the gut could do serious damage to the fragile GI system. So Anikeeva proposed a trade of sorts.

“I thought that we can easily solve Diego’s problems,” she says. “We can make devices that are highly flexible, basically in exchange for Diego teaching us everything about the gut and how to work in that really fascinating system.”

Bohórquez remembers their first meeting, the beginning of a fruitful collaboration, in some detail. “She said, ‘I see that you are doing some really interesting work in sensations and the gut. I’m sure that you’re probably trying to do something with behavior,’” he says. “And then she pulls out these fibers and said, ‘I have this flexible fiber. Do you think that you can do something with it?’”

A multifunctional fiber-based brain interface.
Lee Maresco fabricates stretchable organ probes under a microscope.

She returned to MIT and, she says, began to “take this lab that is a rapidly moving aircraft carrier and start reorienting it from working on the brain to working on the gut.”

The move may have surprised colleagues, but Anikeeva refuses to do anything if it loses her interest—and while the brain is extraordinarily complex, from an engineering and research standpoint it’s much more convenient to study than the digestive tract. “The gut wall is about 300 microns or so,” Anikeeva says. “It’s like three to four hairs stuck together. And it’s in continuous motion and it’s full of stuff: bile, poop, all the things.” The challenges of studying it, in other words, are nothing short of daunting.

The nervous system in the gut, Anikeeva explains, can be thought of as two socks, one inside the other. The one on the outside is the myenteric plexus, which regulates peristalsis—the rhythmic contraction of muscles that enables food to move along the gastrointestinal tract, a process known as motility. The one on the inside is the submucosal plexus, which is closer to the mucosa (the mucus-coated inner lining) and facilitates sensing within the gut. But the roles of the plexuses are not fully understood. “That’s because we can’t just implant the gut full of hardware the same way we do in the brain,” Anikeeva says. “All the methods, like optogenetics and any kind of electrical physiology—all of that was pretty much impossible in the gut. These were almost intractable problems.”


Anikeeva’s work developing tools for the brain had been so successful and groundbreaking that it was difficult for her to find financial support for her pivot to other parts of the body. But then, she says, came “another fateful meeting.”

In 2018, she gave a presentation at a McGovern Institute board meeting, conveying her latest ideas about studying Parkinson’s disease and engineering tools to explore the GI system. Lisa Yang, a board member, mentioned that many people with autism also suffer from GI dysfunction—from motility disorders to food sensitivities. Yang was already deeply interested in autism, and she and her husband had just launched the McGovern Institute’s Hock E. Tan (’75, SM ’75) and K. Lisa Yang Center for Autism Research the year before. 

“She was interested in this gut-brain connection,” Anikeeva remembers. “I was brought into the Center for Autism Research for a small project, and that small project kind of nucleated my ability to do this research—to begin developing tools to study the gut-brain connection.”

As that effort got underway, a number of colleagues at MIT and elsewhere who were also interested in brain-body pathways were drawn to the new research.

A white plastic model of the mouse stomach and devices for studying brain-organ communication in various stages of design.
STEPH STEVENS

“As our tools started to mature, I started meeting more people and it became clear to me that I’m not the only person interested in this area of inquiry at MIT,” she says. “The tools opened this frontier, and the Brain-Body Center bubbled up from that.” 

To launch into their work on the gut-brain connection, Anikeeva and her team had to completely rethink the fibers they had designed previously to study the brain. 

In brain probes, all the functional features sit at the tip of the fiber, and when that fiber is threaded into the skull, the light-emitting tip faces downward, allowing researchers a view of everything under it. That doesn’t work with the GI system. “It’s not how you want to interface with the gut,” Anikeeva says. “The gut is a luminal organ—it’s a sock—and the nervous system is distributed in the wall.”

In other words, if the probe is looking downward, all it will see is matter passing through the gut. To research the GI tract, Anikeeva and her colleagues needed these features to sit laterally, along the length of the fiber. So with this fabrication challenge in mind, Anikeeva again approached Fink, a longtime mentor and collaborator—and a fellow TR35 veteran. 

Mice “would normally eat ferociously” when given access to food after fasting. “But if you stimulate those cells in the gut, they would feel full.”

Together they developed a way to distribute microelectronic components—LEDs for optogenetic stimulation, temperature sensors, and microfluidic channels that can deliver drugs, nutrients, and genetic material—along the fiber by essentially creating a series of pockets to contain them. Grad student Atharva Sahasrabudhe put in countless hours to make it happen and optimized the process with the help of technician Lee Maresco, Anikeeva says. Then, with Anantha P. Chandrakasan, dean of MIT’s School of Engineering, the Vannevar Bush Professor of EECS, and head of MIT’s Energy-Efficient Circuits and Systems Group, the team designed a wireless, battery-powered unit that could communicate with all those components.

The result was a fiber, about half a millimeter by one-third of a millimeter wide, made out of a rubbery material that can bend and conform to a mouse’s gut yet withstand its harsh environment. And all the electronic components housed within it can be controlled wirelessly via Bluetooth. 

“We had all the materials engineers, and then we collaborated with our wireless colleagues, and we made this device that could be implanted in the gut. And then, of course, similar principles can also be used in the brain,” Anikeeva explains. “We could do experiments both in the brain and the gut.”

Anikeeva consults in the lab with postdoc Taylor Cannon, who is working on extending fiber technology to biological imaging applications.
GRETCHEN ERTL

In one of the first experiments with the new fibers, Anikeeva worked with Bohórquez and his team, who had determined that sensory cells in the GI tract, called neuropods, send signals to the brain that control sensations of satiety. Using mice whose cells are genetically engineered to respond to light, the MIT and Duke researchers used the specialized fibers to optically stimulate these cells in the gut.

“We could take mice that are hungry, that have been fasting for about 18 hours, and we could put them in a cage with access to food, which they would normally eat ferociously,” Anikeeva says. “But if you stimulate those cells in the gut, they would feel full even though they were hungry, and they would not eat, or not as much.”

This was a breakthrough. “We knew that the technology works,” she says, “and that we can control gut functions from the gut.”

Next Anikeeva’s team wanted to explore how these neural connections between the gut and the brain can influence a mouse’s perception of reward or pleasure. They put the new fiber into the area of the brain where reward perception is processed. It’s packed with neurons that release dopamine—the “happy hormone”—when activated.

Then they ran tests in which mice had a choice between two compartments in a cage; each time a mouse entered a particular one, the researchers stimulated its dopamine neurons, causing the mouse to prefer it. 

To see if they could replicate that reward-seeking behavior through the gut, the researchers used the gut-specific fibers’ microfluidic channels to infuse sucrose into the guts of the mice whenever they entered a particular compartment—and watched as dopamine neurons in the brain began firing rapidly in response. Those mice soon tended to prefer the sucrose-associated compartment. 

But Anikeeva’s group wondered if they could control the gut without any sucrose at all. In collaboration with Bohórquez and his team at Duke, the researchers omitted the sucrose infusion and simply stimulated the gut neurons when the mice entered a designated compartment. Once again, the mice learned to seek out that compartment.

“We didn’t touch the brain and we stimulated nerve endings in the gut, and the mice developed the exact same type of preference—they felt happy just when we stimulated the nerve endings in their small intestines using our technology,” Anikeeva says. “This, of course, was a technical demonstration that it is now possible to control the nervous system of the gut.”

The new tools will make it possible to study how different cells in the gut send information to the brain, and ultimately the researchers hope to understand the origins not only of digestive diseases, like obesity, but of autism and neurodegenerative diseases such as Parkinson’s.

Researchers at the Brain-Body Center are already exploring those connections.  “We’re particularly interested in the gut-brain connection in autism,” Anikeeva says. “And we’re also interested in more affective disorders, because there is a big genetic link, for instance, between anxiety and IBS [or irritable bowel syndrome].”

In the future, the technology also could lead to new therapies that can control gut function more precisely and effectively than drugs, including semaglutide drugs like Ozempic, which have made headlines in the past year for weight control.

Now that Anikeeva has developed and tested the device in the GI system and solved a lot of technical challenges, other peripheral systems in the body could be next.

“The gut is innervated, but so is every organ in the body. Now we can start asking questions: What is the connection to the immune system? The connection to the respiratory system?” she says. “All of these problems are now becoming tractable. This is the beginning.”


Probing the mind-body connection

Founded in 2022, the K. Lisa Yang Brain-Body Center at MIT is focusing on four major lines of research for its initial projects.

GUT-BRAIN:

Polina Anikeeva’s group is expanding a toolbox of new technologies and applying these tools to examine major neurobiological questions about gut-brain pathways and connections in the context of autism spectrum disorders, Parkinson’s disease, and affective disorders.

AGING:

CRISPR pioneer Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and an investigator at the McGovern Institute, is leading a group in developing molecular tools for precision epigenomic editing and erasing accumulated “errors” of time, injury, or disease in various types of cells and tissues.

PAIN:

The lab of Fan Wang, an investigator at the McGovern Institute and professor of brain and cognitive sciences, is designing new tools and imaging methods to study autonomic responses, activity of the sympathetic and parasympathetic neurons, and interactions between the brain and the autonomic nervous system, including how pain influences these interactions.

ACUPUNCTURE:

Wang is also collaborating with Kelly Metcalf Pate’s group in MIT’s Division of Comparative Medicine to advance techniques for documenting changes in brain and peripheral tissues induced by acupuncture in mouse models. If successful, these techniques could help make it possible to better understand the mechanisms involved in acupuncture: specifically, how the treatment stimulates the nervous system and restores function.

Part of the goal of the Brain-Body Center, Wang says, is to dissect how the circuits of the central nervous system interact with the peripheral autonomic system to generate emotional responses to pain. She says her research has led her to a deeper understanding of the two responses to pain: sensory and emotional. The latter, a function requiring the autonomic nervous system, is what leads to a sense of suffering. If researchers can prevent the autonomic responses elicited by pain, she explains, then the same stimulus may produce “a sensation without pain.” The idea is to develop devices to manipulate autonomic responses in mice, and then ultimately develop devices that can help humans.  —Julie Pryor and Georgina Gustin

A walking antidote to political cynicism

23 April 2024 at 17:00

Burhan Azeem ’19 had never been to a city council meeting before he showed up to give a public comment on an affordable-housing bill his senior year. Walking around Cambridge, he saw a “young, dynamic, racially diverse city,” but when he stepped inside City Hall, most of the others who had arrived to present comments were retirees reflecting a much narrower—and older—demographic.

Less than a year later, Azeem set out to shift the balance in who gets to make decisions on behalf of the city by running for city council himself.

A materials science and engineering major, Azeem had long been civically engaged, volunteering for Ayanna Pressley’s campaign for the US Congress as a junior. But what really set him on the path to local politics was his curiosity about why living in Cambridge is so expensive. He’d experienced the problems that arise from a lack of access to affordable housing as a kid in New York, and he wanted to understand what was contributing to that problem in the city where he’d chosen to live as an adult.

He launched his campaign a month before graduation—encouraged by Marc McGovern, himself a council member and at the time the city’s mayor, whom he’d met while campaigning for Pressley. (In Cambridge, the council chooses the mayor from within its ranks.) Azeem lost by a hundred votes, but he outperformed a candidate who’d raised more than $40,000, while he himself had raised less than $7,000. That made him think it might be worth another try. So in 2021 he ran again, and he won by 200 votes. At age 24, he was the youngest Cambridge city councilor ever elected. 

He quickly set to work trying to make Cambridge a better city, passing bills focused on housing, transit, and climate initiatives. Those successes set him up not just to win reelection in November 2023, but to garner more votes than any other council member but the mayor. 

“We passed a lot of policy—way more than an average term,” he says. “What’s cool about city council is that even though we don’t have as big a scope as Congress or the state house, we have absolute power where we do have power. Over our roads and housing zoning policy, even the president cannot tell me what to do. I think that’s why I’ve had so much success: I’m very narrowly focused on the places where we can make a really big change.”

Azeem in front of Cambridge City Hall
Azeem won reelection in November 2023, garnering more votes than any other council member but the mayor.
TOAN TRINH

If Azeem didn’t have an average first term, maybe it’s because there’s very little about him that’s average. In addition to serving on the city council, he’s also employed full-time at Tandem, a startup offering pop-up veterinary clinics, a pharmacy, and telehealth for pets that he helped get off the ground with former classmates from MIT, among others. As the company’s head of AI engineering, Azeem has led an effort to use AI to suggest medications and is working on developing tools that could potentially help vets with diagnoses. The founding team is the same one with which he helped build DayToDay Health, a startup that offers digital tools and live chat to support human patients before and after medical procedures. Having served as an EMT with MIT’s Emergency Medical Services as an undergrad, Azeem found working for DayToDay especially meaningful during the pandemic, since it gave him a way to serve his fellow citizens when everyone was in lockdown at home. DayToDay scaled from eight people to over 400 and was sold just before Azeem was elected to his first term.

“He’s like a Swiss Army knife. It doesn’t matter what the challenge is—he’s the person you want to keep with you.”

Prem Sharma ’18, CEO and cofounder, Tandem and DayToDay

As if that weren’t enough, Azeem is also one of the cofounders and a current board member and treasurer of Abundant Housing Massachusetts, a nonprofit seeking to address the state’s housing shortage and legacy of housing segregation. The organization, which started in 2020 as a group of volunteers meeting in an MIT classroom, now has six full-time employees and a million-dollar annual budget. In addition to pushing for laws aimed at increasing the housing supply, it also creates tools and resources to help grassroots groups take advantage of existing legislation like the MBTA Communities Act, a zoning reform bill meant to help Massachusetts add more than 280,000 homes near existing public transit.

“I tell him all the time, ‘I don’t know how you do it,’” says Prem Sharma ’18, CEO and cofounder of Tandem and DayToDay, who’s called Azeem a coworker and friend for years. Though Azeem has lots going on, Sharma insists that he “delivers results” at work and “his output is always quality … he’s one of our top people.” 

“He’s like a Swiss Army knife,” Sharma adds. “It doesn’t matter what the challenge is—he’s the person you want to keep with you.”

Policy priorities from personal experience

Azeem was born in Multan, Pakistan, and moved to Staten Island, New York, with his family in 2001, when he was four. His parents had immigrated after winning the visa lottery, hoping better financial prospects would help them pay down medical debt from his sister’s premature birth. Money was tight, so they moved in with family friends.

“There were 11 of us living in this three-bedroom. We were too many people to be legal, so we would hide out in closets whenever the landlord came over,” he recalls. “We were very nervous about being caught, which is a big reason I skipped pre-K and kindergarten.”

The family moved often from one place to another within Staten Island over the next decade. Though in some ways it was a tough place to grow up as a Pakistani immigrant kid, especially in the years after 9/11, Azeem considers himself “very lucky” in that he was naturally gifted enough at science and math to get into a science and technology high school. That paved the way for him to eventually attend MIT on a full scholarship.

His experience growing up “very poor,” as he describes it, has informed his policy priorities as an adult. When he considers what he wants to accomplish in office, he’s looking for things that can ease the burden of day-to-day life for citizens who face the kinds of challenges his family did. Those struggles aren’t all just distant memories, either—in the middle of his first term, as he was pushing to pass affordable-housing legislation, he ran into his own difficulties finding an apartment he could afford to rent. Even as someone with a decent salary who was willing to share with roommates, he often found himself competing with upwards of 50 applicants for a single unit in an apartment search process he describes as “horrific.”

“I will do whatever needs to be done. I just don’t want to waste my life.”

“The way that I think about politics is by asking: What are the most expensive things for people [that I can take on as a city councilor]?” he says. “Number one is housing. Number two is child care. And number three is transit. So how can we make those better?”

Azeem has prioritized bills that address all of the above, plus climate policy, another issue he cares deeply about. In his first term, he wrote the bill that made Cambridge the first city in New England to abolish the requirement that new construction include a certain number of parking spaces, which can make housing prohibitively expensive to build. He also played a key role in pushing through amendments to an existing law that pave the way for taller buildings to be built for affordable housing, among other initiatives.

Azeem on the streets of Cambridge
For his second term, Azeem has ideas for bills to improve public transit, make streets safer for all citizens, and increase access to affordable housing.
TOAN TRINH

“I don’t know that he always gets a ton of the credit, but he’s probably been one of the most, if not the most, prominent councilors on a lot of the housing issues that have been worked on over the last term,” says Cambridge city manager Yi-An Huang, an appointed official who works with the city council.

Azeem worked to update Cambridge’s Building Energy Use Disclosure Ordinance (BEUDO) so that it requires large nonresidential buildings, like those on MIT and Harvard’s campuses, to reach net zero emissions by 2035. He also helped pass the “specialized stretch energy code,” which requires all new construction and major renovations to rely entirely on electricity or be wired to transition to such a system in the future, and advocated for the buildout of 25 miles of protected bike lanes in the city. But while he’s pushing for more affordable housing, he’s also working to block a proposal that would ban lab development in Cambridge. Although its proponents say the ban is meant to preserve space for housing, he says a lot of developments include both lab space and housing, so it’s not one or the other. And he sees the research that goes on in the city’s labs as essential to its economic vibrancy.

He credits his success in part to being “really good at the boring technical stuff,” as he puts it. “I write my own policy and I go through all the details of the bills,” he says, noting that not every local politician is willing or able to do that. “There’s lots of stuff that people just don’t enjoy doing, and if you can find a way to enjoy it, then there’s lots of work to be done.”

Huang says Azeem’s tendency to pore over every detail makes him stand out, as do his “listening very well” and his collaborative approach. “He’s impressively in the weeds on policy,” he says. “He does his homework and understands the issues and really grapples with the nuance.”

A lifetime to go

Though young people are notorious for skipping local elections, Azeem sees his experience as a testament to the remarkable power of hyperlocal politics—and to why his peers shouldn’t sit them out. 

“[The city council] has a roughly $1.25 billion budget. Divide that by nine [council members], and it’s over $100 million per person. Each of us gets elected on 2,000-ish votes. So it’s almost $60,000 per vote. That’s your impact,” he says. “I lost by 100 votes in my first election and won by 200 in my second. If you had taken the person who came in 10th, and replaced them with me, more than 100 million dollars would have gone in a different direction than they did. That’s crazy to think about: 200 citizens decided where $100 million went.”

In his second term, Azeem hopes to influence where another $100 million–plus will go. He already has ideas for bills that he thinks will increase public transit options, help Cambridge fight climate change while adapting to its impact, make it easier for citizens to afford basic necessities like housing, and make streets safer for cyclists, pedestrians, and all citizens. 

He acknowledges that public service is not always the easiest choice to make as a young person. Despite his remarkable work ethic and ambition, Azeem is still a twentysomething who wants to enjoy his life. Going out with his friends for a night of dancing can be a bit odd when it ends with people approaching him and asking, “Are you my city council member?” He even got recognized once when he was using a dating app.

From Sharma’s perspective, the best way to understand Azeem’s seemingly boundless drive is through the lens of “immigrant psychology,” which Sharma in many ways shares. “When I was starting this new company, he wanted to join,” he recalls, “and I was like, ‘How will you do all of this? Starting a new company is demanding. You cannot do both that [and be on city council].’ He said, ‘I will do whatever needs to be done. I just don’t want to waste my life.’” 

With reelection in the bag, and with a fresh influx of funding at Tandem, Azeem is finding himself in a more stable position than he’s been in for a long time, which is affording him new space to think about the future. He’s grateful that he’s been able to both work in local politics and be part of two successful startups, but he knows that down the line he may have to choose one path or the other.

He hasn’t decided yet which will win out. But what he does know for sure is that he wants to leave a legacy he can be proud of—and he’ll be happy to let his work speak for itself.

“A lot of people feel like they need to be in the spotlight because they feel like they’re the ‘main character,’” he says. “But five to 10 years from now, when I’m looking back, I just want to see that the things I did are still around and having a positive impact.”

Raman to go

For a harried wastewater manager, a commercial farmer, a factory owner, or anyone who might want to analyze dozens of water samples, and fast, it sounds almost miraculous. Light beamed from a central laser zips along fiber-optic cables and hits one of dozens of probes waiting at the edge of a field, or at the mouth of a sewage outflow, or wherever it’s needed. In turn, these probes return nearly instant chemical analysis of the water and its contaminants—fertilizer concentration, pesticides, even microplastics. No need to walk around taking samples by hand, or wait days for results from a lab. 

This networked system of pen-size probes is the brainchild of Nili Persits, a final-year doctoral candidate in electrical engineering at MIT. Persits, who sports a collection of tattoos and a head of bouncy curls, seems to radiate energy, much like the powerful lasers she works with. She hopes that her work to develop a highly sensitive probe will help a technology known as Raman spectroscopy step beyond the rarefied realm of laboratory settings and out into the real world. These spectrometers—which use a blast of laser light to analyze an object’s chemical makeup—have proved their utility in fields ranging from medical research to art restoration, but they come with frustrating drawbacks. 

Raman setup on a media cart
KEN RICHARDSON AND REBECCA RODRIGUEZ

In a cluttered room full of dangling cables and winking devices in MIT’s Building 26, it’s easy to see the problem. A line of brushed-aluminum boxes stretching eight or so feet across a table makes up the conventional Raman spectrometer. It costs at minimum $70,000—in some cases, more than twice that amount—and the vibration-damping table it sits on adds another $15,000 to the tab. Even now, after six years of practice, it takes Persits most of a day to set it up and calibrate it before she can begin to analyze anything. “It’s so bulky, so expensive, so limited,” she says. “You can’t take it anywhere.” 

Elsewhere in the lab, two other devices hint at the future of Raman spectroscopy. The first is a system about the size of a desk. Although this version is too big and too sensitive to be moved, it can support up to 100 probes connected to it by fiber-optic cables, making it possible to analyze samples kilometers away. 

The typical Raman system is “so bulky, so expensive, so limited. You can’t take it anywhere.”

The second is a truly portable Raman device, a laser about the size and shape of a Wi-Fi router, with just one probe and a cell-phone-size photodetector (a device that converts photons into electrical signals) attached. While other portable Raman systems do exist, Persits says their resolution and sensitivity leave a lot to be desired. And this one delivers results on par with those of bigger and pricier versions, she says. Whereas the bigger device is intended for large-scale operations such as chemical manufacturing facilities or wastewater monitoring, this one is suited for smaller uses such as medical studies. 

Persits has spent the last several years perfecting these devices and their attached probes, designing them to be easy to use and more affordable than traditional Raman systems. This new technology, she says, “could be used for so many different applications that Raman wasn’t really a possibility for before.” 

A molecular photograph with a hefty price tag 

All Raman spectrometers, big or small, take advantage of a quirk in the way that light behaves. If you shine a red laser at a wall, you’ll see a red dot. Of the photons that bounce off the wall and hit your retina, nearly all of them remain red. But for a precious few photons—one in 100 million—something strange happens. The springlike molecular bonds of the materials in the wall jangle the photon, which gains or loses energy on the rebound. This changes its wavelength, thereby changing its color. The color change corresponds to whatever type of molecule the photon collided with, whether it’s the polymers in the wall’s latex paint or the pigments that create its hue. 

This phenomenon, called Raman scattering, is happening right now, all around you. But you can’t see this color-shifted photon confetti—it’s far too faint, so looking for it is like trying to see a distant star on a sunny day. 
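The color change in that one-in-100-million photon can be put in numbers. Spectroscopists quantify it as a “Raman shift” expressed in wavenumbers (reciprocal centimeters), which stays the same no matter which laser is used. The short sketch below illustrates the conversion; the 785 nm laser and 1,000 cm⁻¹ shift are made-up example values, not figures from Persits’s setup.

```python
# Illustrative sketch: converting a Raman shift (in wavenumbers, cm^-1)
# into the wavelength of the scattered photon. The laser wavelength and
# shift below are hypothetical example values.

def raman_scattered_wavelength_nm(laser_nm: float, shift_cm1: float) -> float:
    """Wavelength of a photon that lost `shift_cm1` of energy (a Stokes shift)."""
    laser_wavenumber = 1e7 / laser_nm               # nm -> cm^-1
    scattered_wavenumber = laser_wavenumber - shift_cm1
    return 1e7 / scattered_wavenumber               # cm^-1 -> nm

# A red 785 nm photon that loses 1,000 cm^-1 to a molecular bond
# re-emerges at roughly 852 nm, just into the near-infrared.
print(round(raman_scattered_wavelength_nm(785.0, 1000.0), 1))
```

The tiny size of that color change, relative to the flood of unshifted light, is exactly why the filtering described next matters.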

A traditional Raman spectrometer separates out this faint signal by guiding it through an obstacle course of mirrors, lenses, and filters. After the light of a powerful, single-color laser is beamed at a sample, the scattered light is directed through a filter to remove the returning photons that retained their original hue. The color-shifted photons then pass through a diffraction grating, which, like a prism, fans them out by color before they hit a detector that measures their wavelength and intensity. This detector, Persits says, is essentially the same as a digital camera’s light sensor. 

""
Raman probes designed by Nili Persits sit atop a cart, but the coiled fiber-optic cables allow them to be used on samples far away.
1. A mounted probe can be used to study non-liquid, uncontained samples like plants.
2. A probe encased in a protective sleeve is immersed in a liquid sample.
3. An optical receiver detects Raman photons collected by a probe and relayed by a fiber-optic cable.
4. A probe to measure small-volume liquids in a cuvette.
KEN RICHARDSON AND REBECCA RODRIGUEZ

At the end of the spectroscopy process, a researcher is left with something akin to a photograph—not of an object’s appearance, but of its molecular makeup. This allows researchers to study the chemical components of DNA, detect contaminants in food, or figure out if an antique painting is authentic or a modern counterfeit, among many other uses. What’s more, Raman spectroscopy makes it possible to analyze samples without grinding them up, dissolving them, or dousing them in chemicals.  

“The problem with spectrometers is that they have this intrinsic trade-off,” Persits says. The more light that goes into the spectrometer itself—specifically, into the color-separating diffraction grating and the detector—the harder it is to separate photons by wavelength, lowering the resolution of the resulting chemical snapshot. And because Raman light is so weak, researchers like Persits need to gather as much of it as possible, particularly when they’re searching for chemicals that occur in minute concentrations. One way to do this is to make the detector much bigger—even room-size, in the case of astrophysics applications. This, however, makes the setup “exponentially more expensive,” she says. 

Raman spectroscopy on the go

In 2013, Persits had bigger things to worry about than errant photons and unwieldy spectrometers. She was living in Tel Aviv with her husband, Lev, and their one-year-old daughter. She’d been working in R&D at a government defense agency—an easy, predictable job she describes as “engineering death”—when a thyroid cancer diagnosis ground her life to a halt. 

As Persits recovered from two surgeries and radiation therapy, she had time to take stock of her life. She resolved to complete her stalled master’s degree and, once that was done, begin a PhD program. Her husband encouraged her to apply beyond Israel, to the best institutions in the United States. In 2017, when her MIT acceptance letter arrived, it was a shock to Persits, but not to her husband. “That man has patience,” she says with a laugh, recalling Lev’s unflagging support. “He believes in me more than me.”

The family moved to Massachusetts that fall, and soon after, Persits joined the research group of Rajeev Ram, a professor of electrical engineering who specializes in photonics and electronics. “I’m looking for people who are willing to take risks and work on a new area,” Ram says. He saw particular promise in Persits’s keen interest in research outside her sphere of expertise. He put her to work learning the ins and outs of Raman spectroscopy, beginning with a project to analyze the metabolic components of blood plasma. 

“The first couple of years were pretty stressful,” Persits says. In 2016, she and her husband had welcomed their second child, another girl, making the pressures of grad school even more acute. The night before her quantum mechanics exam, she recalls, she was awake until 3 a.m. with a vomiting child. On another occasion, a sprinkler in the lab malfunctioned, ruining the Raman spectrometer she’d inherited from a past student. 

“We can have real-time assessment of what’s going on. Are our plants happy?”

Persits persevered, and things started to settle into place. She began to build on the earlier work of Ram and optical engineer Amir Atabaki, a former postdoc in the Ram lab who is now a research fellow at the Lawrence Berkeley National Laboratory in California. Atabaki had figured out a fix for that fundamental Raman trade-off—the brighter the light, the lower the resolution of the chemical snapshot—by using a tunable laser that emits a range of different colors, instead of a fixed laser limited to a single hue. Persits compares the process to photographing a rainbow. A traditional Raman spectrometer is like a camera that takes a picture of all the rainbow’s colors simultaneously; the updated system, in contrast, takes snapshots of only one color at a time.
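The article describes the swept-source idea only at this level, but the “one color at a time” principle can be sketched as a toy simulation: hold a simple detector at one fixed color, step the laser through its range, and a Raman peak announces itself whenever the laser sits exactly that peak’s shift above the detector band. The peak positions, detector wavenumber, and scan range below are all invented for illustration.

```python
# Toy sketch of a swept-laser Raman measurement (illustrative numbers only).
# Instead of diffracting all colors at once, the laser's wavenumber is
# stepped while a fixed detector watches a single narrow band; each Raman
# peak at shift s produces a signal only when laser_wn - s hits the band.

PEAKS = {1000: 1.0, 1600: 0.5}   # hypothetical Raman shifts (cm^-1) -> intensity
DETECTOR_WN = 11000              # fixed wavenumber the detector watches (cm^-1)

def detector_signal(laser_wn: int) -> float:
    # Sum the intensity of every peak whose scattered light lands on the detector.
    return sum(a for s, a in PEAKS.items() if laser_wn - s == DETECTOR_WN)

# Sweep the laser; each reading is recorded against the implied Raman shift.
spectrum = {laser_wn - DETECTOR_WN: detector_signal(laser_wn)
            for laser_wn in range(11500, 13201)}

recovered = sorted(s for s, a in spectrum.items() if a > 0)
print(recovered)  # the two hypothetical peaks reappear: [1000, 1600]
```

The payoff mirrored in this sketch is that the detector itself can stay cheap and simple, since all the color selectivity has moved into the tunable laser.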

This tunable laser eliminates the need for the bulkiest, costliest parts of a Raman spectrometer—those that diffract light and collect it in a photon-gathering sensor. This makes it possible to use miniaturized and “very simple” silicon photodetectors, Persits says, which “cost nothing” compared with the standard detectors.  

close-up of the device
One of Persits’s probes shines a red laser dot on a small-volume sample in a 0.5-milliliter cuvette.
KEN RICHARDSON AND REBECCA RODRIGUEZ

Persits’s key innovation was an exceptionally sensitive probe that’s the size of a large marker and is connected to the laser via a fiber-optic cable. These cables can be as short or as long (even kilometers) as needed. Armed with a tunable laser, simple photodetectors, and her robust, internet-enabled probes, Persits was able to develop both her handheld Raman device and the larger, nonportable version. This second system is more expensive, with a vibration-damping table needed for its sensitive laser, but it can support dozens of different probes, in essence offering multiple Raman systems for the price of one. It also has a much broader spectral range, allowing it to distinguish a greater variety of chemicals. 

These probes open up a remarkable host of possibilities. Take biologics, a class of drugs generated by genetically engineered cells, which account for more than half of all modern cancer treatments. For drug manufacturers, it’s important to make sure these cells are happy, healthy, and producing the desired compounds. But the mere act of checking in on them—cracking open the bioreactors in which they grow to remove a sample—stresses them out and introduces the risk of contamination. Persits’s probes can be left in vessels to monitor how much the cells are eating and what chemicals they’re secreting, all without any disturbance. 

Persits is particularly excited about the technology’s potential to simplify water monitoring. First, though, she and her team had to make sure that water testing was even feasible. “A lot of techniques don’t work in water,” she says. Last summer, an experiment with hydroponic bok choy proved the technology’s mettle. The team could watch, day by day, as the plants sucked up circulating nitrate fertilizer until none remained in the water. “We can actually have real-time assessment of what’s going on,” Persits says. “Are our plants happy? Are they getting enough nutrients?” 

In the future, this may allow for precision dosing of fertilizers on large commercial farms, saving farmers money and reducing the hazardous runoff of nitrates into local waterways. The technology can also be adapted for a range of other watery uses, such as monitoring chemical leakage from factories and refineries or searching for microplastics and other pollutants in drinking water. 

With graduation at the end of May, Persits has set her sights on the next phase of her career. Last year, funding and support from the Activate fellowship helped her launch her own company, Dottir Labs. Dottir—which stands for “digital optical technology” and also alludes to her two daughters, now 12 and eight—aims to bring her Raman systems to market. “Dottir is really focusing on the larger-scale applications where there are few alternatives to this type of chemical sensing,” Persits says. 

Like the subject of one of her tattoos, which shows a lotus growing from desert ground, Persits’s research career has been defined by surprising transformation—photons that change color after a glancing blow, bulky machines that she shrank down and supplemented with a web of probes. These transformations could nudge the world in a new direction as well, leading to cleaner water, safer drugs, and a healthier environment for all of us downstream.

Taking on climate change, Rad Lab style

23 April 2024 at 17:00

When I last wrote, the Institute had just announced MIT’s Climate Project. Now that it’s underway, I’d like to tell you a bit more about how we came to launch this ambitious new enterprise. 

In the fall of 2022, as soon as I accepted the president’s job at MIT, several of my oldest friends spontaneously called to say, in effect, “Can you please fix the climate?”

And once I arrived, I heard the same sentiment, framed in local terms: “Can you please help us organize ourselves to help fix the climate?” 

Everyone understood that MIT brought tremendous strength to that challenge: More than 20% of our faculty already do leading-edge climate work. And everyone understood that in a place defined by its decentralization, focusing our efforts in this way would require a fresh approach. This was my kind of challenge—creating the structures and incentives to help talented people do much more together than they could do alone, so we could direct that collective power to help deliver climate solutions to the world, in time.

My first step was to turn to Vice Provost Richard Lester, PhD ’80, a renowned nuclear engineer with a spectacular record of organizing big, important efforts at MIT—including the Climate Grand Challenges. Working with more than 100 faculty, over the past year Richard led us to define the hardest climate problems where MIT could make the most substantial difference—our six Climate Missions:

  • Decarbonizing Energy and Industry
  • Restoring the Atmosphere, Protecting the Land and Oceans
  • Empowering Frontline Communities
  • Building and Adapting Healthy, Resilient Cities
  • Inventing New Policy Approaches
  • Wild Cards

Each mission will be a problem-solving community, focused on the research, translation, outreach, and innovation it will take to get emerging ideas out of the lab and deployed at scale. We are unabashedly focused on outcomes, and the faculty leaders we are recruiting for each mission will help develop their respective roadmaps.

In facing this vast challenge, we’re consciously building the Climate Project in the spirit of MIT’s Rad Lab, an incredible feat of cooperative research that achieved scientific miracles, at record speed, with an extraordinary sense of purpose. With the leadership and ingenuity of the people of MIT, and our partners around the globe, we aim for the Climate Project at MIT to do the same. 

Sally Kornbluth
March 20, 2024

I went to COP28. Now the real work begins.

23 April 2024 at 17:00

As an international student at MIT, I find that the privileges I’ve experienced in the States have made me even more conscious of my nation’s struggles. Brief visits home remind me that in Jamaica, I can’t always count on what I often take for granted in Massachusetts: water flowing through the faucet, timely public transportation, a safe neighborhood to live in. And after working hard in school for years so my family and I won’t have to struggle so much to meet our basic needs, I’ve recently been challenging myself to think about the needs of nations too. Being from a developing nation, I am very aware of the urgent need for sustainable development, which the UN defines as “development that meets the needs of the present, without compromising the ability of future generations to meet their own needs.” 

Jamaica is among the countries least responsible for the acceleration of global warming, yet it is already facing some of its worst effects. Many Jamaicans can’t afford air-conditioning to cope with the extreme heat, and in my city, many of the trees that once provided shade are being cut down to build apartments, leaving people sweltering in a concrete jungle. Even if ambitious net-zero emissions targets are met, these severe consequences may continue to worsen for some years. 

Runako Gentles leaning against a fence overlooking the ocean
At home in Jamaica, Gentles has seen the impact of climate change firsthand.
COURTESY OF RUNAKO GENTLES

Beyond significantly lowering the standard of living for the poor and lower-middle classes, climate change is also threatening agriculture and tourism, two major sources of Jamaica’s GDP. Given that the country is already struggling with crime and widespread poverty, what’s going to happen as climate change continues causing droughts to worsen, beaches to shrink, and energy bills to rise?  

My MIT degree could definitely help me migrate to another country with a higher standard of living. But if young people like me leave these critical problems for someone else to solve, then what will the future look like for my family, friends, and neighbors? 

I grew up wanting to be a physician, but at MIT I became significantly more interested in the health of communities, the planet, and the economy. I decided to major in environmental engineering as a step toward addressing the social, economic, and environmental dimensions of issues like climate change, pollution, and water management. Then I took advantage of opportunities to attend conferences where I could gather with experts, industry leaders, and other young people eager to tackle these issues. Last fall I was elated to be selected as one of MIT’s six student delegates to COP28, the 28th Conference of the Parties to the UN Framework Convention on Climate Change. Some 84,000 attendees would converge in the United Arab Emirates over the course of two weeks in November and December for the world’s largest global climate conference. I would be among those attending the second half. 

We can’t wait for someone else to address the crises affecting not only our generation but also those to come.

After a 12-hour nonstop flight, I landed in the UAE around 7:30 p.m. local time and woke up early the next morning ready to get down to business. I was tired, but it was go time. Having attended the Global Youth Climate training program and MIT’s pre-COP28 sessions, I had spent a lot of time thinking about how to make the most of the conference. There were hundreds of plenary meetings, pavilions, side events, and booths to choose from. I combed through the COP schedule each day, noting events with themes relevant to developing nations and those in which I would likely find the leaders I wanted to connect with. 

I spent the week zipping from building to building in the enormous Dubai Exhibition Centre, listening to panels, presentations, and press conferences, as well as questioning speakers, observing negotiations, taking copious notes on my iPad, and networking. A highlight was getting to interview some of the senior Jamaican delegates. I shared with them my long-term plan to help the Caribbean adapt to climate change and develop sustainably. UnaMay Gordon, one of Jamaica’s leading climate-change specialists, gave me a memorable piece of advice: Be present, represent youth, and bring other young people along to engage with these issues. I was glad to receive the Jamaican delegates’ insights—and their contact information. I took full advantage of the opportunity to approach experts and introduce myself as an MIT undergraduate. It was my first COP, and I was a man on a mission. 

I left the UAE even more determined to support sustainable development, eager to bring about positive change in the MIT community during my final semester on campus—and feeling I had a lot of work to do before graduation. Progress toward becoming a more sustainable society cannot just rely on the relatively slow process of persuading governments to pass laws that enact COP agreements. Individual COP attendees play a pivotal role in supporting the sustainability transition by helping their communities take action. 

For my last semester, I decided I could have the most impact by helping implement a campus sustainability initiative, sharing my knowledge and experiences, and encouraging more undergraduates to get involved in sustainability efforts. I started by attending the Sustainability Connect 2024 meeting run by the MIT Office of Sustainability (MITOS). That led me to join the MIT Food Waste Fighters, where I worked to improve the separation of garbage in our campus dorms so that food waste can be turned into biofuels rather than left to emit methane in landfills. This gave me experience implementing on-the-ground strategy to take on a problem that is also very relevant to developing nations. 

Runako Gentles speaking at TEDxMIT
Gentles speaks at TEDx MIT in April.
JOHN WERNER

Meanwhile, I dove into organizing a student-led series of sustainability talks hosted by my department’s civil engineering society, Chi Epsilon, in collaboration with MITOS and the MIT Climate and Sustainability Consortium (MCSC). As an MCSC scholar, I worked on writing an opinion piece and a research article on my work analyzing earthquakes induced by carbon dioxide sequestration. I was also chosen to give a talk at TEDx MIT in April on how MIT can equip undergrads so they’re ready to seize opportunities to support the sustainability transition.

It was a lot to tackle on top of my classes, but I really wanted to do all I could in my last few months to galvanize the MIT community. And at the same time, I wanted to remind everyone of the importance of having empathy for those who are most vulnerable to—and least responsible for—the consequences of unsustainable behavior and of innovation that doesn’t factor in sustainability. 

I hope my work empowers more MIT undergraduates to step up and help tackle the many obstacles to achieving sustainable development while setting the stage for a more just society. We can’t wait for someone else to address the crises affecting not only our generation but also those to come. We need more minds and hands to work on ensuring that the places we live remain livable.

Runako Gentles ’24 plans to return to Jamaica upon graduation and will begin a master’s program in environmental engineering at Stanford in the fall.

The silver-platter season

In the spring of 1974, I was new to both MIT and rugby football. As a Course 2 graduate student, I shared a basement office with several other students, including two players on the Tech rugby club who encouraged me to join them. Being both an Anglophile and a beer drinker, I was pretty easily talked into participating in this sport, with its British roots and after-match parties.

I played mainly on the squad’s B side that season but was among those asked to join the A side players in the annual tournament of the New England Rugby Football Union (NERFU), held at UMass Amherst. We needed extra men for the exhausting tournament schedule, in which players from both the A and B sides would be combined in various ways for different matches. Today NERFU has many more teams and several divisions of competition. But in 1974 it had just one division and held a single annual tournament.  

Institute records show rugby being played as early as 1882, making the Tech club the oldest in NERFU and one of the oldest in the nation. In 1974, it fielded two 15-man sides that practiced twice a week and played every Saturday during the spring and fall seasons. (There was no women’s side then.) Our school-supplied uniforms were classics of a bygone era—striped long-sleeve jerseys with collars and rubber buttons.

Rugby matches are grueling affairs involving continuous running and tackling and (for forwards like me, who make up half the team) pushing in organized scrums and ad hoc rucks. (In both scrums and rucks, players grab teammates’ shirts, binding together to push against the opposing team while attempting to gain possession of a ball on the ground with their feet.) In 1974, substitution was allowed only in cases of injury. Usually, one match per week was all a player would play. Making it to the tournament’s championship match would require playing four or five in two days, so some players would need to sit out some of the matches. 

group photo of the 1974 rugby champions
The storied MIT rugby club of 1974. The author is in the back row, third from the right.
MIT RUGBY FOOTBALL CLUB

Unlike now, in the 1970s there were few (if any) US high school or under-19 rugby teams, so American college teams were generally inexperienced. However, the 1974 MIT club had several international players who had been playing since grade school in England, Scotland, New Zealand, France, Argentina, or Japan. It also included grad students and an assistant professor (Ron Prinn, ScD ’71), which raised the average age of the team. MIT was thus not a typical college team, although we might have been mistaken for one. Undoubtedly some club teams in the 1974 tournament rested their best players when scheduled to play us. 

Our coach was Serge Gallant, a savvy, bearded Frenchman and former scrum half forced by concussions to retire from playing. Shin Yoshida ’76, our fly half, was our star player. Shin would kick high-arching punts downfield, accurately positioned to allow our team to immediately tackle opponents receiving them, or occasionally to recover the ball ourselves. Much like a fast-break offense from a basketball team with smaller players, this helped neutralize the height and power of bigger teams.    

The 1974 NERFU tournament, held on May 11 and 12, pitted 24 teams against each other in five rounds of single-elimination matches. The MIT club had some role in the seeding, so we managed to get a first-round bye and the prospect of an easy opponent in the second round. However, the remaining matches promised to be very difficult.

Our first match on Saturday was in the second round against Springfield, whom we beat handily, 13–0. Our last match of the day was against Charles River, a club that had beaten us the week before. We eked out a 16–12 victory in double overtime. 

Since we’d advanced to the semifinal round to be held on Sunday, arrangements were made for our team to pile into a few rooms of an Amherst motel for the night. But first most of us went out to a local restaurant. Despite our camaraderie and shared joy over having won our first two matches, our celebration was subdued, with none of the usual libations and rugby songs. We were pleasantly surprised when a former MIT rugby player turned businessman picked up our meal tab. 

At the restaurant we exchanged friendly banter with a well-known forward on the Providence city club, our next opponent. During the meal he playfully growled at us while chomping on a handful of spring onions. However, he did not play against us in the semifinals on Sunday. He was rested for the finals match he never got to play.

During the Providence match, their sideline people kept yelling “Get the foot,” meaning to target Yoshida and take him out of the game. But our “enforcers” took care of theirs, and he was not hurt. We went on to win, 6–3. 

I had played in the third- and fourth-round matches and was exhausted. So when our coach asked me to play in the finals, I begged off. My spot was taken by Mark Sneeringer ’76, PhD ’82, an amiable sophomore from Gettysburg, Pennsylvania. Because I wasn’t playing, I was picked to serve as a line judge.

For the championship match Tech faced off against the Beacon Hill club, which had won the year before. This was another tight and grueling game that went into double overtime. In the first overtime, our forwards were gasping for breath. Roger Simmonds, PhD ’78 (an Englishman and our most experienced player), lifted spirits and energy levels with an impromptu pep talk noting how well the forwards were playing and how worn out the Beacon Hill squad was.    

In the second overtime, team captain Paul Dwyer, SM ’73, finally scored the game-winning try. Because I was a line judge, my jumping for joy with a cloth in my hand caused temporary confusion. That was soon resolved when I explained that my action was not an officiating signal. We’d bested Beacon Hill, 7–3. 

Our reward for winning the championship was a silver platter. In those days, beer was always on hand after rugby matches, so while still on the pitch, we awkwardly drank beer from the platter as if it were a trophy cup. 

Having pulled off a major upset in the NERFU tournament, MIT was no longer a dark horse in the 1974 fall season, and other teams made sure to give us their best efforts. The loss of Yoshida, Dwyer, and other key players from the spring season weakened our fall A side, to which I was promoted. We began the fall season with two wins and two losses and then lost the rest of our matches, including one in which the Boston club thoroughly overpowered and crushed us. 

Nevertheless, Tech reigned as the NERFU champion until the next tournament. NERFU would eventually add a college division to its annual competition, so to this day, MIT’s rugby club remains the only college side ever to capture the top-tier NERFU title.

After retiring from a long career in mechanical and nuclear engineering, Dan Guzy, MechE ’75, has written four books and many articles on local history.

What’s one memento you kept from your time at MIT?

23 April 2024 at 17:00

Alumni leave MIT armed with knowledge and a whole lot of memories. During Tech Reunions in 2023, the MIT Alumni Association asked returning alums what else they had held onto since leaving campus. Here are just a few of their responses. 

Diane Marie McKnight ’75, SM ’78, PhD ’79, kept a bronze oarlock used for securing an oar on a boat. “I sand-casted it myself as part of my last class in mechanical engineering, and I learned how to use a lathe,” she said.

Amy (Schonsheck) Simpkins ’03 got her Institute keepsake early—a “cheap hoodie sweatshirt that was on special at the Coop the first week of my freshman year.” She still wears it almost every day.

Alan Paul Lehotsky ’73 said that in addition to his brass rat, he still has the Groucho glasses he wore to graduation. He admitted that the mustache has not held up very well.

Elliot Owen ’18, SM ’20, still has the precision-machined aluminum flexures that he used for his graduate research. “It is easy to create structures with a low stiffness in the direction of travel and high stiffness in all other directions,” he said. “I keep them on my bookshelf and show them off when I have people over. Most people are very surprised to see a solid piece of metal flex and move so easily and without friction.”

Walt Gibbons ’73, SM ’75, had the most popular response, provided by 22 of the 69 alums interviewed. He named his MIT brass rat.

“I kept a propeller from one of the first planes I ever built,” said Morgan Ferguson ’23. “It was a spare propeller from a plane that I worked on as part of a team of undergraduate and graduate students at MIT that develops aircraft for the annual AIAA [American Institute of Aeronautics and Astronautics] Design/Build/Fly competition. I continue to work on these planes.” His latest aircraft is shown above.

Jeanne Yu ’13 said, “The one thing I kept from MIT was my sense of resilience.”

Check out the recent MIT alumni video about physical objects grads have kept—and why they kept them—at bit.ly/MITMemento.

An invisibility cloak for would-be cancers

23 April 2024 at 17:00

One of the immune system’s roles is to detect and kill cells that have acquired cancerous mutations. However, some early-stage cancer cells manage to survive. A new study on colon cancer from MIT and the Dana-Farber Cancer Institute has identified one reason why: they turn on a gene called SOX17, which renders them essentially invisible to immune surveillance.

The researchers focused on precancerous growths called polyps that often form as mutations accumulate in the intestinal stem cells, whose job is to continually regenerate the lining of the intestines. Using a technique they had developed for growing mini colon tumors in a lab dish and then implanting them in mice, they engineered tumors to express mutations that are often found in human colon cancers.

In the mice, the researchers observed a dramatic increase in the tumors’ expression of SOX17. This gene encodes a transcription factor that is normally active only during embryonic development, when it helps control development of the intestines and the formation of blood vessels.

The experiments revealed that when SOX17 is turned on in cancer cells, it helps them create an immunosuppressive environment. Among its effects, SOX17 prevents cells from synthesizing the receptor that normally detects interferon gamma, one of the immune system’s primary weapons against cancer cells. Without those receptors, cancerous and precancerous cells can simply ignore messages from the immune system, which would normally direct them to die off.

The absence of this signaling also lets cancer cells minimize their production of molecules called MHC proteins, which display cancerous antigens to the immune system, and prevents them from producing molecules called chemokines, which normally recruit T cells that would help destroy the cancerous cells.

When the researchers generated colon tumor organoids with SOX17 knocked out, and implanted those into mice, their immune system was able to attack them much more effectively. This suggests that blocking the gene or the pathway that it activates could offer a new way to treat early-stage cancers before they grow into larger tumors.

“Just by turning off SOX17 in fairly complex tumors, we were able to essentially obliterate the ability of these tumor cells to persist,” says MIT research scientist Norihiro Goto, the lead author of a paper on the work.

But transcription factors such as the one encoded by the SOX17 gene are considered difficult to target using drugs, in part because of their structure. The researchers now plan to identify other proteins that this transcription factor interacts with, in hopes that it might be easier to block some of those interactions. They also plan to investigate what triggers SOX17 to turn on in precancerous cells.

“Activation of the SOX17 program in the earliest innings of colorectal cancer formation is a critical step that shields precancerous cells from the immune system,” says Ömer Yilmaz, an MIT associate professor of biology, a member of the Koch Institute for Integrative Cancer Research, and one of the study’s senior authors. “If we can inhibit the SOX17 program, we might be better able to prevent colon cancer, particularly in patients that are prone to developing colon polyps.”

The energy transition’s effects on jobs

23 April 2024 at 17:00

A county-by-county analysis by MIT researchers shows the places in the US that stand to see the biggest economic changes from the switch to cleaner energy because their job markets are most closely linked to fossil fuels. 

While many of those places have intensive drilling and mining operations, the researchers find, areas that rely on industries such as heavy manufacturing could also be among the most significantly affected—a reality that policies intended to support American workers during the energy transition may not be taking into account, given that some of these communities don’t qualify for federal assistance under the Inflation Reduction Act.

This map shows which US counties have the highest concentration of jobs that could be affected by a transition to renewable energy. Counties in blue are less likely to be affected, and counties in red are more likely.
COURTESY OF THE RESEARCHERS

“The impact on jobs of the energy transition is not just going to be where oil and natural gas are drilled,” says Christopher Knittel, an economist at the MIT Sloan School of Management and coauthor of the paper. “It’s going to be all the way up and down the value chain of things we make in the US. That’s a more extensive, but still focused, problem.” 

Using several data sources measuring energy consumption by businesses, as well as detailed employment data from the US Census Bureau, Knittel and Kailin Graham, a master’s student in the Technology and Policy Program, calculated the “employment carbon footprint” of every county in the US.

“Our results are unique in that we cover close to the entire US economy and consider the impacts on places that produce fossil fuels but also on places that consume a lot of coal, oil, or natural gas for energy,” says Graham. “This approach gives us a much more complete picture of where communities might be affected and how support should be targeted.”

He adds, “It’s important that policymakers understand these economy-­wide employment impacts. Our aim in providing these data is to help policymakers incorporate these considerations into future policies.”
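The general shape of such a county-level measure can be illustrated with a toy calculation. This is a minimal sketch with hypothetical industries, job counts, and emissions intensities (none of these figures come from the study, and the researchers’ actual methodology is more involved): it weights each industry’s share of a county’s jobs by an assumed CO2 intensity per job, yielding an average tons-of-CO2-per-job figure for the county.

```python
def employment_carbon_footprint(jobs_by_industry, co2_per_job):
    """Average energy-related CO2 (tons) per job in a county.

    jobs_by_industry: {industry: number of local jobs}
    co2_per_job: {industry: assumed tons of CO2 per job per year}
    """
    total_jobs = sum(jobs_by_industry.values())
    if total_jobs == 0:
        return 0.0
    # Weight each industry's intensity by its share of local employment.
    return sum(jobs * co2_per_job[ind]
               for ind, jobs in jobs_by_industry.items()) / total_jobs

# Hypothetical intensities: extraction and heavy manufacturing are
# far more fossil-fuel-linked per job than services.
intensities = {"oil_and_gas": 120.0, "heavy_manufacturing": 45.0, "services": 3.0}

drilling_county = {"oil_and_gas": 4000, "services": 6000}
factory_county = {"heavy_manufacturing": 7000, "services": 3000}

print(employment_carbon_footprint(drilling_county, intensities))  # 49.8
print(employment_carbon_footprint(factory_county, intensities))   # 32.4
```

Note that under this kind of measure the manufacturing-heavy county scores well below the drilling county but still far above a services economy, which is the paper’s point: transition exposure extends up and down the value chain, not just to places where fuel is extracted.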

A linguistic warning sign for dementia

23 April 2024 at 17:00

Older people with mild cognitive impairment, especially when characterized by episodic memory loss, are at increased risk for dementia due to Alzheimer’s disease. Now a study by researchers from MIT, Cornell, and Massachusetts General Hospital has identified a key deficit unrelated to memory that may help reveal the condition early—when any available treatments are likely to be most effective.

The issue has to do with a subtle aspect of language processing: people with amnestic mild cognitive impairment (aMCI) struggle with certain ambiguous sentences in which pronouns could refer to people not referenced in the sentences themselves. For instance, in “The electrician fixed the light switch when he visited the tenant,” it is not clear without context whether “he” refers to the electrician or some other visitor. But in “He visited the tenant when the electrician repaired the light switch,” “he” and “the electrician” cannot be the same person. And in “The babysitter emptied the bottle and prepared the formula,” there is no reference to a person beyond the sentence.

The researchers found that people with aMCI performed significantly worse than others at producing sentences of the first type. “It’s not that aMCI individuals have lost the ability to process syntax or put complex sentences together, or lost words; it’s that they’re showing a deficit when the mind has to figure out whether to stay in the sentence or go outside it to figure out who we’re talking about,” explains coauthor Barbara Lust, a professor emerita at Cornell and a research affiliate at MIT. 

“While our aMCI participants have memory deficits, this does not explain their language deficits,” adds MIT linguistics scholar Suzanne Flynn, another coauthor. The findings could steer neuroscience studies on dementia toward brain regions that process language. “The more precise we can become about the neuronal locus of deterioration,” she says, “that’s going to make a big difference in terms of developing treatment.”

This solar giant is moving manufacturing back to the US

By: Zeyi Yang
23 April 2024 at 10:39

Look closely at almost any solar panel, and most of its parts probably come from China. The US invented the technology and once dominated its production, but over the past two decades, government subsidies and low costs in China have led most of the solar manufacturing supply chain to be concentrated there. The country will soon be responsible for over 80% of solar manufacturing capacity around the world.

But the US government is trying to change that. Through high tariffs on imports and hefty domestic tax credits, it is trying to make the cost of manufacturing solar panels in the US competitive enough for companies to want to come back and set up factories. The International Energy Agency has forecast that by 2027, solar-generated energy will be the largest source of power capacity in the world, exceeding both natural gas and coal—making it a market that already attracts over $300 billion in investment every year.

To understand the chances that the US will succeed, MIT Technology Review spoke to Shawn Qu. As the founder and chairman of Canadian Solar, one of the largest and longest-standing solar manufacturing companies in the world, Qu has observed cycle after cycle of changing demand for solar panels over the last 28 years. 

CANADIAN SOLAR

After decades of mostly manufacturing in Asia, Canadian Solar is pivoting back to the US because it sees a real chance for a solar industry revival, mostly thanks to the Inflation Reduction Act (IRA) passed in 2022. The incentives provided in the bill are just enough to offset the higher manufacturing costs in the US, Qu says. He believes that US solar manufacturing capacity could grow significantly in two to three years, if the industrial policy turns out to be stable enough to keep bringing companies in. 

How tariffs forced manufacturing capacity to move out of China

There are a few important steps to making a solar panel. First silicon is purified; then the resulting polysilicon is shaped and sliced into wafers. Wafers are treated with techniques like etching and coating to become solar cells, and eventually those cells are connected and assembled into solar modules.

For the past decade, China has dominated almost all of these steps, for a few reasons: low labor costs, ample supply of proficient workers, and easy access to the necessary raw materials. All these factors make made-in-China solar modules extremely price-competitive. By the end of 2024, a US-made solar panel will still cost almost three times as much as one produced in China, according to researchers at BloombergNEF. 

The question for the US, then, is how to compete. One tool the government has used since 2012 is tariffs. If a solar module containing cells made in China is imported to the US, it’s subject to as much as a 250% tariff. To avoid those tariffs, many companies, including Canadian Solar, have moved solar cell manufacturing and the downstream supply chain to Southeast Asia. Labor costs and the availability of labor forces are “the number one reason” for that move, Qu says.

When Canadian Solar was founded in 2001, it made all its solar products in China. By early 2023, the company had factories in four countries: China, Thailand, Vietnam, and Canada. (Qu says it used to manufacture in Brazil and Taiwan too, but later scaled back production in response to contracting local demand.)

But that equilibrium is changing again as further tariffs imposed by the US government aim to force supply chains to move out of China. Starting in June 2024, companies importing silicon wafers from China to make cells outside the country will also be subject to tariffs. The most likely solution for solar companies would be to “set up wafer capacity or set up partnerships with wafer makers in Southeast Asia,” says Jenny Chase, the lead solar analyst at BloombergNEF.

Qu says he’s confident the company will meet the new requirements for tariff exemption after June. “They gave the industry about two years to adapt, so I believe most of the companies, at least the tier-one companies, will be able to adapt,” he says.

The IRA, and moving the factories to the US

While US policies have succeeded in turning Southeast Asia into a solar manufacturing hot spot, not much of the supply chain has actually come back to the US. But that’s slowly changing thanks to the IRA, introduced in 2022. The law will hand out tax credits for companies producing solar modules in the US, as well as those installing the panels. 

The credits, Qu says, are enough to make Canadian Solar move some production from Southeast Asia to the US. “According to our modeling, the incentives provided just offset the cost differences—labor and supply chain—between Southeast Asia and the US,” he says.

Jesse Jenkins, an assistant professor in energy and engineering at Princeton University, has come to the same conclusion through his research. He says that the IRA subsidies and tax credits should offset higher costs of manufacturing in the US. “That should drive a significant increase in demand for made-in-America solar modules and subcomponents,” Jenkins says. And the early signs point that way too: since the introduction of the IRA, solar companies have announced plans to build over 40 factories in the US.

In 2023, Canadian Solar announced it would build its first solar module plant in Mesquite, Texas, and a solar cell plant in Jeffersonville, Indiana. The Texas factory started operating in late 2023, while the Indiana one is still in the works. 

The remaining challenges

While the IRA has brought new hope to American solar manufacturing, there are still a few obstacles ahead.

Qu says one big challenge to getting his Texas factory up and running is the lack of experienced workers. “Let’s face the reality: there was almost no silicon-based solar manufacturing in the US, so it takes time to train people,” he says. That’s a process that he expects to take at least six months. 

Another challenge to reshoring solar manufacturing is the uncertainty about whether the US will keep heavily subsidizing the clean energy industry, especially if the White House changes hands after the election this year. “The key is stability,” Qu says. “Sometimes politicians are swayed by special-interest groups.”

“Obviously, if you build a factory, then you do want to know that the incentives to support that factory will be there for a while,” says Chase. There are some indications that support for the IRA won’t necessarily be swayed by the elections. For example, jobs created in the solar industry would be concentrated in red states, so even a Republican administration would be motivated to maintain them. But there’s no guarantee that US policies won’t change course.

The Download: the future of geoengineering, and how to make stronger, lighter materials

23 April 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why new proposals to restrict geoengineering are misguided

—Daniele Visioni is a climate scientist and assistant professor at Cornell University

The public debate over whether we should consider intentionally altering the climate system is heating up, as the dangers of climate instability rise and more groups look to study technologies that could cool the planet.

Such interventions, commonly known as solar geoengineering, may include releasing sulfur dioxide in the stratosphere to cast away more sunlight, or spraying salt particles along coastlines to create denser, more reflective marine clouds.  

The growing interest in studying the potential of these tools has triggered corresponding calls to shut down the research field, or at least to restrict it more tightly. But such rules would hinder scientific exploration of technologies that could save lives and ease suffering as global warming accelerates—and they might also be far harder to define and implement than their proponents appreciate. Read the full story.

This architect is cutting up materials to make them stronger and lighter

As a child, Emily Baker loved to make paper versions of things. It was a habit that stuck. Years later, studying architecture in graduate school, she was playing around with some paper and scissors when she made a striking discovery.

By making a series of cuts and folds in a sheet of paper, Baker found she could produce two planes connected by a complex set of thin strips. Without the need for an adhesive, this pattern created a surface that was thick but lightweight. Baker named her creation Spin-Valence. 

Structural tests later showed that an individual tile made this way, and rendered in steel, can bear more than a thousand times its own weight. Baker envisions using the technique to make shelters or bridges that are easier to transport and assemble following a natural disaster—or to create lightweight structures that could be packed with supplies for missions to outer space. Read the full story.

—Sofi Thanhauser

This story is for subscribers only, and is from the next magazine issue of MIT Technology Review, set to go live tomorrow, on the theme of Build. If you don’t already, subscribe now to get a copy when it lands.

Three things we learned about AI from Emtech Digital London

Last week, MIT Technology Review held its inaugural Emtech Digital conference in London. It was a great success, full of brain-tickling insights about where AI is going next. 

Here are the three main things Melissa Heikkilä, our senior AI reporter, took away from the conference.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US child protection agencies are inundated with AI-created abuse images
And their systems are struggling to spot real children who could be helped. (WP $)
+ A new report is urging tech platforms to improve how such material is reported. (The Verge)
+ Legislation that could overhaul problems in the reporting pipelines is in motion. (WSJ $)

2 A startup edited human DNA using generative AI 
It aims to make the new wave of CRISPR faster and more powerful. (NYT $)
+ Forget designer babies. Here’s how CRISPR is really changing lives. (MIT Technology Review)

3 Amazon is shutting down one of its drone delivery programs in California
Just two years after it launched. (The Verge)

4 There’s no room in China’s tech sector for over-35s
Ageism is rife as companies overlook workers they worry may have home commitments. (FT $)
+ One of China’s most successful cultural exports? Bubble tea. (Bloomberg $)

5 Measuring ocean waves and currents is hard
Luckily, a new kind of sensor-rich buoy that communicates with satellites is one solution. (IEEE Spectrum)

6 Recycling plastic has been a colossal failure
Can ‘advanced recycling’ finally crack it? (New Scientist $)
+ Think that your plastic is being recycled? Think again. (MIT Technology Review)

7 How to make your home as energy-efficient as possible
Appliances are much better than they used to be, but you may still have to make sacrifices. (Vox)

8 Captchas are getting tougher to solve
Machines are getting better at cracking them, so the bar is raised for humans. (WSJ $)
+ Death to captchas. (MIT Technology Review)

9 Good luck getting a restaurant reservation these days
Pesky bots and convoluted online booking systems are wrecking our dinners. (New Yorker $)

10 Muting annoying accounts makes social media so much better
Seriously, try it and thank me later. (The Guardian)
+ How to log off. (MIT Technology Review)

Quote of the day

“I, for one, welcome our new Taylor Swift overlords.” 

—A member of a Reddit community for typewriter enthusiasts jokes about how the group might swell rapidly after Taylor Swift referenced the machines in her new album, 404 Media reports.

The big story

This town’s mining battle reveals the contentious path to a cleaner future

January 2024

In June last year, Talon, an exploratory mining company, submitted a proposal to Minnesota state regulators to begin digging up as much as 725,000 metric tons of raw ore per year, mainly to unlock the rich and lucrative reserves of high-grade nickel in the bedrock.

Talon is striving to distance itself from the mining industry’s dirty past, portraying its plan as a clean, friendly model of modern mineral extraction. It proclaims the site will help to power a greener future for the US by producing the nickel needed to manufacture batteries for electric cars and trucks, but with low emissions and light environmental impacts.

But as the company has quickly discovered, a lot of locals aren’t eager for major mining operations near their towns. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ What a wonderful piece of music!
+ Weighted blanket devotees swear by them—but what does the science say?
+ Donald Nelson is on a mission to restore sharks’ reputations following decades of persecution.
+ Meanwhile, a British boy has won a European championship with his uncanny impression of a seagull.

Why new proposals to restrict geoengineering are misguided

23 April 2024 at 06:00

The public debate over whether we should consider intentionally altering the climate system is heating up, as the dangers of climate instability rise and more groups look to study technologies that could cool the planet.

Such interventions, commonly known as solar geoengineering, may include releasing sulfur dioxide in the stratosphere to cast away more sunlight, or spraying salt particles along coastlines to create denser, more reflective marine clouds.  

The growing interest in studying the potential of these tools, particularly through small-scale outdoor experiments, has triggered corresponding calls to shut down the research field, or at least to restrict it more tightly. But such rules would halt or hinder scientific exploration of technologies that could save lives and ease suffering as global warming accelerates—and they might also be far harder to define and implement than their proponents appreciate.

Earlier this month, Tennessee’s governor signed into law a bill banning the “intentional injection, release, or dispersion” of chemicals into the atmosphere for the “express purpose of affecting temperature, weather, or the intensity of the sunlight.” The legislation seems to have been primarily motivated by debunked conspiracy theories about chemtrails. 

Meanwhile, at the March meeting of the United Nations Environmental Agency, a bloc of African nations called for a resolution that would establish a moratorium, if not a ban, on all geoengineering activities, including outdoor tests. Mexican officials have also proposed restrictions on experiments within their boundaries.

To be clear, I’m not a disinterested observer but a climate researcher focused on solar geoengineering and coordinating international modeling studies on the issue. As I stated in a letter I coauthored last year, I believe that it’s important to conduct more research on these technologies because it might significantly reduce certain climatic risks. 

This doesn’t mean I support unilateral efforts today, or forging ahead in this space without broader societal engagement and consent. But some of these proposed restrictions on solar geoengineering leave vague what would constitute an acceptable, “small” test as opposed to an unacceptable “intervention.” Such vagueness is problematic, and its consequences could extend far beyond what the well-intentioned proponents of regulation envision.

Consider the “intentional” standard of the Tennessee bill. While it is true that the intentionality of any such effort matters, defining it is tough. If knowing that an activity will affect the atmosphere is enough for it to be considered geoengineering, even driving a car—since you know its emissions warm up the climate—could fall under the banner. Or, to pick an example operating on a much larger scale, a utility might run afoul of the bill, since operating a power plant produces both carbon dioxide that warms up the planet and sulfur dioxide pollution that can exert a cooling effect.

Indeed, a single coal-fired plant can pump out more than 40,000 tons of the latter gas a year, dwarfing the few kilograms proposed for some stratospheric experiments. That includes the Harvard project recently scrapped in light of concerns from environmental and Indigenous groups. 

Of course, one might say that in all those other cases, the climate-altering impact of emissions is only a side effect of another activity (going somewhere, producing energy, having fun). But then, outdoor tests of solar geoengineering can be framed as efforts to gain further knowledge for societal or scientific benefit. More stringent regulations suggest that, of all intentional activities, it is those focused on knowledge-seeking that need to be subjected to the highest scrutiny—while joyrides, international flights, or bitcoin mining are all fine.

There could be similar challenges even with more modest proposals to require greater transparency around geoengineering research. In a submission to federal officials in March, a group of scholars suggested, among other sensible updates, that any group proposing to conduct outdoor research on weather modification anywhere in the world should have to notify the National Oceanic and Atmospheric Administration in advance.

But creating a standard that would require notifications from anyone, anywhere who “foreseeably or intentionally seeks to cause effects within the United States” could be taken to mean that nations can’t modify any kind of emissions (or convert forests to farmland) before consulting with other countries. For instance, in 2020, the International Maritime Organization introduced rules that cut sulfate emissions from the shipping sector by more than 80%, all at once. The benefits for air quality and human health are pretty clear, but research also suggested that the change would unmask additional global warming, because such pollution can reflect away sunlight either directly or by producing clouds. Would this qualify?

It is worth noting that both those clamoring for more regulations and those itching to just go out and “do something” claim to have, as their guiding principle, a genuine concern for the climate and human welfare. But again, this does not necessarily justify a “Ban first—ask questions later” approach, just as it doesn’t justify “Do something first—ask permission later.”

Those demanding bans are right in saying that there are risks in geoengineering. Those include potential side effects in certain parts of the world—possibilities that need to be better studied—as well as vexing questions about how the technology could be fairly and responsibly governed in a fractured world that’s full of competing interests.

The more recent entrance of venture-backed companies into the field, selling dubious cooling credits or playing up their “proprietary particles,” certainly isn’t helping its reputation with a public that’s rightly wary of how profit motives could influence the use of technologies with the power to alter the entire planet’s climate. Nor is the risk that rogue actors will take it upon themselves to carry out these sorts of interventions. 

But burdensome regulation isn’t guaranteed to deter bad actors. If anything, they’ll just go work in the shadows. It is, however, a surefire way to discourage responsible researchers from engaging in the field. 

All those concerned about “meddling with the climate” should be in favor of open, public, science-informed strategies to talk more, not less, about geoengineering, and to foster transparent research across disciplines. And yes, this will include not just “harmless” modeling studies but also outdoor tests to understand the feasibility of such approaches and narrow down uncertainties. There’s really no way around that. 

In environmental sciences, tests involving dispersing substances are already performed for many other reasons, as long as they’re deemed safe by some reasonable standard. Similar experiments aimed at better understanding solar geoengineering should not be treated differently just because some people (but certainly not all of them) object on moral or environmental grounds. In fact, we should forcefully defend such experiments both because freedom of research is a worthy principle and because more information leads to better decision-making.

At the same time, scientists can’t ignore all the concerns and fears of the general public. We need to build more trust around solar geoengineering research and confidence in researchers. And we must encourage people to consider the issue from multiple perspectives and in relation to the rising risks of climate change.

This can be done, in part, through thoughtful scientific oversight efforts that aim to steer research toward beneficial outcomes by fostering transparency, international collaborations, and public engagement without imposing excessive burdens and blanket prohibitions.

Yes, this issue is complicated. Solar geoengineering may present risks and unknowns, and it raises profound, sometimes uncomfortable questions about humanity’s role in nature. 

But we also know for sure that we are the cause of climate change—and that it is exacerbating the dangers of heat waves, wildfires, flooding, famines, and storms that will inflict human suffering on staggering scales. If there are possible interventions that could limit that death and destruction, we have an obligation to evaluate them carefully, and to weigh any trade-offs with open and informed minds. 

Daniele Visioni is a climate scientist and assistant professor at Cornell University.

Three things we learned about AI from EmTech Digital London

23 April 2024 at 05:55

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week, MIT Technology Review held its inaugural EmTech Digital conference in London. It was a great success! I loved seeing so many of you there asking excellent questions, and it was a couple of days full of brain-tickling insights about where AI is going next. 

Here are the three main things I took away from the conference.

1. AI avatars are getting really, really good

UK-based AI unicorn Synthesia teased its next generation of AI avatars, which are far more emotive and realistic than any I have ever seen before. The company is pitching these avatars as a new, more engaging way to communicate. Instead of skimming through pages and pages of onboarding material, for example, new employees could watch a video where a hyperrealistic AI avatar explains what they need to know about their job. This has the potential to change the way we communicate, allowing content creators to outsource their work to custom avatars and making it easier for organizations to share information with their staff. 

2. AI agents are coming 

Thanks to the ChatGPT boom, many of us have interacted with an AI assistant that can retrieve information. But the next generation of these tools, called AI agents, can do much more than that. They are AI models and algorithms that can make decisions autonomously in a dynamic world. Imagine an AI travel agent that can not only retrieve information and suggest things to do, but also take action to book things for you, from flights to tours and accommodations. Every AI lab worth its salt, from OpenAI to Meta to startups, is racing to build agents that can reason better, memorize more steps, and interact with other apps and websites.

3. Humans are not perfect either 

One of the best ways we have of ensuring that AI systems don’t go awry is getting humans to audit and evaluate them. But humans are complicated and biased, and we don’t always get things right. To build machines that meet our expectations and complement our limitations, we should account for human error from the get-go. In a fascinating presentation, Katie Collins, an AI researcher at the University of Cambridge, explained how she found that allowing people to express how certain or uncertain they are—for example, by using a percentage to indicate how confident they are in labeling data—leads to better accuracy for AI models overall. The main downside of this approach is that it costs more and takes more time.
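The technique Collins describes can be made concrete with a small sketch. Below is a minimal, hypothetical illustration (our own, not her actual method) of converting an annotator's stated confidence into a soft training target rather than a hard label:

```python
import math

def soft_target(label, confidence, num_classes):
    """Turn a hard label plus the annotator's stated confidence into a
    soft probability target: the chosen class gets `confidence`, and the
    leftover probability mass is spread evenly over the other classes."""
    target = [(1.0 - confidence) / (num_classes - 1)] * num_classes
    target[label] = confidence
    return target

def cross_entropy(pred, target):
    """Cross-entropy between a predicted distribution and a (possibly
    soft) target; terms with zero target weight contribute nothing
    and are skipped."""
    return -sum(t * math.log(p) for p, t in zip(pred, target) if t > 0)

# An annotator labels an example as class 0 but is only 70% sure.
soft = soft_target(0, confidence=0.7, num_classes=3)  # ≈ [0.7, 0.15, 0.15]
hard = soft_target(0, confidence=1.0, num_classes=3)  # [1.0, 0.0, 0.0]

# A model that leans toward class 1 is punished less by the soft target,
# so an unsure label pulls the model around less aggressively.
pred = [0.2, 0.6, 0.2]
loss_soft = cross_entropy(pred, soft)
loss_hard = cross_entropy(pred, hard)
```

Under the soft target, a model that disagrees with an uncertain annotator incurs a smaller loss than it would under a hard label, which is one intuition for why letting labelers express uncertainty can improve overall accuracy.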

And we’re doing it all again next month, this time at the mothership. 

Join us for EmTech Digital at the MIT campus in Cambridge, Massachusetts, on May 22-23, 2024. I’ll be there—join me! 

Our fantastic speakers include Nick Clegg, president of global affairs at Meta, who will talk about elections and AI-generated misinformation. We also have the OpenAI researchers who built the video-generation AI Sora, sharing their vision on how generative AI will change Hollywood. Then Max Tegmark, the MIT professor who wrote an open letter last year calling for a pause on AI development, will take stock of what has happened and discuss how to make powerful systems more safe. We also have a bunch of top scientists from the labs at Google, OpenAI, AWS, MIT, Nvidia and more. 

Readers of The Algorithm get 30% off with the discount code ALGORITHMD24.

I hope to see you there!


Now read the rest of The Algorithm

Deeper Learning

Researchers taught robots to run. Now they’re teaching them to walk.

Researchers at Oregon State University have successfully trained a humanoid robot called Digit V3 to stand, walk, pick up a box, and move it from one location to another. Meanwhile, a separate group of researchers from the University of California, Berkeley, has focused on teaching Digit to walk in unfamiliar environments while carrying different loads, without toppling over.

What’s the big deal: Both groups are using an AI technique called sim-to-real reinforcement learning, a burgeoning method of training two-legged robots like Digit. Researchers believe it will lead to more robust, reliable two-legged machines capable of interacting with their surroundings more safely—as well as learning much more quickly. Read more from Rhiannon Williams
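One key ingredient of sim-to-real training, domain randomization, is easy to sketch: the simulator's physics are perturbed every episode so the learned controller cannot overfit to a single configuration. The toy example below is our own illustrative sketch (a 1-D proportional-control task, not the researchers' code or Digit's actual training setup):

```python
import random

random.seed(0)  # deterministic for the example

def sample_dynamics():
    """Domain randomization: every training episode perturbs the
    simulator's physical parameters, so a policy that scores well must
    work across the whole range rather than exploit one exact setup."""
    return {
        "mass": random.uniform(0.8, 1.2),      # ±20% around nominal
        "friction": random.uniform(0.5, 1.5),
    }

def rollout(gain, dyn, steps=50, dt=0.05):
    """Toy 1-D task standing in for a real simulator: a proportional
    controller drives position x toward 0; returns final absolute error."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        force = -gain * x - dyn["friction"] * v
        v += (force / dyn["mass"]) * dt
        x += v * dt
    return abs(x)

# "Training" by brute force: pick the controller gain with the best
# average performance over many randomized versions of the simulator.
gains = [0.5, 1.0, 2.0, 4.0]
scores = {g: sum(rollout(g, sample_dynamics()) for _ in range(200)) / 200
          for g in gains}
best = min(scores, key=scores.get)
```

A controller tuned this way tends to transfer better because it was never allowed to rely on one exact value of mass or friction; the same logic, scaled up to full physics simulators and neural-network policies, underpins sim-to-real training for robots like Digit.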

Bits and Bytes

It’s time to retire the term “user”
The proliferation of AI means we need a new word. Tools we once called AI bots have been assigned lofty titles like “copilot,” “assistant,” and “collaborator” to convey a sense of partnership instead of a sense of automation. But if AI is now a partner, then what are we? (MIT Technology Review)

Three ways the US could help universities compete with tech companies on AI innovation
Empowering universities to remain at the forefront of AI research will be key to realizing the field’s long-term potential, argue Ylli Bajraktari, Tom Mitchell, and Daniela Rus. (MIT Technology Review)

AI was supposed to make police body cams better. What happened?
New AI programs that analyze bodycam recordings promise more transparency but are doing little to change culture. This story serves as a useful reminder that technology is never a panacea for these sorts of deep-rooted issues. (MIT Technology Review)

The World Health Organization’s AI chatbot makes stuff up
The World Health Organization launched a “virtual health worker” to help people with questions about things like mental health, tobacco use, and healthy eating. But the chatbot frequently offers outdated information or simply makes things up, a common issue with AI models. This is a great cautionary tale of why it’s not always a good idea to use AI chatbots. Hallucinating chatbots can lead to serious consequences when they are applied to important tasks such as giving health advice. (Bloomberg $)

Meta is adding AI assistants everywhere in its biggest AI push
The tech giant is rolling out its latest AI model, Llama 3, in most of its apps including Instagram, Facebook, and WhatsApp. People will also be able to ask its AI assistants for advice, or use them to search for information on the internet. (New York Times $)

Stability AI is in trouble
One of the first new generative AI unicorns, the company behind the open-source image-generating AI model Stable Diffusion, is laying off 10% of its workforce. Just a couple of weeks ago its CEO, Emad Mostaque, announced that he was leaving the company. Stability has also lost several high-profile researchers and struggled to monetize its product, and it is facing a slew of lawsuits over copyright. (The Verge)

This architect is cutting up materials to make them stronger and lighter

23 April 2024 at 05:00

As a child, Emily Baker loved to make paper versions of things: cameras, a spaceship cockpit, buildings for a town in outer space.

It was a habit that stuck. Years later, studying architecture in graduate school at the Cranbrook Academy of Art in Michigan, she was playing around with some paper and scissors. It was 2010, and the school was about to buy a CNC plasma cutter, a computer-controlled machine capable of cutting lines into sheets of steel. As she thought about how she might experiment with it, she made a striking discovery.

To develop Spin-Valence, a novel structural system, Emily Baker created prototypes by making cuts and folds in sheets of paper before shifting to digitally cut steel.

By making a series of cuts and folds in a sheet of paper, Baker found she could produce two planes connected by a complex set of thin strips. Without the need for any adhesive like glue or tape, this pattern created a surface that was thick but lightweight. Baker named her creation Spin-Valence. Structural tests later showed that an individual tile made this way, and rendered in steel, can bear more than a thousand times its own weight. 

Baker in her fabrication lab at the University of Arkansas.
BROOKE BIERHAUS

In chemistry, spin valence is a theory dealing with molecular behavior. Baker didn’t know of the existing term when she named her own invention—“It was a total accident,” she says. But diagrams related to chemical spin valence theory, she says, do “seem to have a network of patterns that are very similar to the tilings I’m working with.” 

Soon, Baker began experimenting with linking individual tiles together to produce a larger plane. There are perhaps thousands of geometric cutting patterns that can create these multiplane structures, and she has so far discovered only some of them. Certain patterns are stronger than others, and some are better at making curved planes. 

Baker uses software to explore each pattern type but continues to work with cut paper to model possibilities. The Form Finding Lab at Princeton is now testing various tiles under tension and compression loads, and the tiles have already proved remarkably strong.

Baker is also exploring ways to use Spin-Valence in architecture and design. She envisions using the technique to make shelters or bridges that are easier to transport and assemble following a natural disaster, or to create lightweight structures that could be packed with supplies for missions to outer space. (Closer to home, her mother has begun passing along ideas to her quilting group; the designs bear a strong resemblance to quilt patterns.)

“What I find most exciting about the system is the way it adds stiffness to something that was previously very flexible,” says Isabel Moreira de Oliveira, a PhD candidate in civil engineering at Princeton, who is writing her dissertation on Spin-Valence and testing which shapes work best for specific applications. “It entirely changes the behavior of something without adding material to it.” Plus, she adds, “you can ship this flat. The assembly information is embedded in how it’s cut.” This could help reduce transportation costs and lower carbon emissions generated from shipping. 

Baker grew up in Alabama and Arkansas, the daughter of a librarian and a chemical engineer at Eastman Kodak. Everybody in the family made things by hand—her mother taught her how to sew, and her father taught her how to work with wood. In high school, she took some classes in the school’s agricultural program, including welding, where she had a particularly supportive teacher. “I’ll tell you who the best two welders in the class are gonna be right now,” she recalls him saying, as he pointed at her and the only other female student. And, she says, “it was true. We picked it up a little faster than the guys. It was really empowering.”

Baker went on to study chemical engineering at the University of Arkansas in Fayetteville before she switched to architecture, drawn to the more tactile work. After five years at a small architecture firm in Jackson, Mississippi, she enrolled at Cranbrook, where she sensed she would have the space and tools to experiment. She now teaches in the architecture program at the University of Arkansas.

No doubt her experience in high school welding class aided in a more recent collaboration. Together with her UA colleague Edmund Harriss, an assistant professor of mathematics and art, she has developed Zip-Form—a system for welding and bending two sheets of steel together to make complex 3D curves using low-cost tools and easily learned skills. 

As a process, it is “a physical manifestation of integrating differential properties of the curve,” Harriss says. “The way the mathematical theory links to manufacturing process in Zip-Form is incredibly clean and elegant.” He explains that Baker’s willingness to engage seriously with the mathematics sets her apart from other architects he has worked with: “I think often people get intimidated by the mathematics and try to fall back on their expertise to say where the mathematics isn’t working.” Baker wasn’t like that. 

Like Spin-Valence, Zip-Form has potential applications in construction. Shortly after developing the technique, Baker met Mohamed Ismail, now an assistant professor of architecture at the University of Virginia, who was then a PhD student at MIT. He was working on low-cost, low-­carbon structural systems for housing in developing countries. When Ismail learned that Baker and Harriss had found a way to make complex 3D structures out of flat sheets, his mind immediately went to concrete. A system like this, he says, is “exactly the kind of thing that is necessary when you’re trying to build complex concrete formwork [molds to pour the concrete into] in places where you don’t have a robotic arm or 3D printer.”

In a project they worked on together, Baker and Ismail used Zip-Form to create a mold for a 16-foot prototype curved beam that’s more environmentally sustainable than a traditional beam, reducing the total carbon emissions associated with resource extraction, production, transport, and other stages in the typical life cycle of a beam by 40%.

While most concrete buildings use vastly more material than is structurally necessary, curved beams save concrete by using material only where it is needed to bear a structural load. Concrete is responsible for approximately 8% of total global carbon emissions, but it is also desperately needed to build housing, especially in places like India and Africa, where the population is forecast to grow rapidly in the next 20 years. Zip-Form demands more labor than more automated processes, but the equipment it uses is more affordable. 
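The material logic can be illustrated with a back-of-the-envelope calculation (our own sketch with illustrative numbers, not figures from the project). For a simply supported beam under a uniform load, the bending moment peaks at midspan and vanishes at the supports, so a beam whose depth follows the moment diagram uses noticeably less material than a constant-depth beam sized for the peak:

```python
import math

L = 16.0    # span in feet, echoing the 16-foot prototype
N = 1000    # integration segments

def moment(x):
    """Bending moment of a simply supported beam under a uniform load
    w (set to 1): M(x) = w*x*(L - x)/2, peaking at midspan."""
    return x * (L - x) / 2.0

M_max = moment(L / 2.0)

# For a rectangular section of fixed width, bending capacity grows with
# depth squared, so the required depth scales with sqrt(M). Integrate the
# depth profile to compare material use (proportional to depth * length).
dx = L / N
shaped = sum(math.sqrt(moment((i + 0.5) * dx) / M_max) * dx for i in range(N))
prismatic = L  # constant unit depth, sized everywhere for the peak moment

savings = 1.0 - shaped / prismatic   # ≈ 21% less material
```

Even this crude model, which ignores shear and minimum-depth requirements, shows roughly a 20% material saving from shaping alone; the 40% figure cited above also reflects reductions at other stages of the beam's life cycle.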

Ismail and Baker are now working with a fabrication company in Kenya to demonstrate to real estate developers and an African housing nonprofit that this technology is competitive on price with traditional methods, and thus has a key role to play in affordable construction. The construction industry in the US can be mind-numbingly slow to adopt new techniques, Baker and Ismail both say, but they believe Zip-Form can easily be brought into building projects, using tools and materials that are already available. 

Baker began exploring digital fabrication as a graduate student at the Cranbrook Academy of Art, a path that has resulted in steel prototypes like this one.
Baker and Harriss standing in one of their sculptures
Baker and Edmund Harriss, a mathematician, developed Zip-Form, a system for welding and bending two sheets of steel together to make complex 3D shapes.

Zip-Form has creative potential, too. Danielle Hatch, an artist known for large-scale fabric installations, is using the system to make a public sculpture in Arkansas inspired by the movement of ribbons on the dancers she saw at a Hispanic cultural festival. “How could I evoke that sense of lightness and play, with metal?” she wondered. Zip-Form allowed her to make steel that she describes as “lyrical.” 

Baker has been inspired by the work of R. Buckminster Fuller, the polymath known for popularizing the geodesic dome—and for turning his mind to everything from affordable housing and transportation to renewable energy. She has studied his story closely, especially reflecting on the gaps between the broad scope of his thought—which often sought to revolutionize entire systems—and the limited real-world changes that resulted from his ideas. “Is there something I should have learned from his life and experience?” she wonders. 

Like Fuller, whose work extended far beyond architecture to consider the ways people relate to one another and to materials, Baker doesn’t think just about physical forms but about how people build, live, and manufacture—and the hierarchies that determine who does what. She thinks of architecture as always being in conversation with the body of the builder. A brick, she points out, is an excellent example because it’s the perfect size for a worker to hold while slathering it with mortar. Baker wants the tools she creates to be just as practical.

Sofi Thanhauser is a writer, artist, and musician based in Brooklyn, New York, and the author of Worn: A People’s History of Clothing.

A Grammy for Miguel Zenón

22 April 2024 at 10:04

Nobel Prizes and other scientific honors are nearly routine at MIT, but a Grammy Award is something we don’t see every year. That’s what Miguel Zenón, an assistant professor of music and theater arts, has won: El Arte Del Bolero Vol. 2, which he recorded with the pianist and composer Luis Perdomo, received the Grammy for best Latin jazz album in February.

Miguel holding his Grammy
MAARTEN DE BOER

“I’m incredibly happy and honored with this Grammy win,” says Zenón, a renowned saxophonist. “We’ve been making albums for a long time, so it’s extremely rewarding to earn this recognition.”

“The Latin American Songbook is so vast and varied that it naturally lends itself to limitless explorations,” Zenón wrote in the album’s liner notes. “We purposely looked beyond the Caribbean (exploring composers from México, Venezuela, and Panamá, for example) because we wanted to emphasize the point that these songs deserved to be explored and recognized for what they are, beyond labels, categories, and regionalisms.” 

Born and raised in San Juan, Puerto Rico, Zenón has recorded and toured with musicians including Charlie Haden, Fred Hersch, David Sánchez, Danilo Pérez, Kenny Werner, Bobby Hutcherson, and the SF Jazz Collective. He joined the MIT faculty in 2023, and his many accolades include 12 Grammy nominations and a 2008 MacArthur “genius” grant.

The Download: saving seals with artificial snow, and AI’s effects on politics

22 April 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

These artificial snowdrifts protect seal pups from climate change

For millennia, during Finland’s blistering winters, wind drove snow into meters-high snowbanks along Lake Saimaa’s shoreline, offering prime real estate from which seals carved cave-like dens to shelter from the elements and raise newborns.

But in recent decades, these snowdrifts have failed to form in sufficient numbers, as climate change has brought warming temperatures and rain in place of snow, decimating the seal population.

For the last 11 years, humans have stepped in to construct what nature can no longer reliably provide. Human-made snowdrifts, built using handheld snowplows, now house 90% of seal pups. They are the latest in a raft of measures that have brought Saimaa’s seals back from the brink of extinction. Read the full story.

—Matthew Ponsford

Matthew’s story is from the next magazine issue of MIT Technology Review, set to go live this Wednesday, April 24, on the theme of Build. If you don’t already, subscribe now to get a copy when it lands.

Politics in the AI era

2024 is a banner year for elections across the world, and it arrives just as AI advances come thick and fast. This collision of events raises a crucial question: how will the rise of AI change politics?

Join MIT Technology Review Editor in Chief Mat Honan and Executive Editor Amy Nordrum for a LinkedIn Live event where they’ll explore the impact of political influencers and deepfakes, and unpack industry insights and predictions. Register here to tune in at 1pm ET tomorrow.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Inside the movement to create AI models without guardrails 
These ‘anti-woke’ systems often introduce more problems than solutions. (WSJ $)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)

2 California wants to force Google and Meta to compensate news publishers
Unsurprisingly, they’re not taking the so-called ‘link tax’ lying down. (WP $)
+ Japan’s regulators have accused Google of anticompetitive behavior. (Bloomberg $)

3 China is planning on becoming the global leader for flying cars
Its regulators are beavering away to green-light projects as quickly as possible. (FT $)
+ The aviation industry is still weathering the backlash over Boeing’s issues. (Vox)
+ These aircraft could change how we fly. (MIT Technology Review)

4 TikTok’s top lawyer is stepping down
Amid the company’s highly publicized legal tussle with the US government. (The Information $)
+ The US Senate is expected to vote on its proposed ban bill this week. (The Guardian)

5 A huge cyberattack revealed Finnish people’s psychotherapy records
The fallout was likened to the trauma of a terrorist attack. (Bloomberg $)

6 A UK sex offender has been banned from using AI tools
In the first known legal case of its kind. (The Guardian)
+ Catching bad content in the age of AI. (MIT Technology Review)

7 The internet is rife with scams
They’re so convincing, even experts are falling for them. (NYT $)
+ How culture drives foul play on the internet. (MIT Technology Review)

8 The future of AI gadgets is probably just phones
The Ai Pin’s savage reviews look like an omen. (The Verge)

9 Spare a thought for Nvidia’s engineers
A million dollars doesn’t go too far these days, according to one worker. (Insider $)

10 This camera produced AI-generated poetry instead of photos
Is a picture really worth a thousand words? (TechCrunch)
+ A Salvador Dalí AI lobster telephone has gone on display in Florida. (Insider $)

Quote of the day

“Politics is being treated as a four-letter word and pushed out of the public square.”

—Eric Wilson, managing partner at Republican campaign tech incubator Startup Caucus, lamenting Meta’s decision to deprioritize political content on its platforms, in comments to the Washington Post.

The big story

Cops built a shadowy surveillance machine in Minnesota after George Floyd’s murder 

March 2022

Law enforcement agencies in Minnesota have been carrying out a secretive, long-running surveillance program targeting civil rights activists and journalists in the aftermath of the murder of George Floyd in May 2020.

Run under a consortium known as Operation Safety Net, the program was set up in spring 2021, ostensibly to maintain public order as Minneapolis police officer Derek Chauvin went on trial for Floyd’s murder.

But an investigation by MIT Technology Review reveals that the initiative expanded far beyond its publicly announced scope to include expansive use of tools to scour social media, track cell phones, and amass detailed images of people’s faces. Read the full story.

—Tate Ryan-Mosley & Sam Richards

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These cats have a bright pottery-making career ahead of them.
+ You just can’t escape British workwear these days.
+ It’s never too late to take up something you love.
+ The first-ever model of Star Trek’s USS Enterprise NCC-1701 has been returned to the family of series creator Gene Roddenberry.
