
The Download: AI comics, and US tensions with China over EVs

6 March 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

I used generative AI to turn my story into a comic—and you can too

—Will Douglas Heaven

Thirteen years ago, as an assignment for a journalism class, I wrote a stupid short story about a man who eats luxury cat food. This morning, I sat and watched as a generative AI platform called Lore Machine brought my weird words to life.

Lore Machine analyzed the text, extracted descriptions of the characters and locations mentioned, and then handed those bits of information off to an image-generation model. An illustrated storyboard popped up on the screen. As I clicked through vivid comic-book renderings of my half-forgotten characters, my heart was pounding.

What sets Lore Machine apart from its rivals is how easy it is to use. Between uploading my story and downloading its storyboard, I clicked maybe half a dozen times. That makes it one of a new wave of user-friendly tools that hide the stunning power of generative models behind a one-click web interface—and heralds the arrival of one-click AI. Read the full story.

Chinese EVs have entered center stage in US-China tensions

So far, electric vehicles have mostly been discussed in the US through a scientific, economic, or environmental lens. But all of a sudden, they have become highly political. 

Last Thursday, the Biden administration announced it would investigate the security risks posed by Chinese-made smart cars, which it warned could “collect sensitive data about our citizens and our infrastructure and send this data back to the People’s Republic of China.”

While many other technologies from China have been scrutinized because of security concerns, EVs have largely avoided that sort of attention until now. But US-China relations have been at a low point since the Trump years and the pandemic, and it seems like only a matter of time before any trade or interaction between the two countries falls under security scrutiny. Now it’s EVs’ turn. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter giving you the inside track on all things happening in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk wanted to merge OpenAI with Tesla 
OpenAI is airing all its dirty laundry to counter Musk’s lawsuit. (The Verge)
+ Musk and Sam Altman used to love each other. What happened? (WSJ $)

2 Is Bitcoin really back?
It’s enjoying yet another major surge—but remains as volatile as ever. (NYT $)
+ It looks like the US regulator’s approval is to blame/thank. (The Guardian)
+ What is crypto for, exactly? (The Atlantic $)
+ Cryptomania is proving hard to kill. (Reuters)

3 Google is determined to kill off AI clickbait
Good luck with that. (Wired $)
+ We are hurtling toward a glitchy, spammy, scammy, AI-powered internet. (MIT Technology Review)

4 Self-driving cars are in crisis mode 
The bubble has burst, and no one knows where to go from here. (NY Mag $)
+ What’s next for robotaxis in 2024. (MIT Technology Review)

5 Weight loss drugs are for the brain, not the gut
Our original scientific understanding of how they work is all wrong. (The Atlantic $)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL?  (MIT Technology Review)

6 The world’s biggest EV maker is rewriting the rules of batteries
China’s BYD is betting big on sodium ion cells. (IEEE Spectrum)
+ Why hydrogen is losing the race to power cleaner cars. (MIT Technology Review)

7 Inside the unstoppable rise of China’s retail giant Temu
US investors are rushing to finance its rapid expansion into the West, while turning a blind eye to how it’s run. (FT $)
+ The war over fast fashion is heating up. (MIT Technology Review)

8 Would you live in a 3D-printed home?
Non-profit Citizen Robotics seems to think so. (Insider $)
+ Meet the designers printing houses out of salt and clay. (MIT Technology Review)

9 Fly me to the moon, again 🌕
Lunar explorations are finally underway, after decades of inaction. (New Yorker $)
+ In other news, China and Russia want to build a nuclear power plant on the moon. (Bloomberg $)
+ What’s next for the moon. (MIT Technology Review)

10 We’re not going to define the Anthropocene after all
And scientists are far from happy about it. (New Scientist $)

Quote of the day

“I’ve been influenced! To buy any brand but Tarte.” 

—A TikTok user is not impressed by beauty brand Tarte Cosmetics’ decision to take 30 influencers on a lavish trip to Bora Bora in the current economic climate, the New York Times reports.

The big story

Inside the messy ethics of making war with machines

August 2023

In recent years, intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—have become a matter of serious concern. Giving an AI system the power to decide matters of life and death would radically change warfare forever.

Intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. 

However, these systems have become sophisticated enough to raise novel questions—ones that are surprisingly tricky to answer. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story.

—Arthur Holland Michel

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ If you haven’t already seen it, kick back and admire the completely bonkers decision to splice beer adverts into the original Star Wars trilogy when it was aired on Chilean TV.
+ These pictures documenting the great British package holiday tradition are fantastic.
+ Brace yourself: a Manchester City docuseries is on the way. ⚽
+ What does it take to build the world’s tallest towers? Experience.

Emissions hit a record high in 2023. Blame hydropower.

7 March 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Hydropower is a staple of clean energy—the modern version has been around for over a century, and it’s one of the world’s largest sources of renewable electricity.

But last year, weather conditions caused hydropower to fall short in a major way, with generation dropping by a record amount. In fact, the decrease was significant enough to have a measurable effect on global emissions. Total energy-related emissions rose by about 1.1% in 2023, and a shortfall of hydroelectric power accounts for 40% of that rise, according to a new report from the International Energy Agency.

Between year-to-year weather variability and climate change, there could be rocky times ahead for hydropower. Here’s what we can expect from the power source and what it might mean for climate goals. 

Drying up

Hydroelectric power plants use moving water to generate electricity. The majority of plants today use dams to hold back water, creating reservoirs. Operators can allow water to flow through the power plant as needed, creating an energy source that can be turned on and off on demand. 

This dispatchability is a godsend for the grid, especially because some renewables, like wind and solar, aren’t quite so easy to control. (If anyone figures out how to send more sunshine my way, please let me know—I could use more of it.) 

But while most hydroelectric plants do have some level of dispatchability, the power source is still reliant on the weather, since rain and snow are generally what fills up reservoirs. That’s been a problem for the past few years, when many regions around the world have faced major droughts. 

The world actually added about 20 gigawatts of hydropower capacity in 2023, but because of weather conditions, the amount of electricity generated from hydropower fell overall.

The shortfall was especially bad in China, with generation falling by 4.9% there. North America also faced droughts that contributed to hydro’s troubles, partly because El Niño brought warmer and drier conditions. Europe was one of the few places where conditions improved in 2023—mostly because 2022 was an even worse year for drought on the continent.

As hydroelectric plants fell short, fossil fuels like coal and natural gas stepped in to fill the gap, contributing to a rise in global emissions. In total, changes in hydropower output had more of an effect on global emissions than the post-pandemic aviation industry’s growth from 2022 to 2023. 

A trickle

Some of the changes in the weather that caused falling hydropower output last year can be chalked up to expected yearly variation. But in a changing climate, a question looms: Is hydropower in trouble?

The effects of climate change on rainfall patterns can be complicated and not entirely clear. But there are a few key mechanisms by which hydropower is likely to be affected, as one 2022 review paper outlined:

  • Rising temperatures will mean more droughts, since warmer air sucks up more moisture, causing rivers, soil, and plants to dry out more quickly. 
  • Winters will generally be warmer, meaning less snowpack and ice, which often fills up reservoirs in the early spring in places like the western US. 
  • There’s going to be more variability in precipitation, with periods of more extreme rainfall that can cause flooding (meaning water isn’t stored neatly in reservoirs for later use in a power plant).

What all this will mean for electricity generation depends on the region of the world in question. One global study from 2021 found that around half of countries with hydropower capacity could expect to see a 20% reduction in generation once per decade. Another report focused on China found that in more extreme emissions scenarios, nearly a quarter of power plants in the country could see that level of reduced generation consistently. 

It’s not likely that hydropower will slow to a mere trickle, even during dry years. But the grid of the future will need to be prepared for variations in the weather. Having a wide range of electricity sources and tying them together with transmission infrastructure over wide geographic areas will help keep the grid robust and ready for our changing climate. 

Related reading

Droughts across the western US have been cutting into hydropower for years. Here’s how changing weather could affect climate goals in California.

While adaptation can help people avoid the worst impacts of climate change, there’s a limit to how much adapting can really help, as I found when I traveled to El Paso, Texas, famously called the “drought-proof city.”

Drought is creating new challenges for herders, who have to handle a litany of threats to their animals and way of life. Access to data could be key in helping them navigate a changing world.

Another thing

Chinese EVs have entered center stage in the ongoing tensions between the US and China. The vehicles could help address climate change, but the Biden administration is wary of allowing them into the market. There are two major motivations: security and the economy. Read more in my colleague Zeyi Yang’s latest newsletter here.

Keeping up with climate  

A new satellite that launched this week will be keeping an eye on methane emissions. Tracking leaks of the powerful greenhouse gas could be key in addressing climate change. (New York Times)

→ This isn’t our first attempt at tracking greenhouse gases from space—but here’s how MethaneSAT is different from other methane-detecting satellites. (Heatmap)

Smarter charging of EVs could be essential to the grid of the future, and California is working on a new program to test it out. (Canary Media)

The magnets that power wind turbines nearly always wind up in a landfill. A new program aims to change that by supporting new methods of recycling. (Grist)

→ One company wants to do without the rare earth metals that are used in today’s powerful magnets. (MIT Technology Review)

Data centers burn through water to keep machinery cool. As more of the facilities pop up, in part to support AI tools like ChatGPT, they could stretch water supplies thin in some places. (The Atlantic)

No US state has been more enthusiastic about heat pumps than Maine. While it might seem an unlikely match—the appliances can lose some of their efficiency in the cold—the state is a success story for the technology. (New York Times)

New rules from the US Securities and Exchange Commission would require companies to report their emissions and expected climate risks. The final version is watered down from an earlier proposal, which would have included a wider variety of emissions. (Associated Press)

The Download: hydropower’s rocky path ahead, and how to reverse falling birth rates

7 March 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Emissions hit a record high in 2023. Blame hydropower.

Hydropower is one of the world’s largest sources of renewable electricity.

But last year, weather conditions caused hydropower to fall short in a major way, with generation dropping by a record amount. In fact, the decrease was significant enough to have a measurable effect on global emissions. 

Total energy-related emissions rose by just over 1% in 2023, and a shortfall of hydroelectric power accounts for 40% of that rise, according to a new report from the International Energy Agency.

Between year-to-year weather variability and climate change, there could be rocky times ahead for hydropower. Here’s what we can expect from the power source and what it might mean for climate goals. Read the full story.

—Casey Crownhart

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

How reproductive technology can reverse population decline

Back in October, we held a subscriber-only Roundtables discussion on how innovations from the lab could affect the future of families. Antonio Regalado, our biotechnology editor, sat down with entrepreneur Martín Varsavsky, founder of fertility clinic Prelude Fertility, to explore the causes of plummeting birth rates worldwide, and much more.

If you missed it the first time round, subscribers can watch a recording of the discussion here—and if you’re not already a subscriber, why not become one?

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Microsoft’s AI image generator could pose a danger to society 
At least, that’s what an engineer at the company has alleged. (CNBC)
+ He claims that Microsoft never responded to his warnings. (Ars Technica)
+ Text-to-image AI models can be tricked into generating disturbing images. (MIT Technology Review)

2 The EU is investigating why Apple kicked Epic Games from its App Store
Epic likened the move to a “feudal lord mounting the skulls of their former enemies on their castle walls.” (FT $)
+ The EU will determine whether Apple has broken the law. (Bloomberg $)

3 SpaceX is planning another Starship launch for next week
Regulatory approval is still pending, though. (TechCrunch)
+ Russia’s primary rocket is looking pretty old these days. (Ars Technica)
+ Future satellites could be launched into space courtesy of giant balloons. (Economist $)

4 We’re witnessing the birth of the AI device era
Where software leads, hardware will follow. (The Atlantic $)
+ It turns out AI models are better prompters than humans. (IEEE Spectrum)
+ Meet the jobseekers trying to outwit AI interviewers. (The Guardian)

5 The US office that oversees AI is falling apart
Workers are contending with black mold and leaks alongside the pressure to keep the country safe. (WP $)

6 Microsoft’s Bing search engine is thriving in China
By kowtowing to Beijing’s demands that it sanitize its results. (Bloomberg $)
+ A Google engineer has been indicted over allegedly stealing trade secrets for China. (The Verge)

7 A German man received 217 covid jabs in two and a half years
And he’s absolutely fine. (Wired $)
+ The findings suggest that overexposure to vaccines may not affect immune response. (The Atlantic $)
+ Scientists are finding signals of long covid in blood. (MIT Technology Review)

8 Inside China’s emerging psychedelic scene  
The country’s pervasive surveillance system is reflected in drug-induced visions. (Vox)
+ Mind-altering substances are being overhyped as wonder drugs. (MIT Technology Review)

9 It’s time to ditch your wallet for good
Gen Z pays with their smartphones. (NYT $)
+ A word to the wise: never ask a young person how old they think you are. (Vox)

10 Speed dating is making a major comeback
Dating app fatigue is real, and IRL matchmaking is having a renaissance. (WP $)
+ Here’s how the net’s newest matchmakers help you find love. (MIT Technology Review)

Quote of the day

“Opened TikTok to a video of Mark Wahlberg asking me to pray with him… and I cannot think of a thing I want to do less, actually.”

—X user Brandi Howard makes her feelings about Mark Wahlberg’s pay-to-pray app Hallow clear, the New York Times reports.

The big story

This fuel plant will use agricultural waste to combat climate change

February 2022

A startup called Mote plans to build a new type of fuel-producing plant in California’s fertile Central Valley that would, if it works as hoped, continually capture and bury carbon dioxide, starting from 2024. 

It’s among a growing number of efforts to commercialize a concept first proposed two decades ago as a means of combating climate change, known as bioenergy with carbon capture and sequestration, or BECCS.

It’s an ambitious plan. However, there are serious challenges to doing BECCS affordably and in ways that reliably suck down significant levels of carbon dioxide. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ That’s one seriously smart pooch.
+ Yum—Polish food is having a moment in the spotlight.
+ If you’ve ever harbored a sneaking suspicion that major authors’ plots are kind of the same, you’re not alone.
+ This hyper-realistic portrait tattoo of Jake Gyllenhaal is unbelievable.

How open source voting machines could boost trust in US elections

7 March 2024 at 11:36

While the vendors pitched their latest voting machines in Concord, New Hampshire, this past August, the election officials in the room gasped. They whispered, “No way.” They nodded their heads and filled out the scorecards in their laps. Interrupting if they had to, they asked every kind of question: How much does the new scanner weigh? Are any of its parts made in China? Does it use the JSON data format?

The answers weren’t trivial. Based in part on these presentations, many would be making a once-in-a-decade decision.

These New Hampshire officials currently use AccuVote machines, which were made by a company that’s now part of Dominion Voting Systems. First introduced in 1989, they run on an operating system no longer supported by Microsoft, and some have suffered extreme malfunctions; in 2022, the same model of AccuVote partially melted during an especially warm summer election in Connecticut.

Many towns in New Hampshire want to replace the AccuVote. But with what? If history is any guide, the new machines would likely have to last decades — while also being secure enough to satisfy the state’s election skeptics. Outside the event, those skeptics held signs like “Ban Voting Machines.” Though they were relatively small in number that day, they’re part of a nationwide movement to eliminate voting technology and instead hand count every ballot — an option election administrators say is simply not feasible.

Against this backdrop, more than 130 election officials packed into the conference rooms on the second floor of Concord’s Legislative Office Building. Ultimately, they faced a choice between two radically different futures.

The first was to continue with a legacy vendor. Three companies — Dominion, ES&S, and Hart InterCivic — control roughly 90 percent of the U.S. voting technology market. All three are privately held, meaning they’re required to reveal little about their financial workings, and all three are committed to keeping their source code from becoming fully public.

The second future was to gamble on VotingWorks, a nonprofit with only 17 employees and voting machine contracts in just five small counties, all in Mississippi. The company has taken the opposite approach to the Big Three. Its financial statements are posted on its website, and every line of code powering its machines is published on GitHub, available for anyone to inspect.

At the Concord event, a representative for ES&S suggested that this open-source approach could be dangerous. “If the FBI was building a new building, they’re not going to put the blueprints out online,” he said. But VotingWorks co-founder Ben Adida says it’s fundamental to rebuilding trust in voting equipment and combatting the nationwide push to hand count ballots. “An open-source voting system is one where there are no secrets about how this works,” Adida told the audience. “All the source code is public for the world to see, because why in 2023 are we counting votes with any proprietary software at all?”

Others agree. Ten states currently use VotingWorks’ open-source audit software, including Georgia during its hand count audit in 2020. Other groups are exploring open-source voting technology, including Microsoft, which recently piloted voting software in Franklin County, Idaho. Bills requiring or allowing for open-source voting technology have recently been introduced in at least six states; a bill has also been introduced at the federal level to study the issue further. In New Hampshire, the idea has support from election officials, the secretary of state, and even diehard machine skeptics.

VotingWorks is at the forefront of the movement to make elections more transparent. “Although the voting equipment that we’ve been using for the last 20, 30 years is not responsible for this crisis,” Adida said, “it’s also not the equipment that’s going to get us out of this crisis.” But can an idealist nonprofit really unseat industry juggernauts — and restore faith in democracy along the way?


For years, officials have feared that America’s voting machines are vulnerable to attack. During the 2016 election, Russian hackers targeted election systems in all 50 states, according to the Senate Intelligence Committee. The committee found no evidence that any votes were changed, but it did suggest that Russia could be cataloging options “for use at a later date.”

In 2017, the Department of Homeland Security designated election infrastructure as “critical infrastructure,” noting that “bad cyber actors — ranging from nation states, cyber criminals, and hacktivists — are becoming more sophisticated and dangerous.”

Some conservative activists have suggested simply avoiding machines altogether and hand-counting ballots. But doing so is prohibitively slow and expensive, not to mention more error-prone. Last year, for example, one county in Arizona estimated that counting all 105,000 ballots from the 2020 election would require at least 245 people working every day, including holidays, for almost three weeks.

That leaves election administrators dependent on machines to tally up votes. That August day in Concord, VotingWorks and two of the legacy vendors, Dominion and ES&S, were offering the same kind of product: an optical scanner, which is essentially just a counting machine. After a New Hampshire voter fills in a paper ballot by hand, it’s most likely inserted into an optical scanner, which interprets and tallies the marks. This process is how roughly two-thirds of the country votes. A quarter of voters mark their ballots using machines (aptly named “ballot-marking devices”), which are then fed into an optical scanner as well. About 5 percent use direct recording electronic systems, or DREs, which allow votes to be cast and stored directly on the machine. Only 0.2 percent of voters have their ballots counted by hand.

Workers in Hinsdale, New Hampshire count each of the 1,799 ballots cast after the polls closed on election day in 2016. Hand counts of ballots are prohibitively slow and expensive, and less accurate than machines.
KRISTOPHER RADDER/THE BRATTLEBORO REFORMER VIA AP

Since the 2020 election, the companies that make these machines have been the subject of intense scrutiny from people who deny the election results. Those companies have also come under fire for what critics on both sides of the political aisle describe as their secrecy, lack of innovation, and obstructionist tendencies.

None of the three companies publicly disclose basic information, including their investors and their financial health. It can also be difficult to even get the prices of their machines. Often, jurisdictions come to depend on these firms. Two-thirds of the industry’s revenue comes from support, maintenance, and services for the machines.

Legacy vendors also fight to maintain their market share. In 2017, Hart InterCivic sued Texas to prevent counties from replacing its machines, which don’t produce a paper trail, with machines that do. “For a vendor to sue to prevent auditable paper records from being used in voting shows that market dynamics can be starkly misaligned with the public interest,” concluded a report by researchers at the University of Pennsylvania in collaboration with Verified Voting, a nonprofit that, according to its mission statement, works to promote “the responsible use of technology in elections.”

The companies tell a different story, pointing out that they do disclose their code to certain entities, including third-party firms and independent labs that work on behalf of the federal government to test for vulnerabilities in the software that could be exploited by hackers. In a statement to Undark, ES&S also said it discloses certain financial information to jurisdictions “when requested” and the company shared approximate prices for its voting machines, although it noted that final pricing depends on “individual customer requirements.”

In Concord, officials from some small towns where ballots are still hand-counted were considering switching to machines. Others were considering whether to stick with Dominion and LHS — the New Hampshire-based company that services the machines — or switch to VotingWorks. It would likely be one of the most expensive, consequential decisions of their careers.

Throughout his pitch, the representative for LHS emphasized the continuity between the old AccuVote machines and the new Dominion scanner. Wearing a blazer and a dress shirt unbuttoned at the collar, Jeff Silvestro knew the crowd well. LHS is the only authorized service provider for the entire state’s AccuVote machines, and it’s responsible for offering training for the towns’ staff, delivering memory cards for each election, and weathering a blizzard to come to their poll site and service a broken scanner.

Don’t worry, Silvestro reassured the crowd: The voter experience is the same. “Similarities,” Silvestro told the crowd. “That’s what we’re looking for.”

Just down the hall from Silvestro, Ben Adida laid out a different vision of what voting technology could be. He opened by addressing the “elephant in the room”: the substantial number of people who distrust elections. VotingWorks could help rebuild that trust, he said, by offering three things: security, simplicity, and transparency.

Adida first started working on election technology in 1997, as a computer science undergraduate at MIT, where he built a voting system for student council elections. After earning a Ph.D. from MIT in 2006, with a specialty in cryptography and information security, he did a few more years of election work as a post-doc at Harvard University and then transitioned to data security and privacy for medical data. Later, he served as director of engineering at Mozilla and Square and vice president of engineering at Clever, a digital learning platform for K-12 schools.

In 2016, Adida considered leaving Clever to do election work again, and he followed the progress of STAR-Vote, an open-source election system proposed by Travis County, Texas, that ultimately didn’t move forward. He decided to stay put, but he couldn’t shake the thought of voting technology. Adida knew it was rare for someone to have his background in both product design and election security. “This is kind of a calling,” he said.

Ben Adida, who holds a Ph.D. in computer science, with a specialty in cryptography and information security, is the co-founder of VotingWorks, a nonprofit that builds open-source election technology.
A VotingWorks display at the National Association of Secretaries of State in 2022, showing a voting screen built into a tamper-evident ballot box. The voting machine built by VotingWorks is made from off-the-shelf electronics and open-source software that the company posted on GitHub.

Adida launched VotingWorks in December 2018, with some funding from individuals and Y Combinator, a renowned startup accelerator. The nonprofit is now unique among voting technology vendors: it has disclosed everything, from its donors to the prices of its machines. VotingWorks machines are made from off-the-shelf electronics and, in the long run, according to Adida, are cheaper than competing machines.

The day of the Concord event, Adida wore a T-shirt tucked into his khakis and sported a thick brown mustache. When he started discussing the specs of his machine, he spoke quickly, bounding around the room and even tripping on an errant wire. At one point, he showed off his machine’s end-of-night election report, printed on an 8½-by-11-inch piece of paper, a far cry from the long strips of paper currently used. You don’t have to have “these long CVS receipts,” he said. The room laughed.


Adida and his team are staking out a position in a debate that stretches back to the early days of computing: Is the route to computer security through secrecy, or through total transparency?

Some of the most widely used software today is open-source software, or OSS, meaning anyone can read, modify, and reuse the code. OSS has powered popular products like the operating system Linux and the internet browser Firefox from Mozilla. It’s also used extensively by the Department of Defense.

Proponents of OSS offer three main arguments for why it’s more secure than a locked-box model. First, publicly available source code can be scrutinized by anyone, not just a relatively small group of engineers within a company, increasing the chances of catching flaws. Second, because coders know their work can be scrutinized by anyone, they’re incentivized to produce better work and to explain their approach. “You can go and look at exactly why it’s being done this way, who wrote it, who approved it, and all of that,” said Adida.

Third, OSS proponents say that trying to hide source code will ultimately fail, because attackers can acquire it from the supplier or reverse engineer it themselves. Hackers don’t need perfect source code, just enough to analyze for patterns that may suggest a vulnerability. Breaking is easier than building.

Already, there are indications that bad actors have acquired proprietary voting machine code. In 2021, an election official in Colorado allegedly allowed a conspiracy theorist to access county machines, copy sensitive data, and photograph system passwords — the kind of insider attack that, experts warn, could compromise the security of the coming presidential election.

Not everyone is convinced that open-source code alone is enough to ensure a secure voting machine. “You could have had open-source software, and you might not have found all of the problems or errors or issues,” said Pamela Smith, the president of Verified Voting, citing the numerous lines of code that would need to be examined in a limited amount of time.

Adida doesn’t expect anyone to go through the hundreds of thousands of lines of code on the VotingWorks GitHub. But if they’re curious about a specific aspect, like how the scanner handles paper that’s askew, it’s much more manageable: only a few hundred lines of code. Already, a small number of coders from outside the company have made suggestions on how to improve the software, some of which have been accepted. Then, to fully guard against vulnerabilities, the company relies on its own procedures, third-party reviews, and certification testing at the federal level, said Adida.

Miami-Dade election workers check voting machines for accuracy by reviewing scrolls of paper that Adida likened to “long CVS receipts.”
JOE RAEDLE/GETTY IMAGES

In addition to security, any new machine also needs to be easy for poll workers to operate — and able to perform reliably under the high-stakes conditions of an election day. In interviews, election officials who use the technology in Mississippi raved about its ease of use.

Some also love how responsive the company is to feedback. “They come to us and say, ‘Tell us in the field what’s going on,’” said Sara Dionne, chairman of the election commission in Warren County, Mississippi, which started using VotingWorks in 2020. “We certainly never had that kind of conversation with ES&S ever.”


To expand VotingWorks’ reach, though, Adida must pitch it in places like New Hampshire, where election officials are navigating tight budgets, fallout from the 2020 election, and misperceptions about voting technology.

New Hampshire is a swing state, and since the 2020 election it has had a small but vocal faction of election deniers. At the same time, Republican Secretary of State David Scanlan has done little to marshal resources for new machines. Last year, Scanlan opposed a bill that would have allowed New Hampshire towns and cities to apply for funding from a $12 million federal grant for new voting machines; Republicans in the legislature killed the bill. (Asked what cash-strapped jurisdictions should do if they can’t afford new scanners, Scanlan told Undark they could cannibalize parts from old AccuVote machines.)

Some critics also say Scanlan has done little to dispel some conservative activists’ beliefs that New Hampshire can dispense with machines altogether. At the Concord event, a woman told Undark that Manchester, a city with 68,000 registered voters, could hand count all of its ballots in just four hours. Speaking with Undark, Scanlan acknowledged that this estimate wasn’t correct, and that hand counting is less accurate than machines. However, his office hasn’t communicated this message to the public in any formal way. “I definitely think that he is complicit in allowing [misinformation] to continue to flourish,” said Liz Wester, co-founder of 603 Forward, which encourages civic participation in the state.

The VotingWorks model won over some machine skeptics at the Concord event, like Tim Cahill, a Republican in the New Hampshire House of Representatives. Cahill said he’d prefer that all ballots in the state be hand counted but would choose VotingWorks over the other vendors. “Why would you trust something you can’t put your eyes on?” he told Undark. “We have a lot of smart people in this country and people want open source, they want transparency.”

Poll workers use AccuVote machines to scan absentee ballots in Fairbanks, Alaska.
ERIC ENGMAN/GETTY IMAGES

Open source has found fans in other states, too. Kevin Cavanaugh is a county supervisor in Pinal, Arizona’s third most populous county. He says he started to doubt voting machines after watching a documentary, funded by the election denier Mike Lindell, claiming that the devices have unauthorized software that could change vote totals without detection. In November 2022, Cavanaugh introduced a motion to increase the number of ballots counted by hand in the county, and he told Undark he’d like a full hand count. “But, if we’re using machines,” he added, “then I think it’s important that the source code is available for inspection to experts.”

Back in Concord, Adida appeared to be persuasive to the public at large — or at least those invested enough to attend the event. Among the 201 attendees who filled out a scorecard, VotingWorks was the most popular first choice. But among election officials, the clear preference was Dominion. Some officials were skeptical that open-source technology would mean much to people in their towns. “Your average voter doesn’t care about open source,” said one town clerk.

Still, five towns in New Hampshire have already purchased VotingWorks machines, some of which will be used in upcoming March local elections.


Two main factors determine whether someone has faith in an election, said Charles Stewart III, a political scientist at MIT who has written extensively about trust in elections. The first, which affects roughly 5 to 10 percent of voters, is a negative personal experience at the polls, like long lines, rude poll workers, and problems with machines, which can make the public less willing to trust an election’s outcome.

The second, more influential factor affecting trust is if a voter’s candidate won. That makes it supremely difficult to restore confidence, said Tammy Patrick, a former election official in Maricopa County and the current CEO for programs at the National Association of Election Officials. “The answer on election administration — it’s complex, it’s wonky, it’s not pithy,” she said in a recent press conference. “It’s hard to come back to those emotional pleas with what the reality is.”

Adida agrees with Stewart that VotingWorks alone isn’t going to eliminate election denialism — nor, he said, is that his goal. Instead, he hopes to reach the people who are susceptible to misinformation but haven’t necessarily made up their minds yet, a group he describes as the “middle 80 percent.” Even if they never visit the company’s GitHub, he says, “the fact that we’re putting it all out in the open builds trust.” And when someone says something patently false about the company, Adida can at least ask them to identify the incriminating lines of source code.

Are those two things — rhetorical power and a commitment to transparency — really a match for the disinformation machinery pushing lies across the country? Adida mentioned the myths about legacy vendors’ machines being mis-programmed or incorrectly counting ballots during the 2020 election. “What was the counterpoint to that?” he asked. “It was, ‘Trust us. These machines have been tested.’ I want the counterpoint to be, ‘Hey folks, all the source code is open.’”


Spenser Mestel is a poll worker and independent journalist. His bylines include The New York Times, The Atlantic, The Guardian, and The Intercept.

This article was originally published on Undark. Read the original article.

The many uses of mini-organs

8 March 2024 at 04:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I wrote about a team of researchers who managed to grow lung, kidney, and intestinal organoids from fetal cells floating around in the amniotic fluid. Because these tiny 3D cell clusters come from the fetus and mimic some of the features of a real, full-size organ, they can provide a sneak peek at how the fetus is developing. That’s something nearly impossible to do with existing tools.

An ultrasound, for example, might reveal that a fetus’s kidneys are smaller than they should be, but absent a glaring genetic defect, doctors can’t say why they’re small or figure out a fix. But if they can take a small sample of amniotic fluid and grow a kidney organoid, the problem might become evident, and so might a potential solution.  

Exciting, right? But organoids can do so much more!

Let’s do a roundup of some of the weird, wild, wonderful, and downright unsettling uses that researchers have come up with for organoids.

Organoids could help speed drug development. By some estimates, 90% of drug candidates fail during human trials. That’s because the preclinical testing happens largely in cells and rodents. Neither is a perfect model. Cells lack complexity. And mice, as we all know, are not humans.

Organoids aren’t humans either, but they come from humans. And they have the advantage of having more complexity than a layer of cells in a dish. That makes them a good model for screening drug candidates. When I wrote about organoids in 2015, one cancer researcher told me that studying cells to understand how an organ functions is like studying a pile of bricks to understand the function of a house. Why not just study the house?

Big Pharma appears to agree. In 2022, Roche hired organoid pioneer Hans Clevers to head its Pharma Research and Early Development division. “My belief is that human organoids will eventually complement everything we are currently doing. I’m convinced, now that I’ve seen how the whole drug development process runs, that one can implement human organoids at every step of the way,” Clevers told Nature.

Organoids are trickier to grow than cell lines, but some companies are working to make the process automated. The Philadelphia-based biotech Vivodyne has developed a robotic system that combines organoids with organ-on-a-chip technology. The system grows 20 kinds of human tissue, each containing 200,000 to 500,000 cells, and then doses them with drugs. These “lab-grown human test subjects” provide “huge amounts of complex human data—larger than you could get from any clinical trial,” said Andrei Georgescu, CEO and cofounder of Vivodyne, in a press release.

According to Vivodyne’s website, the proprietary machines can test 10,000 independent human tissues at a time, “yielding vivarium-scale output.” Vivarium-scale output. I had to roll this phrase around my brain quite a few times before I understood what they meant: the robot provides the same amount of data as a building full of lab mice.

Organoids could help doctors make medical decisions for individual patients. These mini organs can be grown from stem cells, but they can also be grown from adult cells that have been nudged into a stem-like state. That makes it possible to grow organoids from anyone for any number of uses. In cancer patients, for instance, these patient-derived organoids could be used to help figure out the best therapy.

Cystic fibrosis is another example. Many cystic fibrosis therapies are approved to treat people with specific mutations. But for people who have rarer mutations, it’s not clear which therapies will work. Enter organoids.

Doctors take rectal biopsies from people with the disease, use the cells to create personalized intestinal organoids, and then apply different drugs. If a given treatment works, the ion channels open, water rushes in, and the organoids visibly swell. The results of this test have been used to guide the off-label use of these medications. In one recent case, the test allowed a woman with cystic fibrosis to access one of these drugs through a compassionate use program. 

Organoids are also poised to help researchers better understand how our bodies interact with the microbes that surround (and sometimes infect) us. During the Zika health emergency in 2015, researchers used brain organoids to figure out how the virus causes microcephaly and brain malformations. Researchers have also managed to use organoids to grow norovirus, the pathogen responsible for most stomach flus. Human norovirus doesn’t infect mice, and it has proved especially tricky to culture in cells. That’s probably part of the reason we have no therapies for the illness.  

I’ve saved the weirdest and arguably creepiest applications for last. Some researchers are working to leverage the brain’s unparalleled ability to learn by developing brain organoid biocomputers. The current iterations of these biocomputers aren’t doing any high-level thinking. One clump of brain cells in a dish learned to play the video game Pong. Another hybrid biocomputer maybe managed to decode some audio signals from people pronouncing Japanese vowels. The field is still in extremely early stages, and researchers are wary of overhyping the technology. But given where the field wants to go—full-fledged organoid intelligence—it’s not too early to talk about ethical concerns. Could a biocomputer become conscious? Organoids arise from cells taken from an individual. What rights would that person have? Would the biocomputer have rights of its own? And what about rodents that have had brain organoids implanted in them? (Yes, that’s happening too). 

Last year, researchers reported that human organoids implanted in rat brains expanded into millions of neurons and managed to wire themselves into the animal’s brain. When they blew a puff of air over the rat’s whiskers, they could record an electrical signal zipping through the human neurons.

In a 2017 Stat story on efforts to implant human brain organoids into rodents, the late Sharon Begley talked to legal scholar and bioethicist Hank Greely of Stanford University. During their conversation, he invoked the literary classic Frankenstein as a cautionary and relevant tale: “it could be that what you’ve built is entitled to some kind of respect,” he told her.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

In 2023, scientists reported that brain organoids hitched to an electronic chip could perform some very basic speech recognition tasks. Abdullahi Tsanni has the story.

Saima Sidik tells us how organoids created from the uterine lining might reveal the mysteries of menstruation. Here’s her report.

When will we be able to transplant mini lungs, livers, or thyroids into people? Ten years … maybe, said my colleague Jess Hamzelou in this past issue of The Checkup.

From around the web

An Alabama bill passed on Wednesday creates a “legal moat” around embryos. Under the new law, providers and recipients of IVF could not be prosecuted or sued for damaging or destroying embryos. But the law doesn’t answer the central question raised by Alabama courts last week: Are embryos people? (NYT)

More legal news. The Senate homeland security committee passed a bill this week that would block certain Chinese biotechs from conducting business in the US. The aim is to keep them from accessing Americans’ personal health data and genetic information. But some critics have raised supply chain concerns. (Reuters)

Some scientists have expressed concern that too many covid shots could fatigue the immune system and make vaccination less effective. But a man who got a whopping 217 covid vaccines showed no signs of a flagging immune response. (Washington Post)

Buckle up. Norovirus is coming for you. (USA Today)

Small studies showing that ibogaine, a psychedelic derived from tree bark, can treat opioid addiction have renewed interest in this illegal drug. But some researchers question whether it could ever be a feasible therapy. (NYT)

A plan to bring down drug prices could threaten America’s technology boom

8 March 2024 at 05:00

Forty years ago, Kendall Square in Cambridge, Massachusetts, was full of deserted warehouses and dying low-tech factories. Today, it is arguably the center of the global biotech industry. 

During my 30 years in MIT’s Technology Licensing Office, I witnessed this transformation firsthand, and I know it was no accident. Much of it was the direct result of the Bayh-Dole Act, a bipartisan law that Congress passed in 1980. 

The reform enabled world-class universities like MIT and Harvard, both within a couple of miles of Kendall Square, to retain the patent and licensing rights on discoveries made by their scientists—even when federal funds paid for the research, as they did in nearly all labs. Those discoveries, in turn, helped a significant number of biotechnology startups throughout the Boston area launch and grow.

Before Bayh-Dole, the government retained those patent and licensing rights. Yet while federal agencies like the National Institutes of Health heavily funded basic scientific research at universities, they were ill equipped to find private-sector companies interested in licensing and developing promising but still nascent discoveries. That’s because, worried about accusations of favoritism, government agencies were willing to grant only nonexclusive licenses to companies to develop patented technologies. 

Few companies were willing to license technology on a nonexclusive basis. Nonexclusive licenses opened up the possibility that a startup might spend many millions of dollars on product development only to have the government relicense the patent to a rival firm.

As a result, many taxpayer-financed discoveries were never turned into real-world products. Before the law, less than 5% of the roughly 28,000 patents held by the federal government had been licensed for development by private firms.

The bipartisan lawmakers behind Bayh-Dole understood that these misaligned incentives were impeding scientific and technological progress—and hampering economic growth and job creation. They changed the rules so that patents no longer automatically went to the federal government. Instead, universities and medical schools could hold on to their patents and manage the licensing themselves.

In response, research institutions invested heavily in offices like the one I ran at MIT, which are devoted to transferring technology from academia to private-sector companies.

Today, universities and nonprofit research institutions transfer thousands of discoveries each year, resulting in innovations in all manner of technical fields. Many thousands of entrepreneurial companies—often founded by the researchers who made the discoveries in question—have licensed patents stemming from federally funded research. This technology transfer system has helped create millions of jobs.

Google’s search algorithm, for instance, was developed by Sergey Brin and Larry Page with the help of federal grants while they were still PhD students at Stanford. They cofounded Google, licensed their patented algorithm from the school’s technology transfer office, and ultimately built one of the world’s most valuable companies.

All told, the law sparked a national innovation renaissance that continues to this day. In 2002, the Economist dubbed it “possibly the most inspired piece of legislation to be enacted in America over the past half-century.” I consider it so vital that after I retired, I joined the advisory council of an organization devoted to celebrating and protecting it. 

But the efficacy of the Bayh-Dole Act is now under serious threat from a draft framework the Biden administration is currently in the process of finalizing after a months-long public comment period that concluded on February 6.

In an attempt to control drug prices in the US, the administration’s proposal relies on an obscure provision of Bayh-Dole that allows the government to “march in” and relicense patents. In other words, it can take the exclusively licensed patent right from one company and grant a license to a competing firm. 

The provision is designed to allow the government to step in if a company fails to commercialize a federally funded discovery and make it available to the public in a reasonable time frame. But the White House is now proposing that the provision be used to control the ever-rising costs of pharmaceuticals by relicensing brand-name drug patents if they are not offered at a “reasonable” price. 

On the surface, this might sound like a good idea—the US has some of the highest drug prices in the world, and many life-saving drugs are unavailable to patients who cannot afford them. But trying to control drug prices through the march-in provision will be largely ineffective. Many drugs are separately protected by other private patents filed by biotech and pharma companies later in the development process, so relicensing just an early-stage patent will do little to help generate generic alternatives. At the same time, this policy could have an enormous chilling effect on the very beginning of the drug development process, when companies license the initial innovative patent from the universities and research institutions.

If the Biden administration finalizes the draft march-in framework as currently written, it will allow the federal government to ignore licensing agreements between universities and private companies whenever it chooses and on the basis of currently unknown and potentially subjective criteria, such as what constitutes a “reasonable” price. This would make developing new technologies far riskier. Large companies would have ample reason to walk away, and investors in startup companies—which are major players in bringing innovative university technology to market—would be equally reluctant to invest in those firms.

Any patent associated with federal dollars would likely become toxic overnight, since even one cent of taxpayer funding would make the resulting consumer product eligible for march-in on the basis of price. 

What’s more, while the draft framework has been billed as a “drug pricing” policy, it makes no distinction between university discoveries in life sciences and those in any other high-tech field. As a result, investment in IP-driven industries from biotech to aerospace to alternative energy would plummet. Technological progress would stall. And the system of technology transfer established by the Bayh-Dole Act would quickly break down.

Unless the administration withdraws its proposal, the United States will return to the days when the most promising federally backed discoveries never left university labs. Far fewer inventions based on advanced research will be patented, and innovation hubs like the one I watched grow will have no chance to take root.

Lita Nelsen joined the Technology Licensing Office of the Massachusetts Institute of Technology in 1986 and was director from 1992 to 2016. She is a member of the advisory council of the Bayh-Dole Coalition, a group of organizations and individuals committed to celebrating and protecting the Bayh-Dole Act, as well as informing policymakers and the public of its benefits.

The Download: organoid uses, and open source voting machines

8 March 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The many uses of mini-organs

This week, we reported on a team of researchers who managed to grow lung, kidney, and intestinal organoids from fetal cells. Because these tiny 3D cell clusters mimic some of the features of a real, full-size organ, they can provide a sneak peek at how the fetus is developing. That’s something nearly impossible to do with existing tools.

But organoids can do so much more, with uses ranging from the weird, wild, and wonderful to the downright unsettling. Read the full story.

—Cassandra Willyard

This story is from The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

If you’re interested in the wild world of organoids, why not take a look at:

+ Tiny faux organs could crack the mystery of menstruation. Researchers are using organoids to unlock one of the human body’s most mysterious—and miraculous—processes. Read the full story.

+ Human brain cells hooked up to a chip can do speech recognition, showing potential as a new type of hybrid bio-computer.

+ Human brain cells transplanted into baby rats’ brains grow and form connections. When lab-grown clumps of human neurons are transplanted into newborn rats, they grow with the animals. Read the full story.

How open source voting machines could boost trust in US elections

While vendors pitched their latest voting machines in Concord, New Hampshire, this past August, election officials asked every kind of question: How much does the new scanner weigh? Are any of its parts made in China?

The answers weren’t trivial. These machines are a once-in-a-decade purchase and many towns in New Hampshire want to replace their current, shoddy machines. But with what? 

The officials’ first option was to continue with a legacy vendor. The second was to gamble on VotingWorks, a nonprofit with only 17 employees that is at the forefront of the movement to make elections more transparent through its open source approach. But can an idealistic nonprofit really unseat industry juggernauts—and restore faith in democracy along the way? Read the full story.

—Spenser Mestel

A plan to bring down drug prices could threaten America’s technology boom

—Lita Nelsen joined the Technology Licensing Office of the Massachusetts Institute of Technology in 1986 and was director from 1992 to 2016.

Forty years ago, Kendall Square in Cambridge, Massachusetts, was full of deserted warehouses and dying low-tech factories. Today, it is arguably the center of the global biotech industry.

During my 30 years in MIT’s Technology Licensing Office, I witnessed this transformation firsthand, and I know it was no accident. Much of it was the direct result of the Bayh-Dole Act, a bipartisan law that Congress passed in 1980.

The reform enabled world-class universities like MIT and Harvard to retain the rights on discoveries made by their scientists—even when federal funds paid for the research. Those discoveries, in turn, helped a significant number of biotechnology startups throughout the Boston area launch and grow. But the efficacy of the Bayh-Dole Act is now under serious threat. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US Congressional offices are swamped with calls from angry TikTok users
It’s part of a campaign mobilized by TikTok itself to fight another potential ban. (Axios)
+ The company sent push notifications urging users to call their representatives. (The Verge)
+ One office received so many calls that it turned its phones off. (The Information $)

2 Criminals are hacking US doctors’ drug-ordering systems 
To order controlled substances, including fentanyl, and sell them for a profit. (404 Media)
+ Why is it so hard to create new types of pain relievers? (MIT Technology Review)

3 OpenAI’s CTO played a key role in ousting Sam Altman
Mira Murati’s concerns about Altman motivated the board to force him out—shortly before he returned. (NYT $)
+ What’s next for OpenAI. (MIT Technology Review)

4 The White House is betting big on content creators
It hopes influencers will spread the word about President Biden’s State of the Union address. (Wired $)

5 A quantum computing firm claims to have achieved “computational supremacy”
Outside observers aren’t so sure. (New Scientist $)
+ Quantum computing is taking on its biggest challenge: noise. (MIT Technology Review)

6 Jensen Huang’s star is in ascendance 💫
After years in relative obscurity, the Nvidia CEO is stepping into the spotlight. (The Atlantic $)
+ The company’s worth has eclipsed Google’s and Amazon’s. (Vox)

7 Amazon is pressing pause on its international ambitions
It’s got to save cash somehow, so website launches are a logical casualty. (The Information $)

8 Could FTX’s victims get their money back after all?
The company’s lawyers seem to think so—which could reduce SBF’s sentence. (Slate $)

9 Designers made a handbag from NASA’s futuristic material 👜
And it looks pretty cool to boot. (Fast Company $)
+ Future space food could be made from astronaut breath. (MIT Technology Review)

10 This terrifying noise machine is the soundtrack to your nightmares
Making navigating creepy video games an even scarier experience. (The Guardian)
+ A Disney director tried—and failed—to use an AI Hans Zimmer to create a soundtrack. (MIT Technology Review)

Quote of the day

“We’re getting a lot of calls from high schoolers asking what a Congressman is.”

—Taylor Hulsey, a communications director for Florida congressman Vern Buchanan, offers an interesting insight into the age demographics of the TikTok users inundating their representatives with calls to prevent a TikTok ban, the Guardian reports.

The big story

The chip patterning machines that will shape computing’s next act

June 2023

When we talk about computing these days, we tend to talk about software and the engineers who write it. But without the hardware and the physical sciences that enabled their creation—disciplines like optics, materials science, and mechanical engineering—modern computing would have been impossible.

Semiconductor lithography, the manufacturing process responsible for producing computer chips, stands at the center of a geopolitical competition to control the future of computing power. And the speed at which new lithography systems and components are developed will shape not only the speed of computing progress but also the balance of power and profits within the tech industry. Read the full story.

—Chris Miller

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I want to ride my tiny bicycle, I want to ride my bike 🎶
+ Mr Bump is known as Herr Dumpidump in Norwegian, which is frankly adorable.
+ AI imagining luxury homes inspired by great albums? Love, love, love.
+ This short poem is a lovely reminder to make the most of every moment (thanks Charlotte!)

The SEC’s new climate rules were a missed opportunity to accelerate corporate action

8 March 2024 at 14:19

This week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. 

These new regulations were driven by the growing realization that climate risks are financial risks. Global corporations now face climate-related supply chain disruptions. Their physical assets are vulnerable to storms, their workers will be exposed to extreme heat events, and some of their customers may be forced to relocate. There are fossil-fuel assets on their balance sheets that they may never be able to sell, and their business models will be challenged by a rapidly changing planet.

These are not just coal and oil companies. They are utilities, transportation companies, material producers, consumer product companies, even food companies. And investors—you, me, your aunt’s pension—are buying and holding these fossilized stocks, often unknowingly.

Investors, policymakers, and the general public all need clearer, better information on how businesses are accelerating climate change, what they are doing to address those impacts, and what the cascading effects could mean for their bottom line.

The new SEC rules formalize and mandate what has essentially been a voluntary system of corporate carbon governance, now requiring corporations to report how climate-related risks may affect their business.

They also must disclose their “direct emissions” from sources they own or control, as well as their indirect emissions from the generation of “purchased energy,” which generally means their use of electricity and heat. 

But crucially, companies will have to do so only when they determine that the information is financially “material,” giving them considerable latitude over whether they do or don’t provide those details.

The original draft of the SEC rules would have also required corporations to report emissions from “upstream and downstream activities” in their value chains. That generally refers to the associated emissions from their suppliers and customers, which can often make up 80% of a company’s total climate pollution.  
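
To make the scale of that omission concrete, here is a toy calculation with hypothetical company figures; the roughly 80% share is the only number drawn from the reporting above.

```python
# Toy illustration (hypothetical figures) of why dropping the
# value-chain ("scope 3") requirement matters.
scope_1 = 120_000   # tCO2e: direct emissions from owned or controlled sources
scope_2 = 80_000    # tCO2e: purchased electricity and heat
scope_3 = 800_000   # tCO2e: suppliers and customers, often ~80% of the total

total = scope_1 + scope_2 + scope_3
reported_under_final_rule = scope_1 + scope_2   # scope 3 reporting was dropped

print(f"Share of the footprint that could go unreported: {scope_3 / total:.0%}")
# -> Share of the footprint that could go unreported: 80%
```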

The loss of that requirement and the addition of the “materiality” standard both seem attributable to intense pressure from business groups. 

To be sure, these rules should help make it clearer how some companies are grappling with climate change and their contributions to it. Out of legal caution, plenty of businesses are likely to determine that emissions are material.

And clearer information will help accelerate corporate climate action as firms concerned about their reputation increasingly feel pressure from customers, competitors, and some investors to reduce their emissions. 

But the SEC could and should have gone much further. 

After all, the EU’s similar policies are much more comprehensive and stringent. California’s emissions disclosure law, signed this past October, goes further still, requiring both public and private corporations with revenues over $1 billion to report every category of emissions, and then to have this data audited by a third party.

Unfortunately, the SEC rules merely move corporations to the starting line of the process required to decarbonize the economy, at a time when they should already be deep into the race. We know these rules don’t go far enough, because firms already following similar voluntary protocols have shown minimal progress in reducing their greenhouse-gas emissions. 

The disclosure system upon which the SEC rules are based faces two underlying problems that have limited how much and how effectively any carbon accounting and reporting can be put to use. 

First: problems with the data itself. The SEC rules grant firms significant latitude in carbon accounting, allowing them to set different boundaries for their “carbon footprint,” model and measure emissions differently, and even vary how they report their emissions. In aggregate, what we will end up with are corporate reports of the previous year’s partial emissions, without any way to know what a company actually did to reduce its carbon pollution.

Second: limitations in how stakeholders can use this data. As we’ve seen with voluntary corporate climate commitments, the wide variations in reporting make it impossible to compare firms accurately. Or as the New Climate Institute argues, “The rapid acceleration in the volume of corporate climate pledges, combined with the fragmentation of approaches and the general lack of regulation or oversight, means that it is more difficult than ever to distinguish between real climate leadership and unsubstantiated greenwashing.”

Investor efforts to evaluate carbon emissions, decarbonization plans, and climate risks through ESG (environmental, social, and governance) rating schemes have merely produced what some academics call “aggregate confusion.” And corporations have faced few penalties for failing to clearly disclose emissions or even meet their own standards. 

All of which is to say that a new set of SEC carbon accounting and reporting rules that largely replicate the problems with voluntary corporate action, by failing to require consistent and actionable disclosures, isn’t going to drive the changes we need, at the speed we need. 

Companies, investors, and the public require rules that drive changes inside companies and that can be properly assessed from outside them. 

This system needs to track the main sources of corporate emissions and incentivize companies to make real investments in efforts to achieve deep emissions cuts, both within the company and across its supply chain.

The good news is that even though the rules in place are limited and flawed, regulators, regions, and companies themselves can build upon them to move toward more meaningful climate action.

The smartest firms and investors are already going beyond the SEC regulations. They’re developing better systems to track the drivers and costs of carbon emissions, and taking concrete steps to address them: reducing fuel use, building energy-efficient infrastructure, and adopting lower-carbon materials, products, and processes. 

It is now just good business to look for carbon reductions that actually save money.

The SEC has taken an important, albeit flawed, first step in nudging our financial laws to recognize climate impacts and risks. But regulators and corporations need to pick up the pace from here, ensuring that they’re providing a clear picture of how quickly or slowly companies are moving as they take the steps and make the investments needed to thrive in a transitioning economy—and on an increasingly risky planet.

Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.

An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

11 March 2024 at 09:00

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of data necessary to train robots in how to move and reason using artificial intelligence. 

Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem and unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

The new model, called RFM-1, was trained on years of data collected from Covariant’s small fleet of item-picking robots that customers like Crate & Barrel and Bonprix use in warehouses around the world, as well as words and videos from the internet. In the coming months, the model will be released to Covariant customers. The company hopes the system will become more capable and efficient as it’s deployed in the real world. 

So what can it do? In a demonstration I attended last week, Covariant cofounders Peter Chen and Pieter Abbeel showed me how users can prompt the model using five different types of input: text, images, video, robot instructions, and measurements. 

For example, show it an image of a bin filled with sports equipment, and tell it to pick up the pack of tennis balls. The robot can then grab the item, generate an image of what the bin will look like after the tennis balls are gone, or create a video showing a bird’s-eye view of how the robot will look doing the task. 

If the model predicts it won’t be able to properly grasp the item, it might even type back, “I can’t get a good grip. Do you have any tips?” A response could advise it to use a specific number of the suction cups on its arms to give it a better grasp—eight versus six, for example. 
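
Covariant has not published a public API for RFM-1, so the sketch below is only a hypothetical illustration of what a prompt carrying the five input types described above might look like; every field name and value is an assumption, not the company’s interface.

```python
# Hypothetical sketch of a multimodal robot prompt (not Covariant's actual API).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RobotPrompt:
    text: str                                                      # natural-language instruction
    images: list[str] = field(default_factory=list)                # paths to camera frames
    video: Optional[str] = None                                    # optional clip of the scene
    robot_instructions: list[str] = field(default_factory=list)    # low-level commands or waypoints
    measurements: dict[str, float] = field(default_factory=dict)   # e.g. suction cups engaged, bin depth

prompt = RobotPrompt(
    text="Pick up the pack of tennis balls from the bin.",
    images=["bin_overhead.jpg"],
    measurements={"suction_cups_engaged": 8, "bin_depth_cm": 40.0},
)
print(prompt)
```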

This represents a leap forward, Chen told me, in robots that can adapt to their environment using training data rather than the complex, task-specific code that powered the previous generation of industrial robots. It’s also a step toward worksites where managers can issue instructions in human language without concern for the limitations of human labor. (“Pack 600 meal-prep kits for red pepper pasta using the following recipe. Take no breaks!”)

Lerrel Pinto, a researcher who runs the general-purpose robotics and AI lab at New York University and has no ties to Covariant, says that even though roboticists have built basic multimodal robots before and used them in lab settings, deploying one at scale that’s able to communicate in this many modes marks an impressive feat for the company. 

To outpace its competitors, Covariant will have to get its hands on enough data for the robot to become useful in the wild, Pinto told me. Warehouse floors and loading docks are where it will be put to the test, constantly interacting with new instructions, people, objects, and environments. 

“The groups which are going to train good models are going to be the ones that have either access to already large amounts of robot data or capabilities to generate those data,” he says.

Covariant says the model has a “human-like” ability to reason, but it has its limitations. During the demonstration, in which I could see a live feed of a Covariant robot as well as a chat window to communicate with it, Chen invited me to prompt the model with anything I wanted. When I asked the robot to “return the banana to Tote Two,” it struggled with retracing its steps, leading it to pick up a sponge, then an apple, then a host of other items before it finally accomplished the banana task. 

“It doesn’t understand the new concept,” Chen said by way of explanation, “but it’s a good example—it might not work well yet in the places where you don’t have good training data.”

The company’s new model embodies a paradigm shift rippling through the robotics world. Rather than teaching a robot how the world works manually, through instructions like physics equations and code, researchers are teaching it in the same way humans learn: through millions of observations. 

The result “really can act as a very effective flexible brain to solve arbitrary robot tasks,” Chen said. 

The playing field of companies using AI to power more nimble robotic systems is likely to grow crowded this year. Earlier this month, the humanoid-robotics startup Figure AI announced it would be partnering with OpenAI and raised $675 million from tech giants like Nvidia and Microsoft. Marc Raibert, the founder of Boston Dynamics, recently started an initiative to better integrate AI into robotics.  

This means that advancements in machine learning will likely start translating to advancements in robotics. However, some issues remain unresolved. If large language models continue to be trained on millions of words without compensating the authors of those words, perhaps it will be expected that robotics models will also be trained on videos without paying their creators. And if language models hallucinate and perpetuate biases, what equivalents will surface in robotics?

In the meantime, Covariant will push forward, keen to have RFM-1 continually learn and improve. Eventually, the researchers aim to have the robot train on videos that the model itself creates—the type of meta-learning that not only makes my head spin but also sparks concern about what will happen if errors made by the model compound themselves. But with such a hunger for more training data, researchers see it almost as inevitable.

“Training on that will be a reality,” Abbeel says. “If we talk again a half year from now, that’s what we’ll be talking about.”

The Download: rise of the multimodal robots, and the SEC’s new climate rules

11 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

The news: In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of data necessary to train robots in how to move and reason using artificial intelligence.

Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem. They’ve unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

How it works: The new model, called RFM-1, was trained on years of data collected from Covariant’s small fleet of item-picking robots, as well as words and videos from the internet. Users can prompt the model using five different types of input: text, images, video, robot instructions, and measurements. The company hopes the system will become more capable and efficient as it’s deployed in the real world. Read the full story.

—James O’Donnell

The SEC’s new climate rules were a missed opportunity to accelerate corporate action

—Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.

Last week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The British Royal family has been caught up in a photo editing scandal
Photo agencies issued rare kill notices for the heavily manipulated image. (The Verge)
+ The Princess of Wales has admitted to editing the photo. (BBC)
+ The news has sent internet sleuths concerned about the Royal’s whereabouts into overdrive. (New Yorker $)

2 IVF opponents have been waiting for this moment
Following Alabama’s ruling that embryos should be treated as children, the future of IVF in the state is looking increasingly uncertain. (The Atlantic $)
+ Privacy is increasingly under threat across America. (Vox)
+ The first babies conceived with a sperm-injecting robot have been born. (MIT Technology Review)

3 Sam Altman has rejoined OpenAI’s board
He’s been cleared of any wrongdoing by a law firm investigation. (Bloomberg $)
+ Three new female executives have also joined the board. (WP $)

4 Researching AI is seriously expensive
And researchers feel like the biggest players are squeezing them out. (WP $)
+ Companies are turning to AI to help solve internal disputes. (WSJ $)

5 Why Malaysia is emerging as the next great chip hub
After decades spent assembling semiconductors, it’s ready to step into the spotlight. (FT $)
+ Nvidia is keeping a close eye on tech that could be affected by AI. (Insider $)
+ China is experimenting with an AI chatbot for brain surgeons. (Bloomberg $)

6 What even is going viral, anymore?
As the internet becomes more fragmented, it’s becoming harder to track what’s truly trending. (WP $)
+ Gen Z is freaked out by TikTok’s sticky algorithm. (WSJ $)

7 Elon Musk says his AI chatbot is going open source
It’ll join the likes of Meta and France’s Mistral in making its code available to all. (TechCrunch)
+ Unsurprisingly, Elon Musk’s Foundation tends to line the pockets of his own interests. (NYT $)
+ The open-source AI boom is built on Big Tech’s handouts. (MIT Technology Review)

8 It’s time to part ways with oil
Even though the oil industry is the biggest it’s ever been. (Economist $)
+ The world is finally spending more on solar than oil production. (MIT Technology Review)

9 A crypto firm transferred $4.2 million of assets to a reported Russian arms dealer
Jonatan Zimenkov was hit with US sanctions for his role in allegedly assisting the Russian invasion of Ukraine. (The Guardian)
+ Crypto has set its sights on Africa. (Economist $)

10 Would you tuck into plants from a pond? 🌱
They’re crisp and juicy-tasting, apparently. (Wired $)
+ These are the biotech plants you can buy now. (MIT Technology Review)

Quote of the day

“Schools are told to use Chinese phones as well, to support Chinese companies.”

—Nong Jiagui, a teacher in rural Yunnan in China, describes the Chinese government’s sweeping campaign to boost the use of native smartphones to the Financial Times.

The big story

How to spot AI-generated text

December 2022

This sentence was written by an AI—or was it? OpenAI’s chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

Since it was released in November 2022, ChatGPT has been used by millions of people. It has the AI community enthralled, and it is clear the internet is increasingly being flooded with AI-generated text.

We’re in desperate need of ways to differentiate between human- and AI-written text in order to counter potential misuses of the technology. But while labs are racing to develop tools tasked with spotting AI-generated text, they’re not always reliable. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Release your inner music producer with fun beatboxing app Incredibox (thanks Niall!)
+ Can an amethyst survive being coated in molten glass? There’s only one way to find out.
+ Kiki the cockatiel really loves Earth Wind and Fire.
+ Legendary actor Kyle MacLachlan has great taste in films.

VR headsets can be hacked with an Inception-style attack

11 March 2024 at 12:52

In the Christopher Nolan movie Inception, Leonardo DiCaprio’s character uses technology to enter his targets’ dreams to steal information and insert false details into their subconscious. 

A new “inception attack” in virtual reality works in a similar way. Researchers at the University of Chicago exploited a security vulnerability in Meta’s Quest VR system that allows hackers to hijack users’ headsets, steal sensitive information, and—with the help of generative AI—manipulate social interactions. 

The attack hasn’t been used in the wild yet, and the bar to executing it is high, because it requires a hacker to gain access to the VR headset user’s Wi-Fi network. However, it is highly sophisticated and leaves those targeted vulnerable to phishing, scams, and grooming, among other risks. 

In the attack, hackers create an app that injects malicious code into the Meta Quest VR system and then launch a clone of the VR system’s home screen and apps that looks identical to the user’s original screen. Once inside, attackers can see, record, and modify everything the person does with the headset. That includes tracking voice, gestures, keystrokes, browsing activity, and even the user’s social interactions. The attacker can even change the content of a user’s messages to other people. The research, which was shared exclusively with MIT Technology Review, has yet to be peer reviewed.

A spokesperson for Meta said the company plans to review the findings: “We constantly work with academic researchers as part of our bug bounty program and other initiatives.” 

VR headsets have slowly become more popular in recent years, but security research has lagged behind product development, and current defenses against attacks in VR are lacking. What’s more, the immersive nature of virtual reality makes it harder for people to realize they’ve fallen into a trap. 

“The shock in this is how fragile the VR systems of today are,” says Heather Zheng, a professor of computer science at the University of Chicago, who led the team behind the research. 

Stealth attack

The inception attack exploits a loophole in Meta Quest headsets: users must enable “developer mode” to download third-party apps, adjust their headset resolution, or screenshot content, but this mode also allows attackers to gain access to the headset if they are on the same Wi-Fi network as the user. 

Developer mode is supposed to give people remote access for debugging purposes. However, that access can be repurposed by a malicious actor to see what a user’s home screen looks like and which apps are installed. (Attackers can also strike if they are able to access a headset physically or if a user downloads apps that include malware.) With this information, the attacker can replicate the victim’s home screen and applications. 

Then the attacker stealthily injects an app with the inception attack in it. The attack is activated and the VR headset hijacked when unsuspecting users exit an application and return to the home screen. The attack also captures the user’s display and audio stream, which can be livestreamed back to the attacker. 

In this way, the researchers were able to see when a user entered login credentials to an online banking site. Then they were able to manipulate the user’s screen to show an incorrect bank balance. When the user tried to pay someone $1 through the headset, the researchers were able to change the amount transferred to $5 without the user realizing. This is because the attacker can control both what the user sees in the system and what the device sends out. 
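
As a purely defensive illustration (not the researchers’ tooling), the sketch below shows how a user or administrator might check whether any headset is reachable over network ADB, the kind of Wi-Fi debugging access this attack abuses. It assumes the standard Android `adb` command-line tool is installed and on the PATH.

```python
# Defensive sketch: list ADB-visible devices and flag any attached over the
# network rather than USB, since "ip:port" serials indicate Wi-Fi debugging.
import subprocess

def network_attached_devices() -> list[str]:
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True)
    devices = []
    for line in out.stdout.splitlines()[1:]:      # skip the "List of devices attached" header
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "device" and ":" in parts[0]:
            devices.append(parts[0])              # e.g. "192.168.1.23:5555"
    return devices

if __name__ == "__main__":
    wifi_devices = network_attached_devices()
    if wifi_devices:
        print("Headsets reachable over Wi-Fi ADB (consider disabling developer mode):")
        for serial in wifi_devices:
            print(" -", serial)
    else:
        print("No network-attached ADB devices found.")
```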

This banking example is particularly compelling, says Jiasi Chen, an associate professor of computer science at the University of Michigan, who researches virtual reality but was not involved in the research. The attack could probably be combined with other malicious tactics, such as tricking people to click on suspicious links, she adds. 

The inception attack can also be used to manipulate social interactions in VR. The researchers cloned Meta Quest’s VRChat app, which allows users to talk to each other through their avatars. They were then able to intercept people’s messages and respond however they wanted. 

Generative AI could make this threat even worse because it allows anyone to instantaneously clone people’s voices and generate visual deepfakes, which malicious actors could then use to manipulate people in their VR interactions, says Zheng. 

Twisting reality

To test how easily people can be fooled by the inception attack, Zheng’s team recruited 27 volunteer VR experts. The participants were asked to explore applications such as a game called Beat Saber, where players control light sabers and try to slash beats of music that fly toward them. They were told the study aimed to investigate their experience with VR apps. Without their knowledge, the researchers launched the inception attack on the volunteers’ headsets. 

The vast majority of participants did not suspect anything. Out of 27 people, only 10 noticed a small “glitch” when the attack began, but most of them brushed it off as normal lag. Only one person flagged some kind of suspicious activity. 

There is no way to authenticate what you are seeing once you go into virtual reality, and the immersiveness of the technology makes people trust it more, says Zheng. This has the potential to make such attacks especially powerful, says Franzi Roesner, an associate professor of computer science at the University of Washington, who studies security and privacy but was not part of the study.

The best defense, the team found, is restoring the headset’s factory settings to remove the app. 

The inception attack gives hackers many different ways to get into the VR system and take advantage of people, says Ben Zhao, a professor of computer science at the University of Chicago, who was part of the team doing the research. But because VR adoption is still limited, there’s time to develop more robust defenses before these headsets become more widespread, he says. 

LLMs become more covertly racist with human intervention

11 March 2024 at 14:35

Since their inception, it’s been clear that large language models like ChatGPT absorb racist views from the millions of pages of the internet they are trained on. Developers have responded by trying to make them less toxic. But new research suggests that those efforts, especially as models get larger, are only curbing racist views that are overt, while letting more covert stereotypes grow stronger and better hidden.

Researchers asked five AI models—including OpenAI’s GPT-4 and older models from Facebook and Google—to make judgments about speakers who used African-American English (AAE). The race of the speaker was not mentioned in the instructions.

Even when a sentence written in AAE and its Standard American English (SAE) counterpart had the same meaning, the models were more likely to apply adjectives like “dirty,” “lazy,” and “stupid” to speakers of AAE than to speakers of SAE. The models also associated speakers of AAE with less prestigious jobs (or didn’t associate them with having a job at all), and when asked to pass judgment on a hypothetical criminal defendant, they were more likely to recommend the death penalty. 
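
To make the experimental setup concrete, here is a minimal sketch of a matched-pair probe in the same spirit; it is not the authors’ code, and the model, judgment template, and example sentences are stand-ins. It asks a masked language model how strongly it associates trait adjectives with the same statement written in AAE and in SAE.

```python
# Minimal matched-pair probe (illustrative only, not the study's exact method).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

PAIRS = [
    # (AAE rendering, SAE rendering) -- toy examples for illustration
    ("I be so happy when I wake up from a bad dream cus they be feelin too real",
     "I am so happy when I wake up from a bad dream because they feel too real"),
]
TRAITS = ["lazy", "dirty", "stupid", "smart", "intelligent", "brilliant"]

def trait_scores(sentence: str) -> dict[str, float]:
    """Probability the model assigns each trait adjective in a judgment template."""
    prompt = f'A person who says "{sentence}" is [MASK].'
    return {r["token_str"]: r["score"] for r in fill(prompt, targets=TRAITS)}

for aae, sae in PAIRS:
    print("AAE:", trait_scores(aae))
    print("SAE:", trait_scores(sae))
```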

An even more notable finding may be a flaw the study pinpoints in the ways that researchers try to solve such biases. 

To purge models of hateful views, companies like OpenAI, Meta, and Google use feedback training, in which human workers manually adjust the way the model responds to certain prompts. This process, often called “alignment,” aims to recalibrate the millions of connections in the neural network and get the model to conform better with desired values. 

The method works well to combat overt stereotypes, and leading companies have employed it for nearly a decade. If users prompted GPT-2, for example, to name stereotypes about Black people, it was likely to list “suspicious,” “radical,” and “aggressive,” but GPT-4 no longer responds with those associations, according to the paper.

However, the method fails on the covert stereotypes that researchers elicited when using African-American English in their study, which was published on arXiv and has not been peer reviewed. That’s partially because companies have been less aware of dialect prejudice as an issue, they say. It’s also easier to coach a model not to respond to overtly racist questions than it is to coach it not to respond negatively to an entire dialect.

“Feedback training teaches models to conceal their racism,” says Valentin Hofmann, a researcher at the Allen Institute for AI and a coauthor on the paper. “But dialect prejudice operates at a deeper level.”

Avijit Ghosh, an ethics researcher at Hugging Face who was not involved in the research, says the finding calls into question the approach companies are taking to solve bias.

“This alignment—where the model refuses to spew racist outputs—is nothing but a flimsy filter that can be easily broken,” he says. 

The covert stereotypes also strengthened as the size of the models increased, researchers found. That finding offers a potential warning to chatbot makers like OpenAI, Meta, and Google as they race to release larger and larger models. Models generally get more powerful and expressive as the amount of their training data and the number of their parameters increase, but if this worsens covert racial bias, companies will need to develop better tools to fight it. It’s not yet clear whether adding more AAE to training data or making feedback efforts more robust will be enough.

“This is revealing the extent to which companies are playing whack-a-mole—just trying to hit the next bias that the most recent reporter or paper covered,” says Pratyusha Ria Kalluri, a PhD candidate at Stanford and a coauthor on the study. “Covert biases really challenge that as a reasonable approach.”

The paper’s authors use particularly extreme examples to illustrate the potential implications of racial bias, like asking AI to decide whether a defendant should be sentenced to death. But, Ghosh notes, the questionable use of AI models to help make critical decisions is not science fiction. It happens today. 

AI-driven translation tools are used when evaluating asylum cases in the US, and crime prediction software has been used to judge whether teens should be granted probation. Employers who use ChatGPT to screen applications might be discriminating against candidate names on the basis of race and gender, and if they use models to analyze what an applicant writes on social media, a bias against AAE could lead to misjudgments. 

“The authors are humble in claiming that their use cases of making the LLM pick candidates or judge criminal cases are constructed exercises,” Ghosh says. “But I would claim that their fear is spot on.”

How rerouting planes to produce fewer contrails could help cool the planet

12 March 2024 at 06:00

A handful of studies have concluded that making minor adjustments to the routes of a small fraction of airplane flights could meaningfully reduce global warming. Now a new paper finds that these changes could be pretty cheap to pull off as well.

The common climate concern when it comes to airlines is that planes produce a lot of carbon dioxide emissions as they burn fuel. But jets also release heat, water vapor, and particulate matter that can produce thin clouds in the sky, known as “contrails,” in particularly cold, humid, icy parts of the atmosphere.

When numerous flights pass through such areas, these condensation trails can form cirrus clouds that absorb radiation escaping from the surface, acting as blankets floating above the Earth. 

This cirrus-forming phenomenon could account for around 35% of aviation’s total contribution to climate change—or about 1% to 2% of overall global warming, according to some estimates.

A small fraction of overall flights, between 2% and 10%, create about 80% of the contrails. So the growing hope is that simply rerouting those flights could significantly reduce the effect, presenting a potentially high-leverage, low-cost, and fast way of easing warming. 

Last summer, Breakthrough Energy, Google Research, and American Airlines announced some promising results from a research collaboration, as first reported in the New York Times. They employed satellite imagery, weather data, software models, and AI prediction tools to steer pilots over or under areas where their planes would be likely to produce contrails. American Airlines used these tools in 70 test flights over six months, and subsequent satellite data indicated that they reduced the total length of contrails by 54%, relative to flights that weren’t rerouted.
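
As a rough illustration of the kind of screening such tools perform, the sketch below flags flight-plan waypoints that sit in cold, ice-supersaturated air, where persistent contrails tend to form. The thresholds and forecast values are simplified assumptions for illustration, not the criteria or data used by Breakthrough Energy, Google Research, or American Airlines.

```python
# Sketch: flag waypoints likely to produce persistent contrails, using a coarse
# rule of thumb (very cold air that is supersaturated with respect to ice).
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    altitude_ft: float
    temperature_c: float   # forecast ambient air temperature
    rh_ice_pct: float      # forecast relative humidity with respect to ice

def likely_persistent_contrail(wp: Waypoint,
                               max_temp_c: float = -40.0,
                               min_rh_ice_pct: float = 100.0) -> bool:
    return wp.temperature_c <= max_temp_c and wp.rh_ice_pct >= min_rh_ice_pct

route = [
    Waypoint("A", 36_000, -52.0, 108.0),   # hypothetical forecast values
    Waypoint("B", 36_000, -48.0, 85.0),
    Waypoint("C", 38_000, -55.0, 112.0),
]

flagged = [wp.name for wp in route if likely_persistent_contrail(wp)]
print("Segments to consider rerouting around:", flagged)   # -> ['A', 'C']
```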

There would, of course, be costs to implementing such a strategy. It generally requires more fuel to steer clear of these areas, which also means the flights would produce more greenhouse-gas emissions (more on that wrinkle in a moment).

More fuel also means greater expenses, and airlines aren’t likely to voluntarily implement such measures if it’s not relatively affordable. 

A new study published in Environmental Research: Infrastructure and Sustainability explored this issue by coupling commercial tools for optimizing flight trajectories with models that simulated nearly 85,000 American Airlines flights, both domestic and international, under various weather conditions last summer and this winter.

In those simulations, the researchers found that reducing the warming effect of contrails by 73% increased fuel costs by just 0.11% and overall costs by 0.08%, when averaged across those tens of thousands of flights. (Only about 14% of the flights needed to be adjusted to avoid forming warming contrails in the simulations.)

“Obviously there’s a trade-off between added fuel and reductions in harmful contrails; that’s real, and it’s one of the biggest challenges to this climate solution,” says Marc Shapiro, a coauthor of the paper and director of the contrails team at Breakthrough Energy, an organization founded by Bill Gates to spur innovation in clean energy and address climate change. “But what we’re showing in this paper is that the added fuel burn is a lot less than we expected.”

Airlines could also use such a commercial trajectory tool to make decisions that balance their financial and climate goals, he says. For example, they could allow some contrail-forming flights when the cost of adjusting the routes would be especially high.

Other research groups and airlines are also evaluating this concept through projects, including a collaboration between Delta and MIT’s Department of Aeronautics and Astronautics. (MIT Technology Review is owned by MIT but is editorially independent.)

There are other approaches to reducing contrail formation, including switching to different types of fuels or continuing to develop more capable electric or hydrogen-powered aircraft. 

But the studies to date suggest that rerouting flights could be one of the simplest ways of substantially reducing contrail-related warming. 

“So far, it’s looking very promising that it will be the cheapest, fastest way to reduce the climate impacts of aviation,” says Steven Barrett, head of the MIT department. 

Finding any way to make near-term progress on aviation is all the more important since it’s still likely to take a long time to develop and implement scalable, affordable ways of addressing the emissions from heavy fuel use, he adds.

But it will take more modeling studies and real-world experiments to demonstrate that “contrail avoidance,” as the approach is known, works as effectively as hoped.

For one thing, Barrett says, researchers still need to test, refine, and engineer systems that can reliably predict, with enough time to reroute planes, when and where contrails will form—all amid shifting weather conditions.

There are also some thorny complications that still need to be resolved, like the fact that cirrus clouds can also reduce warming by reflecting away short-wave radiation from the sun.

The loss of this cooling effect would have to be tallied into any calculation of the net benefit—or, perhaps, avoided. For instance, Shapiro says the initial strategy might be to reroute flights only during the early evening and night, which would eliminate the sunlight-reflecting complication. 

In addition, any decreased warming from contrail avoidance must more than offset the added warming from increased greenhouse-gas pollution. This becomes a trickier question when we weigh whether we care more about short-term or long-term warming: not producing contrails delivers an immediate benefit, but any added carbon dioxide can take decades to exert its full warming effect and may persist for hundreds to thousands of years.

The new study, at least, found that even when additional greenhouse gases are taken into account, reducing contrails cuts net warming over both a 20-year and a 100-year timeline, though less so in the latter scenario. But that, too, would need to be evaluated further through additional studies.

Yet another open question is whether airspace constraints and traffic bottlenecks might limit airlines’ ability to regularly reroute the necessary flights.

As a next step, Breakthrough Energy hopes to work with airlines to explore some of these questions by scaling up real-world flights and observations. 

But even if subsequent studies do continue to indicate that this is a fast, affordable way to ease warming, it’s still not clear whether airlines will do it if regulators don’t force them to. While the fuel costs to make this work may be tiny in percentage terms, they could add up quickly across a fleet and over time.

Still, the study’s authors assert that they’ve shown contrail avoidance could deliver “massive immediate climate benefits at a lower price than most other climate interventions.” In their view, this approach “should become one of aviation’s primary focuses in the coming years.”

Why we need better defenses against VR cyberattacks

12 March 2024 at 06:14

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I remember the first time I tried on a VR headset. It was the first Oculus Rift, and I nearly fainted after experiencing an intense but visually clumsy VR roller-coaster. But that was a decade ago, and the experience has gotten a lot smoother and more realistic since. That impressive level of immersiveness could be a problem, though: it makes us particularly vulnerable to cyberattacks in VR. 

I just published a story about a new kind of security vulnerability discovered by researchers at the University of Chicago. Inspired by the Christopher Nolan movie Inception, the attack allows hackers to create an app that injects malicious code into the Meta Quest VR system. Then it launches a clone of the home screen and apps that looks identical to the user’s original screen. Once inside, attackers are able to see, record, and modify everything the person does with the VR headset, tracking voice, motion, gestures, keystrokes, browsing activity, and even interactions with other people in real time. New fear = unlocked. 

The findings are pretty mind-bending, in part because the researchers’ unsuspecting test subjects had absolutely no idea they were under attack. You can read more about it in my story here.

It’s shocking to see how fragile and insecure these VR systems are, especially considering that Meta’s Quest headset is the most popular such product on the market, used by millions of people. 

But perhaps more unsettling is how attacks like this can happen without our noticing, and can warp our sense of reality. Past studies have shown how quickly people start treating things in AR or VR as real, says Franzi Roesner, an associate professor of computer science at the University of Washington, who studies security and privacy but was not part of the study. Even in very basic virtual environments, people start stepping around objects as if they were really there. 

VR has the potential to put misinformation, deception, and other problematic content on steroids because it exploits people’s brains and deceives them physiologically and subconsciously, says Roesner: “The immersion is really powerful.”  

And because VR technology is relatively new, people aren’t vigilantly looking out for security flaws or traps while using it. To test how stealthy the inception attack was, the University of Chicago researchers recruited 27 volunteer VR experts to experience it. One of the participants was Jasmine Lu, a computer science PhD researcher at the University of Chicago. She says she has been using, studying, and working with VR systems regularly since 2017. Despite that, the attack took her and almost all the other participants by surprise. 

“As far as I could tell, there was not any difference except a bit of a slower loading time—things that I think most people would just translate as small glitches in the system,” says Lu.  

One of the fundamental issues people may have to deal with in using VR is whether they can trust what they’re seeing, says Roesner. 

Lu agrees. She says that with online browsers, we have been trained to recognize what looks legitimate and what doesn’t, but with VR, we simply haven’t. People do not know what an attack looks like. 

This is related to a growing problem we’re seeing with the rise of generative AI, and even with text, audio, and video: it is notoriously difficult to distinguish real from AI-generated content. The inception attack shows that we need to think of VR as another dimension in a world where it’s getting increasingly difficult to know what’s real and what’s not. 

As more people use these systems, and more products enter the market, the onus is on the tech sector to develop ways to make them more secure and trustworthy. 

The good news? While VR technologies are commercially available, they’re not all that widely used, says Roesner. So there’s time to start beefing up defenses now. 


Now read the rest of The Algorithm

Deeper Learning

An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of data necessary to train robots in how to move and reason using artificial intelligence. Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem and unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

Multimodal prompting: The new model, called RFM-1, was trained on years of data collected from Covariant’s small fleet of item-picking robots that customers like Crate & Barrel and Bonprix use in warehouses around the world, as well as words and videos from the internet. Users can prompt the model using five different types of input: text, images, video, robot instructions, and measurements. The company hopes the system will become more capable and efficient as it’s deployed in the real world. Read more from James O’Donnell here

Bits and Bytes

You can now use generative AI to turn your stories into comics
By pulling together several different generative models into an easy-to-use package controlled with the push of a button, Lore Machine heralds the arrival of one-click AI. (MIT Technology Review)

A former Google engineer has been charged with stealing AI trade secrets for Chinese companies
The race to develop ever more powerful AI systems is becoming dirty. The engineer downloaded confidential files about Google’s supercomputing data centers to his personal Google Cloud account while working for Chinese companies. (US Department of Justice)  

There’s been even more drama in the OpenAI saga
This story truly is the gift that keeps on giving. OpenAI has clapped back at Elon Musk and his lawsuit, which claims the company has betrayed its original mission of doing good for the world, by publishing emails showing that Musk was keen to commercialize OpenAI too. Meanwhile, Sam Altman is back on the OpenAI board after his temporary ouster, and it turns out that chief technology officer Mira Murati played a bigger role in the coup against Altman than initially reported. 

A Microsoft whistleblower has warned that the company’s AI tool creates violent and sexual images, and ignores copyright
Shane Jones, an engineer who works at Microsoft, says his tests with the company’s Copilot Designer gave him concerning and disturbing results. He says the company acknowledged his concerns, but it did not take the product off the market. Jones then sent a letter explaining these concerns to the Federal Trade Commission, and Microsoft has since started blocking some terms that generated toxic content. (CNBC)

Silicon Valley is pricing academics out of AI research
AI research is eye-wateringly expensive, and Big Tech, with its huge salaries and computing resources, is draining academia of top talent. This has serious implications for the technology, causing it to be focused on commercial uses over science. (The Washington Post)

Building a data-driven health-care ecosystem

The application of AI to health-care data has the potential to align the U.S. health-care system with quality care and positive health outcomes. But AI for health care hasn’t reached its full potential. One reason is the inconsistent quality and integrity of the data that AI depends on. The industry—hospitals, providers, insurers, and administrators—uses diverse systems. The resulting data can be difficult to share because of incompatibility, privacy regulations, and the unstructured nature of much of the data. The data can carry errors, omissions, and duplications, making it difficult to access, analyze, and use. Even the best available data can embed bias: the data used to train AI models can reinforce the underrepresentation of historically marginalized populations. The growth of AI in all industries means data quality is increasingly vital.

While AI-driven innovation is still growing, the U.S. continues to spend more than twice as much as the average high-income country on health care, while its health outcomes are worsening: the latest data from the Centers for Disease Control and Prevention’s National Center for Health Statistics indicates that U.S. life expectancy dropped for the second year in a row in 2021.

To spark innovation by identifying gaps and pain points in the employer-based health-care system, JPMorgan Chase launched Morgan Health in 2021. Morgan Health’s chief technology officer of corporate responsibility, Tiffany West Polk, says Morgan Health is driven to improve health outcomes, affordability, and equity, with data at its foundation. Gaining insights from large data streams means optimizing analytical platforms and ensuring data remains secure while also staying compliant with HIPAA and Health Resources and Services Administration (HRSA) requirements, she says.

Currently, Polk says, the U.S. health-care system seems to be “quite stuck” in terms of keeping health-care quality and positive outcomes in line with rising costs.

  • “If you look across the broader U.S. environment in particular, employer sponsored insurance is a huge part of the health-care net for the United States, and employers make significant financial investment to provide health benefits to their employees. It’s one of the main things that people look at when they’re looking across an employer landscape and thinking about who they want to work for.”

Investing in new ways to provide health care

Nearly 160 million people in the U.S. have employer-sponsored health insurance as of 2022, according to health-care policy research non-profit KFF (formerly the Kaiser Family Foundation). JPMorgan Chase launched Morgan Health because of its focus on improving employer-sponsored health care, not least for its 165,000 employees.

Morgan Health has invested $130 million in capital during the past 18-plus months in five innovative health-care companies: advanced primary care provider Vera Whole Health; health-care data analytics specialist Embold Health; Kindbody, a fertility clinic network and global family-building benefits provider; LetsGetChecked, which creates home-monitoring clinical tools; and Centivo, which provides health care plans for self-insured employers.

All of these companies offer new approaches to conventional employer-sponsored health care to deliver a higher standard of care. Morgan Health’s collaboration with these enterprises will examine how their approaches change patient outcomes, health-care equity, and affordability, and how to scale their successes.

“Many Americans today face real barriers to receiving high-quality, affordable, and equitable health care, even with employer-sponsored insurance,” Polk says. This calls for breaking the paradigm of delivery-incentivized health care, she says, which rewards providers for delivering services, but pays insufficient attention to outcomes.  

  • “We have a model today where our health-care providers are incentivized based on the number of patients they see or the number of services they perform. What that means is that they’re not incentivized based on improvements in patients’ health and well-being. And so when you have a model that thinks volume versus value, those challenges then serve to compound the disparities that we have. And that then also means that those who have employer-sponsored insurance are also similarly challenged.”

For Morgan Health, AI and machine learning (ML) will be a key to problem-solving with health-care technology, Polk says. AI is ubiquitous across industries, and is the go-to when we think about innovation, she says, but the hype can mean we forget about the importance of data accessibility and quality.

Polk says solving this data challenge makes this an exciting and transformational time to be a chief technology officer and a technologist. The next stage of evolution in health care can’t proceed without better data, Polk says, and this is what the data and analytics team at Morgan Health are addressing.

  • “[AI] has become so ubiquitous in terms of how we think about everything. And we think that it is the thing that’s going to fix anything and everything in technology. And it has become so ubiquitous and so the go-to when you think about innovation, that I think that sometimes, there’s this way in which people kind of forget about what AI actually is underneath the covers.”

Garnering data-based insights

To strengthen health-care data, the industry is moving increasingly toward standard electronic health records (EHRs) for patients. A 2023 Deloitte study says use of EHRs and health information exchanges (HIEs) is growing rapidly, with organizations building data lakes and using AI to combine and cleanse data. These measures provide a “strong digital backbone” for building connections between hospitals, primary care centers, and payment tools, the study says, and this should help reduce errors, unnecessary readmissions, and duplicate testing.

The U.S. Department of Health and Human Services (HHS) is also building a network for digital connection in the health-care industry, to allow data to flow among multiple providers and geographies. Its Office of the National Coordinator for Health Information Technology (ONC) announced in December 2023 that its national health data exchange—the Trusted Exchange Framework and Common Agreement (TEFCA)—is operational. The exchange connects Qualified Health Information Networks, which it certifies and onboards, with standard policies and technical requirements.

Polk says Morgan Health is improving foundations to incentivize better outcomes for patients. Morgan Health’s work can create standards—grounded in data—that incentivize better performance, which can then be shared across the employer-sponsored insurance network, and among broader communities. Using AI features such as metadata tagging (algorithms that can group and label data that has a common purpose), she says, “is one way health-care companies can simplify tasks and open up more time for providing care.”

  • “If you do your data ingestion right, if you cleanse your data right, if you make sure that your metadata tagging is correct, and then you are very aware of the way in which your algorithms have been biased in the past, you can be aware of that so that you can make sure that your algorithms are inclusive moving forward.”

“I think the most important thing is incentivizing our health-care partners who provide for our employees to meaningfully improve health-care quality, equity, and affordability through incentivizing outcomes, not incentivizing volume, not incentivizing visits, but really incentivizing outcomes,” Polk says.

This article is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: hacking VR headsets, and contrails to cool the planet

12 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

VR headsets can be hacked with an Inception-style attack

In the Christopher Nolan movie Inception, Leonardo DiCaprio’s character uses technology to enter his targets’ dreams to steal information and insert false details into their subconscious.

A new “inception attack” in virtual reality works in a similar way. Researchers at the University of Chicago exploited a security vulnerability in Meta’s Quest VR system that allows hackers to hijack users’ headsets, steal sensitive information, and—with the help of generative AI—manipulate social interactions. 

The attack hasn’t been used in the wild yet, and the bar to executing it is high, because it requires a hacker to gain access to the VR headset user’s Wi-Fi network. However, it is highly sophisticated and leaves those targeted vulnerable to phishing, scams, and grooming, among other risks. Read the full story.

—Melissa Heikkilä

You can read more about why we need to defend against VR cyberattacks in the latest edition of The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

How rerouting planes to produce fewer contrails could help cool the planet

What’s happening: A handful of studies have concluded that making minor adjustments to the routes of a small fraction of airplane flights could meaningfully reduce global warming. Now a new paper finds that these changes could be pretty cheap to pull off as well.

How it works: Jets release heat, water vapor, and particulate matter that can produce thin clouds in the sky, known as “contrails.” When numerous flights pass through the same areas, these contrails can form clouds that absorb radiation escaping from the surface, acting as blankets floating above the Earth.

Why it matters: A small fraction of overall flights, between 2% and 10%, create about 80% of the contrails. So the growing hope is that simply rerouting those flights could significantly reduce the effect, presenting a potentially high-leverage, low-cost, and fast way of easing warming. Read the full story.

—James Temple

LLMs become more covertly racist with human intervention

The news: Since their inception, it’s been clear that large language models like ChatGPT absorb racist views from the millions of pages of the internet they are trained on. Developers have responded by trying to make them less toxic. But new research suggests that those efforts are only curbing racist views that are overt, while letting more covert stereotypes grow stronger and better hidden. And it’s a problem that grows as these models get bigger and bigger.

How they did it: Researchers asked five AI models to make judgments about speakers who used African-American English (AAE). The race of the speaker was not mentioned in the instructions. Even when sentences in AAE and Standard American English had the same meaning, the models were more likely to apply adjectives like “dirty,” “lazy,” and “stupid” to speakers of AAE than to speakers of Standard American English. Read the full story.

—James O’Donnell

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The movement to ban TikTok is gaining momentum
A backlash from US users appears to be making politicians more determined to ban it. (Vox)
+ TikTok is far from the first Chinese company the US has sought to punish. (WSJ $)
+ The app has become a major political hot potato. (Bloomberg $)

2 South Korea’s chipmaking giants have stopped selling used equipment
Samsung and SK Hynix want to avoid falling foul of US sanctions. (FT $)
+ For its part, the US is now backing chip production in the Philippines. (Bloomberg $)
+ Why China is betting big on chiplets. (MIT Technology Review)

3 Midjourney has banned Stability AI workers from its service
It claims the rival workers caused a systems outage trying to scrape Midjourney’s data. (The Verge)

4 Modern cars are reporting your driving behavior to insurers
Their data is used to draw up sophisticated risk profiles—and increase the cost of insurance. (NYT $)

5 Meet the AI doom mongers 💀
A burgeoning Bay Area community is seeking answers about what to believe. (New Yorker $)
+ How existential risk became the biggest meme in AI. (MIT Technology Review)

6 This tiny deep sea drone is mapping Australia’s coral reefs 🐟
Exploring the ocean is a huge challenge. These machines are making it easier. (IEEE Spectrum)
+ The robots are coming. And that’s a good thing. (MIT Technology Review)

7 How microgravity could help to produce better medicines
Near-weightlessness is a great way to improve the crystal formation essential to manufacturing medications. (WSJ $)

8 This robot is modeled on a long-extinct sea creature
The pleurocystitid existed around 450 million years ago. (Ars Technica)

9 China’s real estate agents are livestreaming available properties
And home sales in niche tourist town Xishuangbanna are booming as a result. (Rest of World)
+ Deepfakes of Chinese influencers are livestreaming 24/7. (MIT Technology Review)

10 Doomscrolling is out—Downpour is in
The simple app allows you to build games starring your own pictures. (The Guardian)
+ I used generative AI to turn my story into a comic—and you can too. (MIT Technology Review)

Quote of the day

“It’s increasingly more of a pond than an ocean.”

—Ekaterina Almasque, a general partner at venture capital firm OpenOcean, tells Reuters how companies are locked in fierce competition to hire AI talent from a dwindling pool of qualified candidates.

The big story

The quest to learn if our brain’s mutations affect mental health

August 2021

Scientists have so far been unable to link brain disorders, such as autism and Alzheimer’s disease, to an identifiable gene.

But a University of California, San Diego study published in 2001 suggested a different path. What if it wasn’t a single faulty gene—or even a series of genes—that always caused cognitive issues? What if it could be the genetic differences between cells?

The explanation had seemed far-fetched, but researchers are belatedly starting to take it seriously. Read the full story.

—Roxanne Khamsi

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ That mysterious sound in the GoldenEye video game soundtrack has finally been explained.
+ Ice cream under the microscope looks seriously weird. 🍦
+ Of course Jon Hamm loves a Bloody Mary during a flight.
+ Seismic alien waves? Err, it was probably a passing truck, sorry.

Let’s not make the same mistakes with AI that we made with social media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else. 

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by hundreds of millions of dollars, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible. 

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that every user has more than 2,200 different companies spying on their web activities on its behalf. 

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform. 

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation. 

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT. 

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI. 

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

The Download: what social media can teach us about AI

13 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Let’s not make the same mistakes with AI that we made with social media

Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. 

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google is restricting its Gemini chatbot from answering election queries 
Out of an “abundance of caution.” (The Guardian)
+ Gemini will recommend users try Google Search for election questions instead. (Reuters)
+ Three technology trends shaping 2024’s elections. (MIT Technology Review)

2 Kate Middleton conspiracy theories are gaining traction
Everyone is rapidly becoming a royal truther, much to the Palace’s dismay. (The Atlantic $)
+ Here’s a list of everything that was wrong with the infamous photo. (Wired $)
+ The scandal reflects our increasing mistrust of what’s shared online. (The Verge)

3 The pressure is mounting on TikTok to find new owners
It might be the most logical way to avoid an outright ban in the States. (Economist $)
+ It’s the undisputed social media success story of the past few years. (WP $)

4 Bitcoin fever is officially back, baby
But we still don’t know what it’s worth, exactly. (Wired $)
+ The cryptocurrency has passed yet another milestone. (Cointelegraph)

5 AI computing costs an arm and a leg
So the UK is launching a new program to try and slash costs. (FT $)

6 Donald Trump approached Elon Musk about buying Truth Social
It appears the pair have stayed in closer contact than was previously known. (WP $)
+ Trump has admitted helping the billionaire in unspecified ways. (CNBC)

7 The simple solution to combat the junkification of the internet
Prioritizing human creations is one way to cut through the AI-generated spam. (The Atlantic $)
+ How to fix the internet. (MIT Technology Review)

8 A nurse wore Apple’s Vision Pro headset during a spinal surgery operation
It helped them prepare and select the right assistive tools. (Insider $)
+ These minuscule pixels are poised to take augmented reality by storm. (MIT Technology Review)

9 Gen Z doesn’t want to pay for dating apps
And who can blame them? (NYT $)
+ Bumble is considering dropping the requirement for women to message first. (Insider $)

10 Inside the US Patent Office’s wonderfully weird collection
For decades, inventors were required to submit wacky models with their patent ideas. (New Yorker $)

Quote of the day

“Close your eyes and think about something that makes you happy.”

—Amazon instructs its fulfillment center workers to practice mindfulness during shifts, 404 Media reports.

The big story

How sounds can turn us on to the wonders of the universe

June 2023

Astronomy should, in principle, be a welcoming field for blind researchers. But across the board, science is full of charts, graphs, databases, and images that are designed to be seen.

So researcher Sarah Kane, who is legally blind, was thrilled three years ago when she encountered a technology known as sonification, designed to transform information into sound. Since then she’s been working with a project called Astronify, which presents astronomical information in audio form. 

For millions of blind and visually impaired people, sonification could be transformative—opening access to education, to once unimaginable careers, and even to the secrets of the universe. Read the full story.

—Corey S. Powell

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s time to get into metal detecting (no really, it is!)
+ Meanwhile, over on Mars
+ A couple in the UK decided to get married on a moving train, because why not?
+ Even giant manta rays need a little TLC every now and again.

An AI that can play Goat Simulator is a step toward more useful machines

13 March 2024 at 10:00

Fly, goat, fly! A new AI agent from Google DeepMind can play different games, including ones it has never seen before such as Goat Simulator 3, a fun action game with exaggerated physics. Researchers were able to get it to follow text commands to play seven different games and move around in three different 3D research environments. It’s a step toward more generalized AI that can transfer skills across multiple environments.  

Google DeepMind has had huge success developing game-playing AI systems. Its system AlphaGo, which beat top professional player Lee Sedol at the game Go in 2016, was a major milestone that showed the power of deep learning. But unlike earlier game-playing AI systems, which mastered only one game or could only follow single goals or commands, this new agent is able to play a variety of different games, including Valheim and No Man’s Sky. It’s called SIMA, an acronym for “scalable, instructable, multiworld agent.”

In training AI systems, games are a good proxy for real-world tasks. “A general game-playing agent could, in principle, learn a lot more about how to navigate our world than anything in a single environment ever could,” says Michael Bernstein, an associate professor of computer science at Stanford University, who was not part of the research. 

“One could imagine one day rather than having superhuman agents which you play against, we could have agents like SIMA playing alongside you in games with you and with your friends,” says Tim Harley, a research engineer at Google DeepMind who was part of the team that developed the agent. 

The team trained SIMA on lots of examples of humans playing video games, both individually and collaboratively, alongside keyboard and mouse input and annotations of what the players did in the game, says Frederic Besse, a research engineer at Google DeepMind.  

Then they used an AI technique called imitation learning to teach the agent to play games as humans would. SIMA can follow 600 basic instructions, such as “Turn left,” “Climb the ladder,” and “Open the map,” each of which can be completed in about 10 seconds or less.
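To make the imitation-learning idea concrete, here is a minimal behavioral-cloning sketch: a small policy network learns to predict the action a human demonstrator took, given embeddings of the current screen and the text instruction. This is not DeepMind’s SIMA architecture; the dimensions, layer sizes, and random stand-in data are all illustrative assumptions.

```python
# Minimal behavioral-cloning sketch of instruction-conditioned imitation learning.
# Illustrative only: toy dimensions and random "demonstration" data stand in for
# real screen frames, text instructions, and logged keyboard/mouse actions.
import torch
import torch.nn as nn

NUM_ACTIONS = 600          # e.g. one class per basic instruction/keypress combo
SCREEN_DIM, TEXT_DIM = 512, 256

class InstructionConditionedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SCREEN_DIM + TEXT_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_ACTIONS),   # logits over discrete actions
        )

    def forward(self, screen_emb, text_emb):
        return self.net(torch.cat([screen_emb, text_emb], dim=-1))

policy = InstructionConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "demonstrations": embeddings of what the human saw and was told,
# paired with the action the human actually took.
screens = torch.randn(64, SCREEN_DIM)
instructions = torch.randn(64, TEXT_DIM)
human_actions = torch.randint(0, NUM_ACTIONS, (64,))

for _ in range(10):                        # a few gradient steps on the toy batch
    logits = policy(screens, instructions)
    loss = loss_fn(logits, human_actions)  # imitate: match the human's choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real system, the screen and instruction embeddings would come from pretrained vision and language encoders, and the action space would cover keyboard and mouse events rather than a single categorical label.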

The team found that a SIMA agent that was trained on many games was better than an agent that learned how to play just one. This is because it was able to take advantage of concepts shared between games to learn better skills and get better at carrying out instructions, says Besse. 

“This is again a really exciting key property, as we have an agent that can play games it has never seen before, essentially,” he says. 

Seeing this sort of knowledge transfer between games is a significant milestone for AI research, says Paulo Rauber, a lecturer in artificial intelligence at Queen Mary University of London. 

The basic idea of learning to execute instructions on the basis of examples provided by humans could lead to more powerful systems in the future, especially with bigger data sets, Rauber says. SIMA’s relatively limited data set is what is holding back its performance, he says. 

Although the number of game environments it’s been trained on is still small, SIMA is on the right track for scaling up, says Jim Fan, a senior research scientist at Nvidia who runs its AI Agents Initiative. 

But the AI system is still not close to human level, says Harley. For example, in the game No Man’s Sky, the AI agent could do just 60% of the tasks humans could do. And when the researchers removed the ability for humans to give SIMA instructions, they found the agent performed much worse than before. 

Next, Besse says, the team is working on improving the agent’s performance. The researchers want to get it to work in as many environments as possible and learn new skills, and they want people to be able to chat with the agent and get a response. The team also wants SIMA to have more generalized skills, allowing it to quickly pick up games it has never seen before, much like a human. 

Humans “can generalize very well to unseen environments and unseen situations,” says Besse. “And we want our agents to be just the same.”  

SIMA inches us closer to a “ChatGPT moment” for autonomous agents, says Roy Fox, an assistant professor at the University of California, Irvine.  

But it is a long way away from actual autonomous AI. That would be “a whole different ball game,” he says. 

Methane leaks in the US are worse than we thought

13 March 2024 at 12:00

Methane emissions in the US are worse than scientists previously estimated, a new study has found.

The study, published today in Nature, represents one of the most comprehensive surveys yet of methane emissions from US oil- and gas-producing regions. Using measurements taken from planes, the researchers found that emissions from many of the targeted areas were significantly higher than government estimates had found. The undercounting highlights the urgent need for new and better ways of tracking the powerful greenhouse gas.

Methane emissions are responsible for nearly a third of the total warming the planet has experienced so far. While there are natural sources of the greenhouse gas, including wetlands, human activities like agriculture and fossil-fuel production have dumped millions of metric tons of additional methane into the atmosphere. The concentration of methane has more than doubled over the past 200 years. But there are still large uncertainties about where, exactly, emissions are coming from.

Answering these questions is a challenging but crucial first step to cutting emissions and addressing climate change. To do so, researchers are using tools ranging from satellites like the recently launched MethaneSAT to ground and aerial surveys. 

The US Environmental Protection Agency estimates that roughly 1% of oil and gas produced winds up leaking into the atmosphere as methane pollution. But survey after survey has suggested that the official numbers underestimate the true extent of the methane problem.  

For the sites examined in the new study, “methane emissions appear to be higher than government estimates, on average,” says Evan Sherwin, a research scientist at Lawrence Berkeley National Laboratory, who conducted the analysis as a postdoctoral fellow at Stanford University.  

The data Sherwin used comes from one of the largest surveys of US fossil-fuel production sites to date. Starting in 2018, Kairos Aerospace and the Carbon Mapper Project mapped six major oil- and gas-producing regions, which together account for about 50% of onshore oil production and about 30% of gas production. Planes flying overhead gathered nearly 1 million measurements of well sites using spectrometers, which can detect methane using specific wavelengths of light. 


Here’s where things get complicated. Methane sources in oil and gas production come in all shapes and sizes. Some small wells slowly leak the gas at a rate of roughly one kilogram of methane an hour. Other sources are significantly bigger, emitting hundreds or even thousands of kilograms per hour, but these leaks may last for only a short period.

The planes used in these surveys detect mostly the largest leaks, above roughly 100 kilograms per hour (though they catch smaller ones sometimes, down to around one-tenth that size, Sherwin says). Combining measurements of these large leak sites with modeling to estimate smaller sources, researchers estimated that the larger leaks account for an outsize proportion of emissions. In many cases, around 1% of well sites can make up over half the total methane emissions, Sherwin says.
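As a toy illustration of how measured large leaks and modeled small sources can be combined into a basin-wide estimate (the site counts, leak rates, and the lognormal assumption below are invented for illustration, not taken from the study):

```python
# Toy estimate of basin-wide methane emissions: measured large leaks plus a
# modeled distribution of small sources. All numbers are illustrative, not
# values from the Nature study.
import numpy as np

rng = np.random.default_rng(0)

# Aerial surveys mostly catch leaks above ~100 kg/h; suppose 50 such plumes
# were measured out of 10,000 well sites.
measured_large_leaks_kg_h = rng.uniform(100, 2000, size=50)

# The remaining sites are assumed to follow a heavy-tailed (lognormal)
# distribution of small, slow leaks on the order of 1 kg/h.
n_small_sites = 10_000 - measured_large_leaks_kg_h.size
modeled_small_leaks_kg_h = rng.lognormal(mean=0.0, sigma=1.0, size=n_small_sites)

all_leaks = np.concatenate([measured_large_leaks_kg_h, modeled_small_leaks_kg_h])
total = all_leaks.sum()

# Share of total emissions coming from the top 1% of sites.
top_1_percent = np.sort(all_leaks)[-len(all_leaks) // 100:]
print(f"total emission rate: {total:,.0f} kg/h")
print(f"top 1% of sites contribute {top_1_percent.sum() / total:.0%} of emissions")
```

Because the assumed distribution is so heavy-tailed, the handful of directly measured super-emitters ends up dominating the total, which is the pattern Sherwin describes.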

But some scientists say that this and other studies are still limited by the measurement tools available. “This is an indication of the current technology limits,” says Ritesh Gautam, a lead senior scientist at the Environmental Defense Fund.

Because the researchers used aerial measurements to detect large methane leaks and modeled smaller sources, it’s possible that the study may be overestimating the importance of the larger leaks, Gautam says. He pointed to several other recent studies, which found that smaller wells contribute a larger fraction of methane emissions.

The problem is, it’s basically impossible to use just one instrument to measure all these different methane sources. We’ll need all the measurement technologies available to get a clearer picture, Gautam explains.

Ground-based tools attached to towers can keep constant watch over an area and detect small emissions sources, though they generally can’t survey large regions. Aerial surveys using planes can cover more ground but tend to detect only larger leaks. They also represent a snapshot in time, so they can miss sources that only leak methane for short periods.

And then there are the satellites. Earlier this month, Google and EDF launched MethaneSAT, which joined the growing constellation of methane-detecting satellites orbiting the planet. Some of the existing satellites map huge areas, getting detail only on the order of kilometers. Others have much higher resolution, with the ability to pin methane emissions down to within a few dozen meters. 

Satellites will be especially helpful in finding out more about the many countries around the world that haven’t been as closely measured and mapped as the US has, Gautam says. 

Understanding methane emissions is one thing; actually addressing them is another matter. After identifying a leak, companies then need to take actions like patching faulty pipelines or other equipment, or closing up the vents and flares that routinely release methane into the atmosphere. Roughly 40% of methane emissions from oil and gas production have no net cost, since the money saved by not losing the methane is more than enough to cover the cost of the abatement, according to estimates from the International Energy Agency.
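A rough back-of-the-envelope sketch shows why fixing a leak can pay for itself; every input below (leak size, repair cost, gas price, energy content) is an assumed, illustrative value rather than an IEA figure:

```python
# Rough "no net cost" check for fixing a methane leak.
# All inputs are illustrative assumptions, not IEA figures.
leak_rate_kg_per_h = 10               # a modest, persistent leak
hours_per_year = 8760
methane_energy_mmbtu_per_kg = 0.052   # approximate heating value of methane
gas_price_usd_per_mmbtu = 3.0         # assumed wholesale gas price
repair_cost_usd = 5_000               # assumed one-off cost to fix the leak

gas_lost_kg = leak_rate_kg_per_h * hours_per_year
value_of_saved_gas = gas_lost_kg * methane_energy_mmbtu_per_kg * gas_price_usd_per_mmbtu

print(f"gas lost per year: {gas_lost_kg:,.0f} kg")
print(f"value of gas saved by fixing the leak: ${value_of_saved_gas:,.0f}")
print(f"repair pays for itself: {value_of_saved_gas > repair_cost_usd}")
```

Under these assumptions the repair recoups its cost within the first year, which is the kind of arithmetic behind the IEA’s estimate.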

Over 100 countries joined the Global Methane Pledge in 2021, taking on a goal of cutting methane emissions 30% from 2020 levels by the end of the decade. New rules for oil and gas producers announced by the Biden administration could help the US meet those targets. Earlier this year, the EPA released details of a proposed methane fee for fossil-fuel companies, to be calculated on the basis of excess methane released into the atmosphere.

While researchers are slowly getting a better picture of methane emissions, addressing them will be a challenge, as Sherwin notes: “There’s a long way to go.”

Decarbonizing production of energy is a quick win 

By: ADNOC
14 March 2024 at 02:00

Debate around the pace and nature of decarbonization continues to dominate the global news agenda, from the European Scientific Advisory Board on Climate Change warning that the EU must double annual emissions cuts, to forecasts that it could cost more than $1 trillion to decarbonize the global shipping industry. Despite differing opinions on the right path to net zero, all agree that every sector needs to reduce emissions to avoid the worst effects of climate change.

Oil and gas production accounts for 15% of the world’s emissions, according to the International Energy Agency. Some of the largest global companies have embarked on bold plans to cut the carbon and methane emissions associated with their production to zero by 2050. One player with an ambition to get there five years ahead of the rest is the UAE’s ADNOC, which announced in January 2024 that it will lift spending on decarbonization projects from $15 billion to $23 billion.

In an exclusive interview, Musabbeh Al Kaabi, ADNOC’s Executive Director for Low Carbon Solutions and International Growth, says he is hopeful the industry can make a meaningful contribution while supplying the secure and affordable energy needed to meet growing global demand.

Q: Mr. Al Kaabi, how do you plan to spend the extra $8 billion ADNOC has allocated to decarbonization?

Mr. Musabbeh Al Kaabi: Much of our investment focus is on the technologies and systems that will deliver tangible action in eliminating the emissions from our energy production. At 7 kilograms of CO2 per barrel of oil equivalent, the energy we provide is among the least carbon-intensive in our industry, yet we continue to explore every opportunity for further reductions. For example, we are using clean grid power—from renewable and nuclear sources—to meet the needs of our onshore operations. Meanwhile, we are investing almost $4 billion to electrify our offshore production in order to cut our carbon footprint from those operations by up to 50%.

We also see great potential in carbon capture utilization and sequestration (CCUS), especially where emissions are hard to abate. Last year, we doubled our capacity target to 10 million tonnes per annum by 2030. We currently have close to 4 million tonnes in capacity in development or operation and are working with key players in our industry to create a world-leading carbon management platform.

Additionally, we’re developing nature-based solutions to support our target for net zero by 2045. One of our initiatives is to plant 10 million mangroves, which serve as powerful carbon sinks, along our coastline by 2030. We used drone technology to plant 2.5 million mangrove seeds in 2023.

Q: What about renewables?

Mr. Musabbeh Al Kaabi: It’s in everyone’s interests that we invest in the growth of renewables and low-carbon fuels like hydrogen. Through our shareholding in Masdar and Masdar Green Hydrogen, we are tripling our renewable capacity by supporting a growth target of 100 gigawatts by 2030.

Q: We have been talking about hydrogen and carbon capture and storage (CCS) as the energies and solutions of tomorrow for decades. Why haven’t they broken through yet?

Mr. Musabbeh Al Kaabi: Hydrogen and CCS offer great promise, but, like any other transformative technology, they require R&D attention, investment, and scale-up opportunities.

Hydrogen is an abundant and portable fuel that could help reduce emissions from many sectors, including transport and power. Meanwhile, CCS could abate emissions from heavy, energy-intensive industries like steel and cement.

These technologies are proven, and we expect more improvements to allow wider consumer use. We will continue to develop and invest in them, while continuing to responsibly provide our traditional portfolio of low-carbon energy products that the world needs.

Q: Is there any evidence the costs can come down?

Mr. Musabbeh Al Kaabi: Yes, absolutely. The dramatic fall in the price of solar over recent years—an 89% reduction from 2010 to 2022 according to the International Renewable Energy Agency—just goes to show that clean technologies can become viable, mainstream sources of energy if the right policy and investment mechanisms are in place.

Q: Do you favor a particular decarbonization technology?

Mr. Musabbeh Al Kaabi: We don’t have the luxury of picking winners and losers. The scale of the challenge is too great. World economies consume the equivalent of around 250 million barrels of oil, gas, and coal every single day. We are going to need to invest in every viable clean energy and decarbonization technology. If CCS can do it, let’s do it. If renewables can do it, let’s invest in it.

That said, I am especially optimistic about the role artificial intelligence will play in our decarbonization drive. We’ve been implementing AI and machine learning tools across our value chain for many years; they’ve helped us eliminate around a million tonnes of CO2 emissions over the past two years. As AI technology grows at an exponential rate, we will continue to invest in the latest innovations to ensure we provide maximum energy with minimum emissions.

Q: Can traditional energy companies be part of the solution?

Mr. Musabbeh Al Kaabi: They can and they must be part of the solution. Energy companies have the technical capabilities, the project management experience and, crucially, the financial strength to advance solutions. For example, we’re investing in one of the largest integrated carbon capture projects in the Middle East and North Africa, at our gas processing facility in Habshan. Once complete, it will add 1.5 million tonnes of CCUS capacity. We’ve also just announced an investment into Storegga, the lead developer of the UK’s Acorn CCS project in Scotland, marking our first overseas investment of its kind.

Q: What’s your approach to decarbonization investment?

Mr. Musabbeh Al Kaabi: Our approach is to partner with successful developers of economic technologies and to incubate promising climate solutions so ADNOC and other players can use them to accelerate the path to net zero. There are numerous examples.

Last year, we launched the ADNOC Decarbonization Technology Challenge, a global competition that attracted 650 climate tech startups vying for a million-dollar piloting opportunity with us. The winner was Revterra, a Houston-based startup that will pilot its kinetic battery technology with us over the coming months.  

We’re also working to deploy another cutting-edge battery technology that involves taking used electric vehicle batteries and upcycling them into a battery energy storage system, which we’ll use to help decarbonize our remote production activity by up to 25%.

In the northern regions of the UAE, we’re working closely with another startup company to pilot carbon dioxide mineralization technology. It is a project we are all excited about because it presents opportunities for CO2 removal at a significant scale.

Additionally, we are working with leading industry service providers to explore new ways of producing graphene and low-carbon hydrogen.

Q: Finally, how confident are you that transformation will happen?

Mr. Musabbeh Al Kaabi: I am confident. It can be done. Transformation is happening. It won’t happen overnight, and it needs to be just and equitable for the poorest among us, but I am optimistic. We must focus on taking tangible action and not underestimate the power of human innovation. History has shown that, when we come together, we can innovate and act. I am positive that, over time, we will continue to see progress towards our common goal.

This content was produced by ADNOC. It was not written by MIT Technology Review’s editorial staff.


Why methane emissions are still a mystery

14 March 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

If you follow papers in climate and energy for long enough, you’re bound to recognize some patterns. 

There are a few things I’ll basically always see when I’m sifting through the latest climate and energy research: one study finding that perovskite solar cells are getting even more efficient; another showing that climate change is damaging an ecosystem in some strange and unexpected way. And there’s always some new paper finding that we’re still underestimating methane emissions. 

That last one is what I’ve been thinking about this week, as I’ve been reporting on a new survey of methane leaks from oil and gas operations in the US. (Yes, there are more emissions than we thought there were—get the details in my story here.) But what I find even more interesting than the consistent underestimation of methane is why this gas is so tricky to track down. 

Methane is the second most abundant greenhouse gas in the atmosphere, and it’s responsible for around 30% of global warming so far. The good news is that methane breaks down quickly in the atmosphere. The bad news is that while it’s floating around, it’s a super-powerful greenhouse gas, way more potent than carbon dioxide. (Just how much more potent is a complicated question that depends on what time scale you’re talking about—read more in this Q&A.)
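One way to see why the time scale matters: the same methane release translates into very different amounts of CO2-equivalent under a 20-year versus a 100-year global warming potential. The values below are rounded, commonly cited figures used here as assumptions, not exact numbers:

```python
# CO2-equivalent of a methane release under two time horizons.
# GWP values are approximate, commonly cited figures used as assumptions.
methane_tonnes = 1_000

GWP_20 = 80    # ~20-year horizon: methane's short, intense punch dominates
GWP_100 = 30   # ~100-year horizon: much of the methane has already broken down

print(f"over 20 years:  ~{methane_tonnes * GWP_20:,} tonnes CO2-equivalent")
print(f"over 100 years: ~{methane_tonnes * GWP_100:,} tonnes CO2-equivalent")
```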

The problem is, it’s difficult to figure out where all this methane is coming from. We can measure the total concentration in the atmosphere, but there are methane emissions from human activities, there are natural methane sources, and there are ecosystems that soak up a portion of all those emissions (these are called methane sinks). 

Narrowing down specific sources can be a challenge, especially in the oil and gas industry, which is responsible for a huge range of methane leaks. Some are small and come from old equipment in remote areas. Other sources are larger, spewing huge amounts of the greenhouse gas into the atmosphere but only for short times. 

A lot of stories about tracking methane have been in the news recently, mostly because of a methane-hunting satellite launched earlier this month. It’s designed to track down methane using tools called spectrometers, which measure how light is reflected and absorbed. 

This is just one of a growing number of satellites that are keeping an eye on the planet for methane emissions. Some take a wide view, spotting which regions have high emissions. Other satellites are hunting for specific sources and can see within a few dozen meters where a leak is coming from. (If you want to read more about why there are so many methane satellites, I recommend this story from Emily Pontecorvo at Heatmap.)

But methane tracking isn’t just a space game. In a new study published in Nature, researchers used nearly a million measurements taken from airplanes flown over oil- and gas-producing regions to estimate total emissions. 

The results are pretty staggering: researchers found that, on average, roughly 3% of oil and gas production at the sites they examined winds up as methane emissions. That’s about three times the official government estimates used by the US Environmental Protection Agency. 

I spoke with one of the authors of the study, Evan Sherwin, who completed the research as a postdoc at Stanford. He compared the challenge of understanding methane leaks to the parable of the blind men and the elephant: there are many pieces of the puzzle (satellites, planes, ground-based detection), and getting the complete story requires fitting them all together. 

“I think we’re really starting to see an elephant,” Sherwin told me. 

That picture will continue to get clearer as MethaneSAT and other surveillance satellites come online and researchers get to sift through the data. And that understanding will be crucial as governments around the world race to keep promises about slashing methane emissions. 


Now read the rest of The Spark

Related reading

For more on how researchers are working to understand methane emissions, give my latest story a read

If you’ve missed the news on methane-hunting satellites, check out this story about MethaneSAT from last month

Pulling methane out of the atmosphere could be a major boost for climate action. Some startups hope that spraying iron particles above the ocean could help, as my colleague James Temple wrote in December


Another thing

Making minor changes to airplane routes could put a significant dent in emissions, and a new study found that these changes could be cheap to implement. 

The key is contrails, thin clouds that planes produce when they fly. Minimizing contrails means less warming, and changing flight paths can reduce the amount of contrail formation. Read more about how in the latest from my colleague James Temple

Keeping up with climate  

New rules from the US Securities and Exchange Commission were watered down, cutting off the best chance we’ve had at forcing companies to reckon with the dangers of climate change, as Dara O’Rourke writes in a new opinion piece. (MIT Technology Review)

Yes, heat pumps slash emissions, even if they’re hooked up to a pretty dirty grid. Switching to a heat pump is better than heating with fossil fuels basically everywhere in the US. (Canary Media)

Rivian announced its new R2, a small SUV set to go on sale in 2026. The reveal signals a shift to focusing on mass-market vehicles for the brand. (Heatmap)

Toyota has focused on selling hybrid vehicles instead of fully electric ones, and it’s paying off financially. (New York Times)

→ Here’s why I wrote in December 2022 that EVs wouldn’t be fully replacing hybrids anytime soon. (MIT Technology Review)

Some scientists think we should all pay more attention to tiny aquatic plants called azolla. They can fix their own nitrogen and capture a lot of carbon, making them a good candidate for crops and even biofuels. (Wired)

New York is suing the world’s largest meat company. The company has said it’ll produce meat with no emissions by 2040, a claim that is false and misleading, according to the New York attorney general’s office. (Vox)

A massive fire in Texas has destroyed hundreds of homes. Climate change has fueled dry conditions, and power equipment sparked an intense fire that firefighters struggled to contain. (Grist)

→ Many of the homes destroyed in the blaze are uninsured, creating a tough path ahead for recovery. (Texas Tribune)

The Download: AI’s gaming prowess, and calculating methane emissions

14 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An AI that can play Goat Simulator is a step toward more useful machines

The news: A new AI agent from Google DeepMind can play different games, including ones it has never seen before such as Goat Simulator 3, a fun action game with exaggerated physics. Unlike earlier game-playing AI systems, which mastered only one game or could only follow single goals or commands, this new agent is able to play a variety of different games, including Valheim and No Man’s Sky. 

How they did it: Researchers were able to get it to follow text commands to play seven different games and move around in three different 3D research environments. They trained it on lots of examples of humans playing video games, alongside keyboard and mouse input and annotations of what the players did. Then they used an AI technique called imitation learning to teach the agent to play games as humans would.
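
To make that recipe a little more concrete, here is a minimal sketch of behavior cloning, one simple form of imitation learning, written in PyTorch. The model, feature sizes, and dummy data are illustrative assumptions, not details of DeepMind’s system, which learns from video frames and text instructions at far larger scale.

```python
# Minimal behavior-cloning sketch (illustrative; not DeepMind's actual agent).
# Assumes logged human gameplay has already been encoded as feature vectors
# paired with the discrete keyboard/mouse action the human took.
import torch
import torch.nn as nn

OBS_DIM, TEXT_DIM, NUM_ACTIONS = 512, 256, 32  # assumed sizes

class ImitationPolicy(nn.Module):
    """Maps an observation embedding plus an instruction embedding to logits
    over discrete keyboard/mouse actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + TEXT_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_ACTIONS),
        )

    def forward(self, obs, text):
        return self.net(torch.cat([obs, text], dim=-1))

policy = ImitationPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for recorded human play.
obs = torch.randn(64, OBS_DIM)                         # encoded game frames
text = torch.randn(64, TEXT_DIM)                       # encoded command, e.g. "open the map"
human_actions = torch.randint(0, NUM_ACTIONS, (64,))   # annotated key presses

optimizer.zero_grad()
logits = policy(obs, text)
loss = loss_fn(logits, human_actions)  # train the agent to match the human's action
loss.backward()
optimizer.step()
```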

Why it’s a big deal: It’s a step toward more generalized AI that can transfer skills across multiple environments—and this sort of knowledge transfer between games represents a significant milestone for AI research. Read the full story.

—Melissa Heikkilä

Methane leaks in the US are worse than we thought

What’s happening: Methane emissions in the US are worse than scientists previously estimated, a new study has found. The research is one of the most comprehensive surveys yet of methane emissions from US oil- and gas-producing regions.

The big picture: The study highlights the urgent need for new and better ways of tracking the powerful greenhouse gas. The problem is, it’s basically impossible to use just one instrument to measure all the different methane sources. Read the full story.

—Casey Crownhart

To learn more about why methane emissions are still such a mystery, check out the latest edition of The Spark, our weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US has passed a bill that could lead to a TikTok ban 
But that still doesn’t mean it’ll happen. (Vox)
+ What happens next is anyone’s guess. (NYT $)
+ TikTok is insisting it’s a major contributor to US GDP. (WP $)
+ …But the app itself loses several billion dollars a year. (The Information $)

2 Measles is resurging in the US
Vaccination rates are down, and outbreaks are on the rise. (The Atlantic $)
+ The very young and the immunocompromised are at the highest risk. (Vox)
+ How wastewater could offer an early warning system for measles. (MIT Technology Review)

3 SpaceX is limbering up for another Starship launch today
It’s hoping to demonstrate relighting a Raptor engine in space for the first time. (TechCrunch)
+ It’s also the first time SpaceX is anticipating splashing down in the Indian Ocean. (Ars Technica)
+ Starlink has been denied permission to deploy new satellites in low orbit. (IEEE Spectrum)

4 China’s record on climate change is a mixed bag
On one hand, it’s a green tech hub. On the other, it’s still a massive polluter. (Economist $)
+ The world’s biggest crude oil producer, though? That’d be the US. (Vox)
+ Emissions hit a record high in 2023. (MIT Technology Review)

5 Black women who blow the whistle on tech malpractice face higher risks
They’re forced to weather considerably more scrutiny and abuse than their white counterparts. (The Markup)
+ Inside Timnit Gebru’s last days at Google. (MIT Technology Review)

6 A child extortion network has been hiding in plain sight online
The sprawling ecosystem of predators spreads across major platforms, which have failed to stamp the groups out. (Wired $)

7 Commercial safes can be bypassed by secret backdoor codes
And the US Department of Defense wants to keep it quiet. (404 Media)

8 We’re entering the age of moon mining
The moon is rich in Helium-3, an isotope that could fuel nuclear reactors. (WP $)
+ Here’s how we could mine the moon for rocket fuel. (MIT Technology Review)

9 Facebook Marketplace is the last good thing about the social network
If you can swerve the scams, that is. (NYT $)

10 Neil Young’s music is returning to Spotify
But the man himself is—characteristically—unhappy about it. (Insider $)

Quote of the day

“TikTok is banned in China. So, we’re going to emulate the Chinese communists by banning it in our country?”

—US Senator Rand Paul makes his feelings on the proposed TikTok ban clear in an interview with The Hill.

The big story

California’s coming offshore wind boom faces big engineering hurdles

December 2022

Last December, dozens of companies fought for the right to lease the first commercial wind power sites off the coast of California in an auction that could kick-start the state’s next clean energy boom.

The state has an ambitious goal: building 25 gigawatts of offshore wind by 2045. That’s equivalent to nearly a third of the state’s total generating capacity today, or enough to power 25 million homes.

But, among other tests, the plans are facing a daunting geological challenge: the continental shelf drops steeply just a few miles off the California coast. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The winners of this year’s Sony World Photography Awards are truly jaw-dropping.
+ This scarf-printing process is weirdly soothing to watch.
+ Today, on Albert Einstein’s birthday, why not take the time to brush up on the theory of relativity?
+ Curb Your Enthusiasm: ranked. Do you agree?

Brazil is fighting dengue with bacteria-infected mosquitos

15 March 2024 at 05:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

As dengue cases continue to rise in Brazil, the country is facing a massive public health crisis. The viral disease, spread by mosquitoes, has sickened more than a million Brazilians in 2024 alone, overwhelming hospitals.

The dengue crisis is the result of the collision of two key factors. This year has brought an abundance of wet, warm weather, boosting populations of Aedes aegypti, the mosquitoes that spread dengue. It also happens to be a year when all four types of dengue virus are circulating. Few people have built up immunity against them all.   

Brazil is busy fighting back. One of the country’s anti-dengue strategies aims to hamper the mosquitoes’ ability to spread disease by infecting the insects with a common bacterium—Wolbachia. The bacterium seems to boost the mosquitoes’ immune response, making it more difficult for dengue and other viruses to grow inside the insects. It also directly competes with viruses for crucial molecules they need to replicate.

The World Mosquito Program breeds mosquitoes infected with Wolbachia in insectaries and releases them into communities. There they breed with wild mosquitoes. Wild females that mate with Wolbachia-infected males produce eggs that don’t hatch. Wolbachia-infected females produce offspring that are also infected. Over time, the bacteria spread throughout the population. Last year I visited the program’s largest insectary—a building in Medellín, Colombia, buzzing with thousands of mosquitoes in netted enclosures— with a group of journalists. “We’re essentially vaccinating mosquitoes against giving humans disease,” said Bryan Callahan, who was director of public affairs at the time.

At the World Mosquito Program’s insectary in Medellín, Colombia. These strips of paper are covered with Aedes aegypti eggs. Dried eggs can survive for months at a time before being rehydrated, making it possible to ship them all over the world.

The World Mosquito Program first began releasing Wolbachia mosquitoes in Brazil in 2014. The insects now cover an area with a population of more than 3 million across five municipalities: Rio de Janeiro, Niterói, Belo Horizonte, Campo Grande, and Petrolina.

In Niterói, a community of about 500,000 that lies on the coast just across a large bay from Rio de Janeiro, the first small pilot releases began in 2015, and in 2017 the World Mosquito Program began larger deployments. By 2020, Wolbachia had infiltrated the population. Prevalence of the bacteria ranged from 80% in some parts of the city to 40% in others. Researchers compared the prevalence of viral illnesses in areas where mosquitoes had been released with a small control zone where they hadn’t released any mosquitoes. Dengue cases declined by 69%. Areas with Wolbachia mosquitoes also experienced a 56% drop in chikungunya and a 37% reduction in Zika.

How is Niterói faring during the current surge? It’s early days. But the data we have so far are encouraging. The incidence of dengue is one of the lowest in the state, with 69 confirmed cases per 100,000 people. Rio de Janeiro, a city of nearly 7 million, has had more than 42,000 cases, an incidence of 700 per 100,000.

“Niterói is the first Brazilian city we have fully protected with our Wolbachia method,” says Alex Jackson, global editorial and media relations manager for the World Mosquito Program. “The whole city is covered by Wolbachia mosquitoes, which is why the dengue cases are dropping significantly.”

The program hopes to release Wolbachia mosquitoes in six more cities this summer. But Brazil has more than 5,000 municipalities. To make a dent in the overall incidence in Brazil, the program will have to release millions more mosquitoes. And that’s the plan.

The World Mosquito Program is about to start construction on a mass rearing facility—the biggest in the world—in Curitiba. “And we believe that will allow us to essentially cover most of urban Brazil within the next 10 years,” Callahan says.

There are also other mosquito-based approaches in the works. The UK company Oxitec has been providing genetically modified “friendly” mosquito eggs to Indaiatuba, Brazil, since 2018. The insects that hatch—all males—don’t bite. And when they mate, their female offspring don’t survive, reducing populations. 

Another company, Forrest Brasil Tecnologia, has been releasing sterile male mosquitoes in parts of Ortigueira. When these males mate with wild females, they produce eggs that don’t hatch. From November 2020 to July 2022, the company recorded a 98.7% decline in the Aedes aegypti population in Ortigueira.

Brazil is also working on efforts to provide its citizens with greater immunity, vaccinating the most vulnerable with a new shot from Japan and working on its own home-grown dengue vaccine. 

None of these solutions are a quick fix. But they all provide some hope that the world can find ways to fight back even as climate change drives dengue and other infections to new peaks and into new territories. “Cases of dengue fever are rising at an alarming rate,” Gabriela Paz-Bailey, who specializes in dengue at the US Centers for Disease Control and Prevention, told the Washington Post. “It’s becoming a public health crisis and coming to places that have never had it before.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

We’ve written about the World Mosquito Program before. Here’s a 2016 story from Antonio Regalado that looked at early excitement and Bill Gates’ backing of the project. 

That same year we reported on Oxitec’s early work in Brazil using genetically modified mosquitoes. Flavio Devienne Ferreira has the story.

And this story from Emily Mullin looks at Google’s sister company, Verily. It built a robot to create Wolbachia-infected mosquitoes and began releasing them in California in 2017. (The project is now called Debug). 

From around the web

The FDA-approved ALS drug Relyvrio has failed to benefit patients in a large clinical trial. It was approved early amidst questions about its efficacy, and now the medicine’s manufacturer has to decide whether to pull it off the  market. (NYT)

Wegovy: it’s not just for weight loss anymore. The FDA has approved a label expansion that will allow Novo Nordisk to market the drug for its heart benefits, which might prompt more insurers to cover it. (CNN)

Covid killed off one strain of the flu, and experts suggest dropping it from the next flu vaccine. (Live Science)

Scientists have published the first study linking microplastic pollution to human disease. The research shows that people with plastic in their artery tissues were twice as likely to have a heart attack or stroke, or to die, as people without plastic. (CNN)

Africa’s push to regulate AI starts now        

15 March 2024 at 08:00

In the Zanzibar archipelago of Tanzania, rural farmers are using an AI-assisted app called Nuru that works in their native language of Swahili to detect a devastating cassava disease before it spreads. In South Africa, computer scientists have built machine learning models to analyze the impact of racial segregation in housing. And in Nairobi, Kenya, AI classifies images from thousands of surveillance cameras perched on lampposts in the bustling city’s center. 

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that four African countries alone—Nigeria, Ghana, Kenya, and South Africa—could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools.

Now, the African Union—made up of 55 member nations—is preparing an ambitious AI policy that envisions an Africa-centric path for the development and regulation of this emerging technology. But debates on when AI regulation is warranted and concerns about stifling innovation could pose a roadblock, while a lack of AI infrastructure could hold back the technology’s adoption.  

“We’re seeing a growth of AI in the continent;  it’s really important there be set rules in place to govern these technologies,” says Chinasa T. Okolo, a fellow in the Center for Technology Innovation at Brookings, whose research focuses on AI governance and policy development in Africa.

Some African countries have already begun to formulate their own legal and policy frameworks for AI. Seven have developed national AI policies and strategies, which are currently at different stages of implementation. 

On February 29, the African Union Development Agency published a policy draft that lays out a blueprint of AI regulations for African nations. The draft includes recommendations for industry-specific codes and practices, standards and certification bodies to assess and benchmark AI systems, regulatory sandboxes for safe testing of AI, and the establishment of national AI councils to oversee and monitor responsible deployment of AI. 

The heads of African governments are expected to eventually endorse the continental AI strategy, but not until February 2025, when they meet next at the AU’s annual summit in Addis Ababa, Ethiopia. Countries with no existing AI policies or regulations would then use this framework to develop their own national strategies, while those that already have will be encouraged to review and align their policies with the AU’s.

Elsewhere, major AI laws and policies are also taking shape. This week, the European Union passed the AI Act, set to become the world’s first comprehensive AI law. In October, the United States issued an executive order on AI. And the Chinese government is eyeing a sweeping AI law similar to the EU’s, while also setting rules that target specific AI products as they’re developed. 

If African countries don’t develop their own regulatory frameworks that protect citizens from the technology’s misuse, some experts worry that Africans will face social harms, including bias that could exacerbate inequalities. And if these countries don’t also find a way to harness AI’s benefits, others fear these economies could be left behind. 

“We want to be standard makers”

Some African researchers think it’s too early to be thinking about AI regulation. The industry is still nascent there due to the high cost of building data infrastructure, limited internet access, a lack of funding, and a dearth of powerful computers needed to train AI models. A lack of access to quality training data is also a problem. African data is largely concentrated in the hands of companies outside of Africa.

In February, just before the AU’s AI policy draft came out, Shikoh Gitau, a computer scientist who started the Nairobi-based AI research lab Qubit Hub, published a paper arguing that Africa should prioritize the development of an AI industry before trying to regulate the technology. 

“If we start by regulating, we’re not going to figure out the innovations and opportunities that exist for Africa,” says David Lemayian, a software engineer and one of the paper’s co-authors.  

Okolo, who consulted on the AU-AI draft policy, disagrees. Africa should be proactive in developing regulations, Okolo says. She suggests African countries reform existing laws such as policies on data privacy and digital governance to address AI. 

But Gitau is concerned that a hasty approach to regulating AI could hinder adoption of the technology. And she says it’s critical to build homegrown AI with applications tailored for Africans to harness the power of AI to improve economic growth. 

“Before we put regulations [in place], we need to do the hard work of understanding the full spectrum of the technology and invest in building the African AI ecosystem,” she says.

More than 50 countries and the EU have AI strategies in place, and more than 700 AI policy initiatives have been implemented since 2017, according to the Organisation for Economic Co-operation and Development’s AI Policy Observatory. But only five of those initiatives are from Africa and none of the OECD’s 38 member countries are African.

Africa’s voices and perspectives have largely been absent from global discussions on AI governance and regulation, says Melody Musoni, a policy and digital governance expert at ECDPM, an independent policy think tank in Brussels.

“We must contribute our perspectives and own our regulatory frameworks,” says Musoni. “We want to be standard makers, not standard takers.” 

Nyalleng Moorosi, a specialist in ethics and fairness in machine learning who is based in Hlotse, Lesotho and works at the Distributed AI Research Institute, says that some African countries are already seeing labor exploitation by AI companies. This includes poor wages and lack of psychological support for data labelers, who are largely from low-income countries but working for big tech companies. She argues regulation is needed to prevent that, and to protect communities against misuse by both large corporations and authoritarian governments. 

In Libya, autonomous lethal weapons systems have already been used in fighting, and in Zimbabwe, a controversial, military-driven national facial-recognition scheme has raised concerns over the technology’s alleged use as a surveillance tool by the government. The draft AU-AI policy didn’t explicitly address the use of AI by African governments for national security interests, but it acknowledges that there could be perilous AI risks. 

Barbara Glover, program officer for an African Union group that works on policies for emerging technologies, points out that the policy draft recommends that African countries invest in digital and data infrastructure, and collaborate with the private sector to build investment funds to support AI startups and innovation hubs on the continent. 

Unlike the EU, the AU lacks the power to enforce sweeping policies and laws across its member states. Even if the draft AI strategy wins endorsement of parliamentarians at the AU’s assembly next February, African nations must then implement the continental strategy through national AI policies and laws.

Meanwhile, tools powered by machine learning will continue to be deployed, raising ethical questions and regulatory needs and posing a challenge for policymakers across the continent. 

Moorosi says Africa must develop a model for local AI regulation and governance which balances the localized risks and rewards. “If it works with people and works for people, then it has to be regulated,” she says.             

The Download: Africa’s AI regulation push, and how to fight dengue

15 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Africa’s push to regulate AI starts now

In Tanzania, farmers are using an AI-assisted app that works in their native language of Swahili to detect a devastating cassava disease before it spreads. In South Africa, computer scientists have built machine learning models to analyze the impact of racial segregation in housing. And in Nairobi, Kenya, AI classifies images from thousands of surveillance cameras perched on lampposts in the bustling city’s center.

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that four African countries alone—Nigeria, Ghana, Kenya, and South Africa—could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools.

Now, the African Union—made up of 55 member nations—is preparing an ambitious AI policy that envisions an Africa-centric path for the development and regulation of this emerging technology. But debates on when AI regulation is warranted and concerns about stifling innovation could pose a roadblock, while a lack of AI infrastructure could hold back the technology’s adoption. Read the full story.

—Abdullahi Tsanni

Brazil is fighting dengue with bacteria-infected mosquitos

As dengue cases continue to rise in Brazil, the country is facing a massive public health crisis. The viral disease, spread by mosquitoes, has sickened more than a million Brazilians in 2024 alone, overwhelming hospitals.

The dengue crisis is the result of the collision of two key factors. This year has brought an abundance of wet, warm weather, boosting populations of the mosquitoes that spread dengue. It also happens to be a year when all four types of dengue virus are circulating. Few people have built up immunity against them all.   

Brazil is busy fighting back—with help from the World Mosquito Program, it’s essentially vaccinating mosquitoes against giving humans disease. Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China is likely to block a forced sale of TikTok 
It looks like authorities would rather it was banned in the US instead. (WSJ $)
+ The question is, who can afford it? (Vox)
+ TikTok isn’t really helping itself. (NYT $)
+ The debate is heading towards the courts. (WP $)

2 It was third time lucky for SpaceX’s Starship rocket
The world’s largest rocket finally reached orbit on its third attempt. (Economist $)

3 The majority of AI chatbots can be hacked to leak their responses
With the exception of Google’s Gemini, major chatbots are vulnerable to a sneaky side channel attack. (Ars Technica)
+ Three ways AI chatbots are a security disaster. (MIT Technology Review)

4 Russia is borrowing from China’s online censorship playbook
Ahead of Russia’s elections, its authorities have cracked down on circumvention tools. (NYT $)
+ The end of anonymity online in China. (MIT Technology Review)

5 Vast swathes of Africa are struggling to connect to the internet
A mysterious series of faults in four subsea cables is to blame. (Bloomberg $)
+ It’s one of the most severe outages in recent years. (The Guardian)
+ An AI-powered phone is certainly one solution for internet blackouts. (Reuters)

6 A second Gamergate harassment campaign is gaining traction
A Montreal indie gamesmaker is its latest target. (Wired $)

7 AI is making it easier than ever to sell products on Amazon
Whether the AI-generated listings are correct or not remains to be seen, though. (The Verge)

8 How an Uber Eats worker took on its algorithm—and won
Train the algorithm, or the algorithm will train you. (FT $)
+ Banned gig economy workers are renting accounts from their colleagues. (Rest of World)
+ What Luddites can teach us about resisting an automated future. (MIT Technology Review)

9 How to turn electronic waste into gold 
A protein sponge makes extracting the precious metal surprisingly simple. (IEEE Spectrum)

10 What it’s like to let an AI bot swipe Tinder for you 📱
Don’t get your hopes up. (404 Media)

Quote of the day

“When you see other people’s good things, you must find ways to own them.”

—Wang Wenbin, China’s foreign ministry spokesperson, criticizes what he calls America’s “robber’s logic” towards TikTok, the Financial Times reports.

The big story

This US company sold iPhone hacking tools to UAE spies

September 2021

When the United Arab Emirates paid over $1.3 million for a powerful and stealthy iPhone hacking tool in 2016, the monarchy’s spies—and the American mercenary hackers they hired—put it to immediate use.

The tool exploited a flaw in Apple’s iMessage app to enable hackers to completely take over a victim’s iPhone. It was used against hundreds of targets in a vast campaign of surveillance and espionage whose victims included geopolitical rivals, dissidents, and human rights activists. 

MIT Technology Review can confirm the exploit was developed and sold by an American firm named Accuvant—shedding new light on the role played by American companies and mercenaries in the proliferation of powerful hacking capabilities around the world. Read the full story.

—Patrick Howell O’Neill

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ All hail Muhamed the mathematically-minded German horse.
+ How many of the Great American Novels have you read? (Atlantic $)
+ Folded scrambled eggs, or fancy omelet?
+ If you’ve ever teared up in a yoga class, you’re not alone.

This self-driving startup is using generative AI to predict traffic

15 March 2024 at 11:00

Self-driving company Waabi is using a generative AI model to help predict the movement of vehicles, it announced today.

The new system, called Copilot4D, was trained on troves of data from lidar sensors, which use light to sense how far away objects are. If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future (showing a pileup, perhaps). Today’s announcement is about the initial version of Copilot4D, but Waabi CEO Raquel Urtasun says a more advanced and interpretable version is deployed in Waabi’s testing fleet of autonomous trucks in Texas, where it helps the driving software decide how to react.

While autonomous driving has long relied on machine learning to plan routes and detect objects, some companies and researchers are now betting that generative AI — models that take in data of their surroundings and generate predictions — will help bring autonomy to the next stage. Wayve, a Waabi competitor, released a comparable model last year that is trained on the video that its vehicles collect. 

Waabi’s model works in a similar way to image or video generators like OpenAI’s DALL-E and Sora. It takes point clouds of lidar data, which visualize a 3D map of the car’s surroundings, and breaks them into chunks, similar to how image generators break photos into pixels. Based on its training data, Copilot4D then predicts how all points of lidar data will move. Doing this continuously allows it to generate predictions 5-10 seconds into the future.
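
For a rough sense of that pipeline, the sketch below voxelizes a lidar point cloud into discrete occupied-cell “tokens” and feeds a short history of frames to a small transformer that scores which cells will be occupied next. Every name, size, and model choice here is an assumption made for illustration; this is not Waabi’s Copilot4D implementation, which the company describes in its published paper.

```python
# Illustrative sketch of a generative world model over lidar tokens
# (assumptions throughout; not Waabi's Copilot4D code).
import torch
import torch.nn as nn

GRID, DEPTH = 32, 8                  # discretize space into a 32 x 32 x 8 voxel grid
VOCAB = GRID * GRID * DEPTH          # each occupied voxel index acts like a "token"

def voxelize(points: torch.Tensor) -> torch.Tensor:
    """Map an (N, 3) point cloud with coordinates in [0, 1) to occupied-voxel token ids."""
    idx = (points * torch.tensor([GRID, GRID, DEPTH])).long()
    tokens = idx[:, 0] * GRID * DEPTH + idx[:, 1] * DEPTH + idx[:, 2]
    return tokens.unique()

class NextFramePredictor(nn.Module):
    """Given tokens from past lidar frames, score which voxels are occupied in the next frame."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 128)
        layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(128, VOCAB)

    def forward(self, token_seq):                 # (batch, seq_len)
        h = self.encoder(self.embed(token_seq))   # contextualize the history of frames
        return self.head(h.mean(dim=1))           # per-voxel occupancy logits for the next frame

# Toy usage: two past frames of random points -> predicted occupancy for the next frame.
frames = [voxelize(torch.rand(2048, 3)) for _ in range(2)]
seq = torch.cat(frames).unsqueeze(0)              # (1, total_tokens)
model = NextFramePredictor()
next_frame_logits = model(seq)                    # would be trained against the true next frame
```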

A diptych view of the same image via camera and LiDAR.

Waabi is one of a handful of autonomous driving companies, including competitors Wayve and Ghost, that describe their approach as “AI-first.” To Urtasun, that means designing a system that learns from data, rather than one that must be taught reactions to specific situations. The cohort is betting their methods might require fewer hours of road-testing self-driving cars, a charged topic following an October 2023 accident where a Cruise robotaxi dragged a pedestrian in San Francisco. 

Waabi is different from its competitors in building a generative model for lidar, rather than cameras. 

“If you want to be a Level 4 player, lidar is a must,” says Urtasun, referring to the automation level where the car does not require the attention of a human to drive safely. Cameras do a good job of showing what the car is seeing, but they’re not as adept at measuring distances or understanding the geometry of the car’s surroundings, she says.

Though Waabi’s model can generate videos showing what a car will see through its lidar sensors, those videos will not be used as training data in the driving simulator the company uses to build and test its driving model. That’s to ensure any hallucinations arising from Copilot4D do not get taught in the simulator.

The underlying technology is not new, says Bernard Adam Lange, a PhD student at Stanford who has built and researched similar models, but it’s the first time he’s seen a generative lidar model leave the confines of a research lab and be scaled up for commercial use. A model like this would generally help make the “brain” of any autonomous vehicle able to reason more quickly and accurately, he says.

“It is the scale that is transformative,” he says. “The hope is that these models can be utilized in downstream tasks” like detecting objects and predicting where people or things might move next.

Copilot4D can only estimate so far into the future, and motion prediction models in general degrade the farther they’re asked to project forward. Urtasun says that the model only needs to imagine what happens 5 to 10  seconds ahead for the majority of driving decisions, though the benchmark tests highlighted by Waabi are based on 3-second predictions. Chris Gerdes, co-director of Stanford’s Center for Automotive Research, says this metric will be key in determining how useful the model is at making decisions.

“If the 5-second predictions are solid but the 10-second predictions are just barely usable, there are a number of situations where this would not be sufficient on the road,” he says.

The new model resurfaces a question rippling through the world of generative AI: whether or not to make models open-source. Releasing Copilot4D would let academic researchers, who struggle with access to large data sets, peek under the hood at how it’s made, independently evaluate safety, and potentially advance the field. It would also do the same for Waabi’s competitors. Waabi has published a paper detailing the creation of the model but has not released the code, and Urtasun is unsure if they will. 

“We want academia to also have a say in the future of self-driving,” she says, adding that open-source models are more trusted. “But we also need to be a bit careful as we develop our technology so that we don’t unveil everything to our competitors.”

The quest to legitimize longevity medicine

18 March 2024 at 06:00

On a bright chilly day last December, a crowd of doctors and scientists gathered at a research institute atop a hill in Novato, California. It was the first time this particular group of healthy longevity specialists had met in person, and they had a lot to share.

The group’s goal is to help people add years to their lifespans, and to live those extra years in good health. But the meeting’s participants had another goal as well: to be recognized as a credible medical field.

For too long, modern medicine has focused on treating disease rather than preventing it, they say. They believe that it’s time to move from reactive healthcare to proactive healthcare. And to do so in a credible way—by setting “gold standards” and medical guidelines for the field. These scientists and clinicians see themselves spearheading a revolution in medicine.

Eric Verdin directs the Buck Institute for Research on Aging, which hosted the meeting. “We will look back in 20 years at this meeting as really the beginning of a whole new field of medicine,” Verdin told attendees. Referring to the movement as a “revolution” would be an understatement, he said. “We can write new rules on how we treat patients.”

Establishing a new discipline of medicine is no mean feat. Longevity doctors have started to make progress by establishing learning programs and embedding these courses in medical schools. They’ve started drafting guidelines for the field, and working out how they might go about becoming recognized by national medical boards.

But proponents recognize the challenges ahead. Clinicians disagree on how they should assess and treat aging. Most clinics are expensive and currently only cater to the wealthy. And their task is made more difficult by the sheer scale and variety of longevity clinics out there, which range from high-end spas offering beauty treatments to offshore clinics offering unproven stem cell therapies.

Without standards and guidelines, there is a real risk that some clinics could end up not only failing to serve their clients, but potentially harming them. 

A visit to the clinic

Almost all longevity clinics offer their clients a suite of tests, usually over a four- to six-hour testing session. Blood tests are pretty standard—clinicians will look at everything from cholesterol and blood sugar to clues of inflammation. And beyond measuring your height and weight, these clinics will look at your body composition—how much fat you’re storing and the density of your bones.

They might put you on a treadmill and measure your VO2 max—the amount of oxygen your body can use while you exercise. Many will assess your cognition, memory, and  physical strength. You’ll be asked questions about your diet, lifestyle, and well-being. Plenty of clinics will also offer a range of scans—and some will offer to look at your whole body in an MRI scanner.

Some clinics will continue to track your diet and movements after this initial appointment, using fitness trackers and wearable devices that monitor your sleep. You might speak to a nutritionist about your diet, a psychologist about your mental health, and a fitness coach about your exercise routine. Some will even analyze your genome and your microbiome.

The idea is to get a full picture of how well your body is functioning—and what could be done to improve things. Got a low VO2 max score? Maybe you need to start taking some HIIT classes. Your microbiome looks like it’s missing some key microbes? Time to increase your fiber intake. The goal is to figure out which aspects of a person’s health or lifestyle might prevent them from living a long, healthy life, and to address those aspects, even if much of the advice is common sense.

Such rigorous testing is not routine in modern medicine. This is partly because of costs, but also because excessive testing can cause patient anxiety, put people at risk of infections, and increase the chance of a misdiagnosis. But if doctors want to keep their patients in good health for longer, they need to start offering more tests, says Evelyne Bischof, director of the Sheba Longevity Center, which is embedded within a public hospital in Ramat Gan, Israel. Longevity medicine needs to become mainstream, and more people should have access to a full range of diagnostic tests that might pick up early signs of age-related diseases, she says. 

Bischof co-led the development of the Healthy Longevity Medicine Society (HLMS), an international organization established in August 2022 to, among other things, “build a clinically credible framework and platform for longevity medicine.” The society now has more than 200 members, including medical doctors, healthcare professionals, and other people associated with longevity clinics, she says.

Bischof wants longevity medicine to be officially recognized as a medical discipline, like cardiology or neurology, for example. Clinics should meet certain criteria to qualify as longevity clinics, she says, and longevity doctors should be required to obtain qualifications before they can make use of the title. This would require signoff by national medical councils like the American Medical Association.

It will take years to get to this point, Bischof acknowledges. In the meantime, she thinks education is a good place to start. She and her colleagues have developed a course for doctors interested in longevity medicine. In theory, anyone with a computer can take the course, but it has been accredited by the Accreditation Council for Continuing Medical Education, which means that doctors who take the course earn credits that support their continued medical education in the US—something that is required by some medical employers. And it is already being implemented in four medical schools, Bischof says—although she adds she can’t yet say which, as the information is not yet public. “Over 6,000 [have taken] that course already,” she says. “But it should be more—it should be 6 million.”

“This is a new field,” says Andrea Maier of the National University of Singapore, who co-founded the private “high-end” Chi Longevity clinic and is president of the HLMS. “We have to organize ourselves; we have to set standards.”

That task won’t be straightforward. Longevity doctors agree on some key points—namely that they want to extend healthy lifespans—but they disagree on how to measure signs of aging in their patients, how to assess their general health, and how best to treat or advise them.

Questionable tests

Take, for example, aging clocks. These tools aim to estimate a person’s biological age—a score that is meant to capture how close they are to death. More than a hundred of these clocks have been developed, and they work in slightly different ways. Many of them work by assessing chemical markers on your DNA—the pattern of which is known to change as we get older.
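
As a rough illustration of the idea, the sketch below fits a penalized linear regression from DNA-methylation levels at many CpG sites to chronological age, then uses it to predict an age for a new sample. The data are random placeholders, and the setup is a simplification; real clocks, such as Horvath’s, are trained and validated on thousands of profiled samples.

```python
# Toy epigenetic-clock sketch (illustrative only; not any commercial clock).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

n_samples, n_cpg_sites = 500, 2000
methylation = rng.uniform(0, 1, size=(n_samples, n_cpg_sites))  # beta values per CpG site

# Pretend a small subset of sites drifts with age, plus noise (placeholder data).
true_weights = np.zeros(n_cpg_sites)
true_weights[:50] = rng.normal(0, 1, 50)
age = 40 + methylation @ true_weights * 10 + rng.normal(0, 3, n_samples)

# Penalized regression keeps only the most informative CpG sites.
clock = ElasticNet(alpha=0.05, l1_ratio=0.5)
clock.fit(methylation, age)

new_sample = rng.uniform(0, 1, size=(1, n_cpg_sites))
print("Predicted biological age:", round(float(clock.predict(new_sample)[0]), 1))
```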

Lots of longevity clinics make use of these clocks. The problem is that they don’t work all that well. When Verdin sent one of his own blood samples off to 10 different companies, he says he got 10 different results back—with estimates of his biological age ranging from 25 to 66.

The first such clock was developed by Steven Horvath, a researcher now at Altos Labs, a biotech company exploring ways to rejuvenate cells and, eventually, people. But even he warns of their fallibility. A few days before the longevity clinic meeting, he told an audience of scientists not to “waste your money” on aging clocks.

Some argue that the clocks aren’t useless. Using the same clock over time might give a doctor some idea of how their patient is progressing on a certain treatment plan. And a low score might provide the motivation a person needs to ramp up their exercise regimen. Maier uses multiple clocks when she runs clinical trials of experimental longevity treatments at her clinic. “We have 60 clocks now in our lab, and you have to use different clocks for different populations in different studies,” she says.

But others, including Sara Bonnes, medical director of the healthy longevity clinic at the Mayo Clinic in Rochester, Minnesota, are steering clear until there’s more evidence. “There is still controversy as to which is the best,” she says.

And then there are the whole-body MRI scans. These essentially involve using a magnet-based scanner to look at your insides—all the way from the top of your head to about halfway down your shins.

MRIs are usually used to search for abnormalities that might explain a person’s pain or other symptoms, or to check for signs of damage after a person has sustained an injury. But at longevity clinics, doctors are casting a wide net, and essentially searching the body for anything that looks unusual.

The problem is that almost all of us have a body that is unusual in some way. “Nobody will be ‘normal’ or optimal in their body,” says Maier, who doesn’t offer the scans but wants to partner with clinics that do to learn more about their potential use. “At the moment there is not clear evidence on how much harm you do and how much good you do.”

While whole-body MRI scans might be appropriate for someone with a known risk of, say, cancer, they are not the right choice for everyone, says Anusha Khan, who directs Mosaic Theory MD, a private prevention and longevity clinic in Sterling, Virginia.

Khan refers to a clinical case a colleague shared with her. When the colleague’s patient underwent a whole-body MRI, their doctors spotted something unusual in the person’s biliary tree—a series of ducts connected to the liver and gallbladder. The person’s doctors ended up performing a procedure known as ERCP—involving an endoscope and X-rays—to further investigate.

The lesion itself turned out to be harmless. But the medical procedure left the person with an infection—and they ended up dying of sepsis, says Khan. “These are still clinical-grade interventions,” she says. “They shouldn’t be taken lightly.”

Wellness and the Wild West

The problem is, if longevity doctors want to standardize practices like the usage of MRIs for otherwise well patients, they will first have to define exactly what a longevity clinic is.

According to a working definition put together by Andrea Maier and her colleagues at HLMS, healthy longevity clinics apply healthy longevity medicine, which involves “optimizing health and healthspan while antagonizing aging processes across the lifespan,” says Maier. This definition would rule out centers that solely offer beauty treatments like botox, which only affect how young a person looks. But she acknowledges that it isn’t yet totally clear where wellness ends and longevity medicine begins.

While most of the doctors presenting at the conference focused on health more generally, there were frequent mentions of physical prowess. Some speakers showed images of themselves mid-workout, muscles bulging. “This is pretty gratuitous I admit,” said David Karow, chief innovation officer at Human Longevity, a company that runs three longevity clinics in the US and China, as he showed the audience a picture of himself topless, mid-run during a triathlon. He then told the audience he was 51 when the photo was taken, but he was in “the top 15 percentile of all male racers in this international triathlon above the age of 18.” 

And looks do seem to be important to some in the field. A longevity clinic director I shared a taxi with during the conference advised me on how I could benefit from a little botox, in the right places.

There’s also the question of where the cutoff should be at the other end of the spectrum: what about clinics that offer or recommend supplements, drugs, or other treatments? There are no approved longevity medicines. And we don’t have much evidence for the vast array of supplements being touted for healthy life extension, either.

And while most clinicians would argue that at least most of the treatments they recommend are generally regarded as safe, that is not the case for stem cell treatments, which numerous clinics are offering for longevity. Such clinics can be found in the US and in other countries, and might make claims about reversing the aging process, says Leigh Turner at the University of California, Irvine, who has been studying stem cell clinics for years. “There are a lot of bold advertising claims, and there’s not really meaningful data to back up those claims,” he says. As of 2021, Turner found 89 such clinics offering treatments for “aging” in the US.

There are a variety of stem cell-based treatments offered with vague promises of repairing and rejuvenating a person’s body. One might, for example, involve removing some of a person’s fat through liposuction, then attempting to extract stem cells from the tissue and injecting them into a person’s bloodstream. These clinics are not regulated, and there’s no way of knowing exactly what is being injected, or if it might cause an infection or clot, says Turner.

It doesn’t help that consumer demand has “really exploded” in the last five years, says Maier. Many clinics have lengthy waiting lists. Maier says she has “people knocking on our doors” asking for all kinds of longevity treatments, including stem cell treatments.

“It’s a Wild Wild West at the moment,” says Maier. She worries that if someone receiving such a treatment were to develop, say, a dangerous clot in their lungs, “even the most unregulated countries would shut [longevity clinics] down.” And if such treatments aren’t delivered as part of a clinical trial, we will never learn whether or not they do anything, she says.

Maier says she has recently assessed the published evidence on stem cell treatments for longevity. “For me, there is no evidence,” she says. “I would never do it.” She doesn’t want to pass judgment on those offering unproven and unregulated “therapies,” though. “We have to define ourselves [as a field] first before blaming others for crossing a boundary,” she says.

HLMS won’t accept every membership application it receives. Individuals are turned down if there is any sign they are engaging in any kind of misconduct, says Bischof. The society also turns down biohackers. “Those things we are very careful about,” says Bischof, although she notes that she personally views the self-experimenters as “friends.”

Death is not optional

One area that longevity clinicians do seem to agree on is the finite nature of life. All of those contacted by MIT Technology Review are keen to distance themselves from immortalists, people who are on a quest to live forever. 

Instead, most believe that the majority of people can live to around 100 in good health, providing they eat, sleep and exercise well, identify their personal health needs and address the earliest signs of age-related diseases long before they start to develop symptoms. When I walked into the meeting, one of the first things I noticed was the absence of the bowls of cookies that seem to be standard conference fare. In their place was a range of fresh-fruit smoothies. One doctor used the term “previvorship” to describe overcoming a disease decades before it starts to cause significant problems.

“It’s not that I don’t want to get older—I’m very happy to get old and die,” says Maier. “But I realized… that old age with lots of function is what I’d love to achieve for everybody.”

“The term ‘immortality’ should never be part of our discussion… it’s a total pipe dream,” says Verdin, who personally hopes to live to around 95. “My worry is that it makes us like a cult.”

Longevity doctors also tend to agree that, while longevity clinics are a pricey experience for the rich, they should eventually be accessible to everyone. “The clinics charge between $5,000 and $50,000 a year,” says Verdin. “It’s medicine for the rich, by the rich, which is something I deplore.”

At the December meeting, attendees were offered the chance to win prizes. Stick your name in a fish bowl, and get a chance to win a biological age test, or a scan at a private clinic. The total worth of the “ten to twelve” prizes on offer was €20,000, or around $21,600.

High price tags aren’t just an equality issue. They can also amplify the placebo effect. People tend to feel better when they’re given a sugar pill if they believe that candy might improve their symptoms. Paying for a treatment can strengthen the effect, says Nir Barzilai, who studies aging at Albert Einstein College of Medicine in New York and is scientific director of the American Federation for Aging Research. “You cannot afford to not be satisfied.” And research suggests expensive placebos are more effective than cheap ones.

But prices should come down in time. “Their vision is to start with high-paying clientele…but in the future look at how we can democratize this,” says Verdin, who advises multiple longevity clinics. And at least three public longevity clinics have opened in the last few years, in Singapore, Israel and the US. These clinics are all affiliated with public hospitals, and the costs to patients are much lower than they are for those who visit private clinics, say the doctors who direct them. These clinics are also all running clinical trials of potential longevity treatments.

The healthy longevity clinic at the Mayo Clinic in Rochester is the first public longevity clinic in the US. Since the clinic opened in July last year, doctors have seen around 100 patients aged between 35 and 81, says Bonnes, the clinic’s medical director.

Some want to maintain their health; others want help managing a disease. Still others have been referred by their doctor because they have already embarked on a longevity regimen, but are taking things too far, says Bonnes.

“Certain supplements that they’re taking may interact with other medications or things that they’re on,” she says. “Taking 20 supplements may not be helpful.” And some who are limiting their calorie intake can have eating disorders, she says. “We don’t necessarily know what’s really going to help, but if we can at least avoid harm, that is a big step in the right direction.”

Maier envisions healthy longevity medicine starting out in a similar hospital outpatient setting before eventually moving to GP care, just as we’ve seen asthma, for example, move from specialist to GP-led care over time. “Let’s define the protocol and then give it, in a decade, to the GP level,” she says.

In the meantime, Barzilai and his colleagues are “trying to make the field responsible,” he says. “There’s a lot of longevity doctors out there, and a lot of them… I don’t know what [they’re doing],” he says. “We have to educate longevity doctors—we tell them what we know, but more importantly, what they don’t know.”

The growing demand for longevity treatments should be met with credible, evidence-based medicine, says Maier. “We have to come together with regulators and ethical committees,” she says.

“There is a consumer drive which cannot be stopped anymore,” she says. “This is a very fragile phase.”

Harvard has halted its long-planned atmospheric geoengineering experiment

18 March 2024 at 09:00

Harvard researchers have ceased a long-running effort to conduct a small geoengineering experiment in the stratosphere, following repeated delays and public criticism.

In a university statement released on March 18, Frank Keutsch, the principal investigator on the project, said he is “no longer pursuing the experiment.”

The basic concept behind solar geoengineering is that the world might be able to counteract global warming by spraying tiny particles in the atmosphere that could scatter sunlight. 

The plan for the Harvard experiments was to launch a high-altitude balloon, equipped with propellers and sensors, that could release a few kilograms of calcium carbonate, sulfuric acid or other materials high above the planet. It would then turn around and fly through the plume to measure how widely the particles disperse, how much sunlight they reflect and other variables. The aircraft will now be repurposed for stratospheric research unrelated to solar geoengineering, according to the statement.

The vast majority of solar geoengineering research to date has been carried out in labs or computer models. The so-called stratospheric controlled perturbation experiment (SCoPEx) was expected to be the first such scientific effort conducted in the stratosphere. But it proved controversial from the start and, in the end, others may have beaten them across the line of deliberately releasing reflective materials into that layer of the atmosphere. (The stratosphere stretches from approximately 10 to 50 kilometers above the ground.) 

Last spring, one of the main scientists on the project, David Keith, relocated to the University of Chicago, where he is leading the Climate Systems Engineering initiative. The new research group will explore various approaches to solar geoengineering, as well as carbon dioxide removal and regional climate interventions, such as efforts to shore up glaciers. 

That summer, the research team informed its advisory committee that it had “suspended work” on the experiment. But it stayed in limbo for months. No final decision on the project’s fate had been made as of early October, Harvard professor Daniel Schrag, who serves on the advisory committee of the university’s broader Solar Geoengineering Research Program, told MIT Technology Review at the time.

Proponents of solar geoengineering research argue we should investigate the concept because it may significantly reduce the dangers of climate change. Further research could help scientists better understand the potential benefits, risks and tradeoffs between various approaches. 

But critics argue that even studying the possibility of solar geoengineering eases the societal pressure to cut greenhouse gas emissions. They also fear such research could create a slippery slope that increases the odds that nations or rogue actors will one day deploy it, despite the possibility of dangerous side-effects, including decreasing precipitation and agricultural output in some parts of the world.

Keith and other scientists laid out the blueprint of the experiment in a paper a decade ago. Then in 2017, he and Keutsch announced they hoped to carry it out, by launching balloons from a site in Tucson, Arizona as early as the following year.

But the project switched locations several times. Most recently, the team hoped to launch a balloon to test out the aircraft’s hardware from the Esrange Space Center in Kiruna, Sweden in the summer of 2021. But those plans were canceled on the recommendation of the project’s advisory committee, which determined the researchers should hold discussions with the public ahead of any flights. The effort was also heavily criticized by the Saami Council, which represents the indigenous Saami peoples’ groups in Sweden and neighboring regions, as well as environmental groups and other organizations, who argued it’s too dangerous a tool to use. 

Harvard professor Frank Keutsch, principal investigator of SCoPEx.
ELIZA GRINNELL, HARVARD SCHOOL OF ENGINEERING AND APPLIED SCIENCE

Solar geoengineering “is a technology that entails risks of catastrophic consequences, including the impact of uncontrolled termination, and irreversible sociopolitical effects that could compromise the world’s necessary efforts to achieve zero-carbon societies,” the group wrote in a letter to the advisory committee. “There are therefore no acceptable reasons for allowing the SCoPEx project to be conducted either in Sweden or elsewhere.”

When asked why he decided to stop work on the experiment, and if it had anything to do with the public pushback or delays, Keutsch replied via email that he “learned important lessons about governance and engagement throughout the course of this project.”

“The field of [solar radiation management] has undergone a significant transformation in the last few years, expanding the community and opening new doors for research and collaboration,” he added. “I felt that it was time to focus on other innovative research avenues in the incredibly important field of SRM that promise impactful results.”

Amid the delays to the Harvard project, other groups have forged ahead with their own geoengineering-related efforts. The controversial venture-backed startup, Make Sunsets, has repeatedly launched weather balloons filled with a few grams of sulfur dioxide that it claims likely burst in the stratosphere. Meanwhile, an independent researcher in the UK, Andrew Lockley, says he carried out several balloon launches, including a September 2022 flight that burst about 15 miles above the Earth and could have released around 400 grams of sulfur dioxide.

Despite the public controversy, the SCoPEx researchers earned high marks among some in the field for striving to carry out the field effort in a small-scale, controlled, transparent way, setting down clear research objectives and creating an independent advisory committee to review the proposals. 

Gernot Wagner, a climate economist at Columbia Business School and the former executive director of Harvard’s Solar Geoengineering Research Program, said in an email that the cancellation of the project was “unfortunate,” as it had taken on larger significance in the field. 

He stressed that the effort “widened the operating space for other, younger researchers to look into this important topic.” In addition, by publishing the plans in a peer-reviewed journal and operating transparently, the group “set a standard of sorts for responsible research in this area,” he added.

“Responsible researchers deciding not to conduct this kind of research, meanwhile, gives ample room for irresponsible actors with all sorts of crazy ideas,” Wagner said.

Harvard will continue to study geoengineering through the Solar Geoengineering Research Program, a multidisciplinary research effort set up in 2017 with funding from Microsoft cofounder Bill Gates, the Hewlett Foundation, the Alfred P. Sloan Foundation and other organizations and individuals. Other current or former projects there include a lab study of other materials that could potentially be used for solar geoengineering and an effort to identify and address some of the larger challenges in governing such tools. 

Also on Monday, the project’s advisory committee released a report to highlight the approach it developed to oversee the project and the key lessons learned, in the hope of informing future geoengineering research experiments. It stressed the need to engage with the public early on, to listen to their concerns, and to develop a plan to respond to them.

The Download: legitimizing longevity science, and Harvard’s geoengineering U-turn

18 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The quest to legitimize longevity medicine

On a bright chilly day last December, a crowd of doctors and scientists gathered at a research institute atop a hill in Novato, California. Their goal is to help people add years to their lifespans and live those extra years in good health. But the meeting’s participants had another goal as well: to be recognized as a credible medical field.

For too long, modern medicine has focused on treating disease rather than preventing it, they say. They believe that it’s time to move from reactive healthcare to proactive healthcare. And to do so in a credible way—by setting “gold standards” and medical guidelines for the field. These scientists and clinicians see themselves spearheading a revolution in medicine.

But proponents recognize the challenges ahead. Clinicians disagree on how they should assess and treat aging. And without standards and guidelines, there is a real risk that some clinics could end up not only failing to serve their clients, but potentially harming them. Read the full story.

—Jessica Hamzelou

Harvard halts its long-planned atmospheric geoengineering experiment

Harvard researchers have ceased a long-running effort to conduct a small geoengineering experiment in the stratosphere, following repeated delays and public criticism.

The basic concept behind solar geoengineering is that the world might be able to counteract global warming by spraying tiny particles in the atmosphere that could scatter sunlight. Proponents of solar geoengineering research argue we should investigate the concept because it may significantly reduce the dangers of climate change.

But critics argue that even studying the possibility of solar geoengineering eases the societal pressure to cut greenhouse gas emissions. They also fear such research could create a slippery slope that increases the odds that nations or rogue actors will one day deploy it, despite the possibility of dangerous side-effects. Read the full story.

—James Temple

This self-driving startup is using generative AI to predict traffic

The news: Self-driving company Waabi is using a generative AI model to help predict the movement of vehicles. The new system was trained on troves of data from lidar sensors, which use light to sense how far away objects are.

How it works: If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future. 

Why it matters: While autonomous driving has long relied on machine learning to plan routes and detect objects, some companies and researchers are now betting that generative AI — models that take in data of their surroundings and generate predictions — will help bring autonomy to the next stage. Read the full story.

—James O’Donnell

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Biden administration’s social media battle has reached the Supreme Court
Justices will hear arguments over whether officials violated the First Amendment when they told platforms to remove alleged misinformation. (The Hill)
+ It highlights the difficulties in defining free speech in the internet age. (NYT $)
+ What constitutes censorship is in the eye of the beholder. (WP $)

2 SpaceX is building a spy satellite network for US intelligence
And China isn’t happy about it. (Reuters)
+ Chinese automakers are equipping electric cars with camera drones. (Wired $)

3 Apple is facing an AirTags stalking lawsuit
The company’s bid to have the claims overturned was dismissed. (Bloomberg $)
+ Google is failing to enforce its own ban on ads for stalkerware. (MIT Technology Review)

4 How a county in South Carolina is waging a war to connect rural America
Broadband providers are reluctant to lay fiber optic cable in “unprofitable areas.” (The Guardian)

5 Ukraine is convinced that US satellite imagery is guiding Russian missiles
Its military believes Russia’s strikes are too precise to be random. (The Atlantic $)
+ It’s shockingly easy to buy sensitive data about US military personnel. (MIT Technology Review)

6 Sam Bankman-Fried is facing up to 110 years in prison
But a sentence between 40 and 50 years is more likely. (NYT $)

7 AI is getting uncannily good at creating pro-level songs
Startup Suno’s model works in tandem with ChatGPT to create songs indistinguishable from human creations. (Rolling Stone $)
+ Why is Slack’s hold music so darn catchy? (Wired $)
+ These impossible instruments could change the future of music. (MIT Technology Review)

8 An airplane’s Wi-Fi is generally pretty safe ✈
But there are extra-cautious steps you can take. (WSJ $)

9 Gen Z is over quiet quitting
Younger workers are quitting their jobs loudly, and in front of an online audience. (FT $)
+ Keynes was wrong. Gen Z will have it worse. (MIT Technology Review)

10 Never trust AI’s assertion that a mushroom is safe to eat 🍄
Mushroom identification apps just aren’t reliable enough—so don’t risk finding out the hard way. (WP $)

Quote of the day

“I simply swiped right on individuals in the industry I aspire to join.”

—Jade Liang, a master’s student in Shanghai, tells NBC News why China’s increasingly tough labor market is driving the country’s young jobseekers to an unusual hiring avenue: dating apps.

The big story

After 25 years of hype, embryonic stem cells are still waiting for their moment​

August 2023

In 1998, researchers isolated powerful stem cells from human embryos. It was a breakthrough, since these cells are the starting point for human bodies and have the capacity to turn into any other type of cell—heart cells, neurons, you name it.

National Geographic would later summarize the incredible promise: “the dream is to launch a medical revolution in which ailing organs and tissues might be repaired” with living replacements. It was the dawn of a new era. A holy grail. Pick your favorite cliché—they all got airtime.

Yet today, more than two decades later, there are no treatments on the market based on these cells. Not one. Our biotech editor Antonio Regalado set out to investigate why, and when that might change. Here’s what he discovered.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I like the look of this humongous blueberry.
+ This Reddit community for submitting photos of yourself caught unawares by delivery drivers is very funny.
+ This beautifully detailed Mario cookie is a work of art.
+ Belgium’s new soccer away kit is a fitting tribute to the one and only Tintin.

How AI taught Cassie the two-legged robot to run and jump

18 March 2024 at 10:00

If you’ve watched Boston Dynamics’ slick videos of robots running, jumping and doing parkour, you might have the impression robots have learned to be amazingly agile. In fact, these robots are still coded by hand, and would struggle to deal with new obstacles they haven’t encountered before.

However, a new method of teaching robots to move could help to deal with new scenarios, through trial and error—just as humans learn and adapt to unpredictable events.  

Researchers used an AI technique called reinforcement learning to help a two-legged robot nicknamed Cassie to run 400 meters, over varying terrains, and execute standing long jumps and high jumps, without being trained explicitly on each movement. Reinforcement learning works by rewarding or penalizing an AI as it tries to carry out an objective. In this case, the approach taught the robot to generalize and respond in new scenarios, instead of freezing like its predecessors may have done. 

“We wanted to push the limits of robot agility,” says Zhongyu Li, a PhD student at University of California, Berkeley, who worked on the project, which has not yet been peer-reviewed. “The high-level goal was to teach the robot to learn how to do all kinds of dynamic motions the way a human does.”

The team used a simulation to train Cassie, an approach that dramatically cuts the time it takes the robot to learn—from years to weeks—and enables it to perform those same skills in the real world without further fine-tuning.

Firstly, they trained the neural network that controlled Cassie to master a simple skill from scratch, such as jumping on the spot, walking forward, or running forward without toppling over. It was taught by being encouraged to mimic motions it was shown, which included motion capture data collected from a human and animations demonstrating the desired movement.

After the first stage was complete, the team presented the model with new commands encouraging the robot to perform tasks using its new movement skills. Once it became proficient at performing the new tasks in a simulated environment, they then diversified the tasks it had been trained on through a method called task randomization. 

This makes the robot much more prepared for unexpected scenarios. For example, the robot was able to maintain a steady running gait while being pulled sideways by a leash. “We allowed the robot to utilize the history of what it’s observed and adapt quickly to the real world,” says Li.
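To make the reward-and-penalty idea concrete, here is a minimal sketch of what such a training loop can look like. The toy one-dimensional walker, the reward terms, and the random-search update below are illustrative assumptions, not the Berkeley team’s actual code; real systems like Cassie’s controller rely on far richer simulators and gradient-based policy optimization.

```python
# Toy illustration of reinforcement learning's reward/penalty loop.
# The 1-D "walker" physics, the reward terms, and the random-search
# update are stand-ins for illustration only.
import numpy as np

def rollout(weights, steps=200):
    """Run one episode of a toy walker and return its total reward."""
    velocity, tilt = 0.0, 0.0
    total_reward = 0.0
    for _ in range(steps):
        obs = np.array([velocity, tilt])
        action = float(np.tanh(obs @ weights))       # policy: a tiny linear controller
        velocity += 0.1 * action - 0.02 * velocity   # pushing harder moves faster...
        tilt += 0.05 * action - 0.1 * tilt           # ...but also risks tipping over
        total_reward += velocity                     # reward forward progress
        if abs(tilt) > 1.0:                          # penalize falling
            total_reward -= 10.0
            break
    return total_reward

rng = np.random.default_rng(0)
best_weights = rng.normal(size=2)
best_reward = rollout(best_weights)
for _ in range(500):                                 # "training": keep policy tweaks that earn more reward
    candidate = best_weights + 0.1 * rng.normal(size=2)
    reward = rollout(candidate)
    if reward > best_reward:
        best_weights, best_reward = candidate, reward
print(f"best reward after training: {best_reward:.1f}")
```

The essential pattern is the same as in the Cassie work: try an action, measure how much reward it earned, and keep the policy changes that earn more.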

Cassie completed a 400-meter run in two minutes and 34 seconds, then jumped 1.4 meters in the long jump without needing additional training.

The researchers are now planning on studying how this kind of technique could be used to train robots equipped with on-board cameras. This will be more challenging than completing actions blind, adds Alan Fern, a professor of computer science at Oregon State University who helped to develop the Cassie robot but was not involved with this project.

“The next major step for the field is humanoid robots that do real work, plan out activities, and actually interact with the physical world in ways that are not just interactions between feet and the ground,” he says.

The AI Act is done. Here’s what will (and won’t) change

19 March 2024 at 07:17

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s official. After three years, the AI Act, the EU’s new sweeping AI law, jumped through its final bureaucratic hoop last week when the European Parliament voted to approve it. (You can catch up on the five main things you need to know about the AI Act with this story I wrote last year.) 

This also feels like the end of an era for me personally: I was the first reporter to get the scoop on an early draft of the AI Act in 2021, and have followed the ensuing lobbying circus closely ever since. 

But the reality is that the hard work starts now. The law will enter into force in May, and people living in the EU will start seeing changes by the end of the year. Regulators will need to get set up in order to enforce the law properly, and companies will have up to three years to comply with the law.

Here’s what will (and won’t) change:

1. Some AI uses will get banned later this year

The Act places restrictions on AI use cases that pose a high risk to people’s fundamental rights, such as in healthcare, education, and policing. These will be outlawed by the end of the year. 

It also bans some uses that are deemed to pose an “unacceptable risk.” They include some pretty out-there and ambiguous use cases, such as AI systems that deploy “subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making,” or exploit vulnerable people. The AI Act also bans systems that infer sensitive characteristics such as someone’s political opinions or sexual orientation, and the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping the internet à la Clearview AI will also be outlawed. 

There are some pretty huge caveats, however. Law enforcement agencies are still allowed to use sensitive biometric data, as well as facial recognition software in public places to fight serious crime, such as terrorism or kidnappings. Some civil rights organizations, such as digital rights organization Access Now, have called the AI Act a “failure for human rights” because it did not ban controversial AI use cases such as facial recognition outright. And while companies and schools are not allowed to use software that claims to recognize people’s emotions, they can if it’s for medical or safety reasons.

2. It will be more obvious when you’re interacting with an AI system

Tech companies will be required to label deepfakes and AI-generated content and notify people when they are interacting with a chatbot or other AI system. The AI Act will also require companies to develop AI-generated media in a way that makes it possible to detect. This is promising news in the fight against misinformation, and will give research around watermarking and content provenance a big boost. 

However, this is all easier said than done, and research lags far behind what the regulation requires. Watermarks are still an experimental technology and easy to tamper with. It is still difficult to reliably detect AI-generated content. Some efforts show promise, such as the C2PA, an open-source internet protocol, but far more work is needed to make provenance techniques reliable, and to build an industry-wide standard. 

3. Citizens can complain if they have been harmed by an AI

The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement (and they are hiring). Thanks to the AI Act, citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one, and can receive explanations of why those systems made the decisions they did. It’s an important first step toward giving people more agency in an increasingly automated world. However, this will require citizens to have a decent level of AI literacy, and to be aware of how algorithmic harms happen. For most people, these are still very foreign and abstract concepts.

4. AI companies will need to be more transparent

Most AI uses will not require compliance with the AI Act. It’s only AI companies developing technologies in “high risk” sectors, such as critical infrastructure or healthcare, that will have new obligations when the Act fully comes into force in three years. These include better data governance, ensuring human oversight and assessing how these systems will affect people’s rights.

AI companies that are developing “general purpose AI models,” such as language models, will also need to create and keep technical documentation showing how they built the model and how they respect copyright law, and to publish a publicly available summary of the training data that went into the model.

This is a big change from the current status quo, where tech companies are secretive about the data that went into their models, and will require an overhaul of the AI sector’s messy data management practices.

The companies with the most powerful AI models, such as GPT-4 and Gemini, will face more onerous requirements, such as having to perform model evaluations, risk assessments, and mitigations, ensure cybersecurity protection, and report any incidents where the AI system failed. Companies that fail to comply will face huge fines, or their products could be banned from the EU.

It’s also worth noting that free open-source AI models that share every detail of how the model was built, including the model’s architecture, parameters, and weights, are exempt from many of the obligations of the AI Act.


Now read the rest of The Algorithm

Deeper Learning

Africa’s push to regulate AI starts now

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that Nigeria, Ghana, Kenya, and South Africa alone could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools. Now the African Union—made up of 55 member nations—is trying to work out how to develop and regulate this emerging technology. 

It’s not going to be easy: If African countries don’t develop their own regulatory frameworks to protect citizens from the technology’s misuse, some experts worry that Africans will be hurt in the process. But if these countries don’t also find a way to harness AI’s benefits, others fear their economies could be left behind. (Read more from Abdullahi Tsanni.) 

Bits and Bytes

An AI that can play Goat Simulator is a step toward more useful machines
A new AI agent from Google DeepMind can play different games, including ones it has never seen before such as Goat Simulator 3, a fun action game with exaggerated physics. It’s a step toward more generalized AI that can transfer skills across multiple environments. (MIT Technology Review)

This self-driving startup is using generative AI to predict traffic
Waabi says its new model can anticipate how pedestrians, trucks, and bicyclists move using lidar data. If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future. (MIT Technology Review)

LLMs become more covertly racist with human intervention
It’s long been clear that large language models like ChatGPT absorb racist views from the millions of pages of the internet they are trained on. Developers have responded by trying to make them less toxic. But new research suggests that those efforts, especially as models get larger, are only curbing racist views that are overt, while letting more covert stereotypes grow stronger and better hidden. (MIT Technology Review)

Let’s not make the same mistakes with AI that we made with social media
Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies, argue Nathan E. Sanders and Bruce Schneier. (MIT Technology Review)

OpenAI’s CTO Mira Murati fumbled when asked about training data for Sora
In this interview with the Wall Street Journal, the journalist asks Murati whether OpenAI’s new video-generation AI system, Sora, was trained on videos from YouTube. Murati says she is not sure, which is an embarrassing answer from someone who should really know. OpenAI has been hit with copyright lawsuits about the data used to train its other AI models, and I would not be surprised if video was its next legal headache. (Wall Street Journal)

Among the AI doomsayers
I really enjoyed this piece. Writer Andrew Marantz spent time with people who fear that AI poses an existential risk to humanity, and tried to get under their skin. The details in this story are both hilarious and juicy—and raise questions about who we should be listening to when it comes to AI’s harms. (The New Yorker)

The Download: new AI regulations, and a running robot

19 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The AI Act is done. Here’s what will (and won’t) change

After three years, the AI Act, the EU’s new sweeping AI law, jumped through its final bureaucratic hoop last week when the European Parliament voted to approve it.

But the reality is that the hard work starts now. The law will enter into force in May, and people living in the EU will start seeing changes by the end of the year. Regulators will need to get set up in order to enforce the law properly, and companies will have up to three years to comply with the law.

Here’s what you need to know about what will (and crucially won’t) change after then—from the types of AI uses that will be banned, to a new era of AI transparency. Read the full story.

—Melissa Heikkilä

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

To read more about the AI regulations, take a look at:

+ Five things you need to know about the EU’s new AI Act. Why the new rules will effectively turn the EU into the world’s AI police. Read the full story.

+ Here’s why it was such a difficult Act for the EU’s governing bodies to agree on.

+ Four lessons from 2023 that tell us where AI regulation is going this year—and why it matters.

+ How judges rather than politicians could help to dictate AI rules in America.

How AI taught Cassie the two-legged robot to run and jump

If you’ve watched Boston Dynamics’ slick videos of robots running, jumping and doing parkour, you might have the impression robots have learned to be amazingly agile. In fact, these robots are still coded by hand, and would struggle to deal with new obstacles they haven’t encountered before.

However, a new method of teaching robots to move could help to deal with new scenarios, through trial and error—just as humans learn and adapt to unpredictable events.

Researchers used an AI technique called reinforcement learning to help a two-legged robot nicknamed Cassie to run 400 meters, over varying terrains, and execute standing long jumps and high jumps, without being trained explicitly on each movement. Their approach taught the robot to generalize and respond in new scenarios, instead of freezing like its predecessors may have done. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Nvidia has unveiled a slew of AI chips
They’re faster, larger, and a lot more powerful. (WSJ $)
+ A quick primer on why these chips matter so much. (Bloomberg $)
+ The company plans on making itself integral to the future of autonomous cars, too. (Reuters)
+ It’s the hottest stock in town. (WP $)

2 Meta has offered to slash the price of its ad-free subscription service
In a bid to appease privacy regulators in Europe. (Reuters)

3 We’re edging closer to a global cybersecurity standard for smart home tech
Not all gadgets are equally secure. A universal standard could help. (The Verge)

4 Carmaker Fisker has paused making EVs
Things aren’t looking too good for the embattled company—and money is tight. (Wired $)
+ Why the world’s biggest EV maker is getting into shipping. (MIT Technology Review)

5 No one knows why electroconvulsive therapy works
But new research suggests that zapping a brain with electricity may help to restore balance between excitation and inhibition. (Quanta Magazine)
+ Here’s how personalized brain stimulation could treat depression. (MIT Technology Review)

6 How generative AI is warping Google’s search results
Its Search Generative Experience is still working out what to prioritize. (Insider $)
+ We are hurtling toward a glitchy, spammy, scammy, AI-powered internet. (MIT Technology Review)

7 Scientists have created a synthetic blood-thinning drug
The most common version, called heparin, is traditionally made using pig intestines. (New Scientist $)
+ AI is dreaming up drugs that no one has ever seen. (MIT Technology Review)

8 Gig workers don’t get proper time to rest between jobs
So this is what they do instead. (Rest of World)
+ What TikTok can learn from Uber. (Slate $)

9 AI-generated waffle is cropping up in academic journals
Certain phrases are a dead giveaway to ChatGPT’s involvement. (404 Media)
+ YouTube has added an AI content labeling tool to its services. (The Verge)

10 Sony can’t shift its newest VR headset
It’s got a massive backlog of units, because they just aren’t selling. (Bloomberg $)
+ VR headsets can be hacked with an Inception-style attack. (MIT Technology Review)

Quote of the day

“If you think of the internet ecosystem as a colander with a million holes in it, I don’t know why they think plugging one of those tiny holes is going to fix these problems.”

— Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, tells Bloomberg why the US government’s obsession with banning TikTok is misdirected.

The big story

One city’s fight to solve its sewage problem with sensors

April 2021

In the city of South Bend, Indiana, wastewater from people’s kitchens, sinks, washing machines, and toilets flows through 35 neighborhood sewer lines. On good days, just before each line ends, a vertical throttle pipe diverts the sewage into an interceptor tube, which carries it to a treatment plant where solid pollutants and bacteria are filtered out.

As in many American cities, those pipes are combined with storm drains, which can fill rivers and lakes with toxic sludge when heavy rains or melted snow overwhelms them, endangering wildlife and drinking water supplies. But city officials have a plan to make the city’s aging sewers significantly smarter. Read the full story.

—Andrew Zaleski

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These kind wildlife workers in Virginia really went above and beyond to look after an orphaned fox kit.
+ This version of Smells Like Teen Spirit is banging.
+ Techno, techno, techno! Why Berlin’s clubbing culture has been placed under Unesco protection.
+ Enjoy a bit of John Denver this morning, for no reason other than it’s a wonderful song.

Google DeepMind’s new AI assistant helps elite soccer coaches get even better

19 March 2024 at 12:00

Soccer teams are always looking to get an edge over their rivals. Whether it’s studying players’ susceptibility to injury, or opponents’ tactics—top clubs look at reams of data to give them the best shot of winning. 

They might want to add a new AI assistant developed by Google DeepMind to their arsenal. It can suggest tactics for soccer set-pieces that are even better than those created by professional club coaches. 

The system, called TacticAI, works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the biggest soccer clubs in the world. 

Corner kicks are awarded to an attacking team when the ball passes over the goal line after touching a player on the defending team. In a sport as free-flowing and unpredictable as soccer, corners—like free kicks and penalties—are rare instances in the game when teams can try out pre-planned plays.

TacticAI uses predictive and generative AI models to convert each corner kick scenario—such as a receiver successfully scoring a goal, or a rival defender intercepting the ball and returning it to their team—into a graph, with the data from each player becoming a node on the graph, before modeling the interactions between the nodes. The work was published in Nature Communications today.

Using this data, the model provides recommendations about where to position players during a corner to give them, for example, the best shot at scoring a goal, or the best combination of players to get up front. It can also try to predict the outcomes of a corner, including whether a shot will take place, or which player is most likely to touch the ball first.
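To see roughly how a corner kick becomes a graph, here is a minimal sketch of the idea: each player is a node carrying a handful of features, every pair of players is connected by an edge, and a round of “message passing” lets each node absorb information about the players around it. The feature choices, the fully connected edges, and the simple averaging update are illustrative assumptions, not DeepMind’s actual model, which uses learned graph neural network layers trained on real corner-kick data.

```python
# A minimal sketch of the graph idea behind TacticAI: each player becomes
# a node with a feature vector, and one round of "message passing" mixes
# in information about the surrounding players. All numbers and the
# averaging update are illustrative assumptions, not DeepMind's model.
import numpy as np

# node features: [x position, y position, x velocity, y velocity, team flag]
players = np.array([
    [0.95, 0.50, -0.2, 0.0, 1.0],   # attacking corner taker
    [0.88, 0.45,  0.1, 0.1, 1.0],   # attacking runner
    [0.90, 0.48, -0.1, 0.0, 0.0],   # defender marking the runner
    [0.99, 0.50,  0.0, 0.0, 0.0],   # goalkeeper
])

def fully_connected_edges(n):
    """Every player interacts with every other player."""
    return [(i, j) for i in range(n) for j in range(n) if i != j]

def message_passing_step(nodes, edges):
    """Update each node by mixing in the average of its neighbours' features."""
    updated = nodes.copy()
    for i in range(len(nodes)):
        neighbours = [nodes[src] for (src, dst) in edges if dst == i]
        updated[i] = 0.5 * nodes[i] + 0.5 * np.mean(neighbours, axis=0)
    return updated

edges = fully_connected_edges(len(players))
embeddings = message_passing_step(players, edges)
# A prediction head trained on thousands of real corners would read these
# embeddings to estimate, say, which player touches the ball first.
print(embeddings.round(2))
```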

The main benefit is that the AI assistant reduces the workload of the coaches, says Ondřej Hubáček, an analyst at the sports data firm Ematiq who specializes in predictive models, and who did not work on the project. “An AI system can go through the data quickly and point out errors a team is making—I think that’s the added value you can get from AI assistants,” he says. 

To assess TacticAI’s suggestions, Google DeepMind presented them to five football experts: three data scientists, one video analyst, and one coaching assistant, all of whom work at Liverpool FC. Not only did these experts struggle to distinguish TacticAI’s suggestions from real game-play scenarios, they also favored the system’s strategies over existing tactics 90% of the time.

These findings suggest that TacticAI’s strategies could be useful for human coaches in real-life games, says Petar Veličković, a staff research scientist at Google DeepMind who worked on the project. “Top clubs are always searching for an edge, and I think our results indicate that techniques like these are likely going to become a part of modern football going forward,” he says.

TacticAI’s powers of prediction aren’t just limited to corner kicks either—the same method could be easily applied to other set pieces, general play throughout a match, or even other sports entirely, such as American football, hockey, or basketball, says Veličković.

“As long as there’s a team-based sport where you believe that modeling relationships between players will be useful and you have a source of data, it’s applicable,” he says.

Chinese platforms are cracking down on influencers selling AI lessons

By: Zeyi Yang
20 March 2024 at 06:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Over the last year, a few Chinese influencers have made millions of dollars peddling short video lessons on AI, profiting off people’s fears about the as-yet-unclear impact of the new technology on their livelihoods. 

But the platforms they thrived on have started to turn against them. Just a few weeks ago, WeChat and Douyin began suspending, removing, or restricting their accounts. While influencers on these platforms have been turning people’s anxiety into traffic and profits for a long time, the latest actions show how Chinese social platforms are trying to contain the damage before it goes too far. 

The backlash started last month, as students angrily complained on social media about the superficiality of the courses, saying that they fell far short of the educational promises made about them. 

“I paid 198 RMB ($27.50), and the first three courses were void of actual content. It’s all about urging people to keep paying 1980 RMB for the next course,” Bessie, a Chinese user of the social media site Xiaohongshu, posted about her experience. The courses were created by Li Yizhou, a serial entrepreneur turned startup mentor who, despite having no background in AI, pivoted to posting AI explainers and drumming up anxiety about the technology after the release of ChatGPT in November 2022.

Li sold his entry-level course package for $27.50, and an advanced one for 10 times that price. The cheaper offering contained 40 lesson videos, most of which were around 10 minutes long. Li’s course consisted of tutorials on specific generative AI tools, talks with Chinese AI company executives, and introductions to unrelated topics like how to manage your time more effectively.

His lessons were a huge commercial success. According to the social media data analysis site Feigua, they were sold over 250,000 times last year, which could have brought in over $6 million in revenue. 

Li is not the only influencer who, despite having no background in AI, saw a business opportunity to calm people’s AI anxieties with quick fixes. There’s also “Teacher He,” an influencer with over 7 million followers who until recently mostly talked about marketing and personal finance, and Zhang Shitong, also followed by millions, whose usual videos mix basic economics with sensational conspiracies like 9/11 denialism. These creators also offered beginner AI lessons at a similar price to Li’s.

In addition to quality complaints, buyers reported that it was hard to get a refund when they changed their mind. Bessie tells MIT Technology Review that she got a refund since she applied early, but others who applied for a refund more than a week after the purchase were denied. A Beijing-based AI community website has also accused Li of appropriating their free user-contributed templates and selling them for profit as part of his course offering. 

By late February, the platforms that hosted these video lessons began to heed the complaints. All of the classes by Li and other AI gurus have been removed from Chinese social media and e-commerce websites. Li hasn’t posted on any of his social media channels since he was suspended in late February. Other creators like “Teacher He” and Zhang Shitong have also been silent.

Li and “Teacher He” didn’t respond to a media inquiry sent by MIT Technology Review. But a customer representative working for Zhang Shitong said the team processes all refund requests within 12 hours and that it was the team’s own decision not to post anything for the past three weeks.

On Douyin, the Chinese version of TikTok, Li’s account, which used to have over 3 million followers, is now hidden from search results. WeChat Channels, another popular short-video platform, blocked Li and other similar creators from getting new followers in the last week of February. Other smaller platforms have also taken action. Zhishi Xingqiu, a Patreon-like platform that was used by many influencers to sell access to AI-focused communities, has now blocked the search for keywords like “AI,” “Li Yizhou,” or “Sora.”

But none of the platforms have specified which rules the gurus violated. While they may have overpromised with their marketing, it’s hard to say whether their activities really qualified as “scams.” Douyin and WeChat declined to comment on their decisions.

However, there are signs that the restrictions could be reversed. While Chinese social media platforms often permanently delete the accounts of users they believe are flouting rules, these AI course creators have kept their accounts on all platforms. On WeChat, after around two weeks of being blocked from receiving new followers, the creators quietly regained that ability in mid-March. On Douyin, Li’s account was hidden from in-app search results, but his past videos can still be found by going directly to his profile page. 

So far, the Chinese government has not directly addressed the phenomenon or given its official stance. The government has been reining in the livestreaming industry heavily in recent years to censor how influencers act and post, and Chinese platforms set their own rules accordingly, sometimes ahead of government orders, to show they are doing their part in content regulation.

Even though the creators and their lessons have been removed online, there are still plenty of Chinese people keen to access them. On social media, some people are now reselling pirated videos of Li’s courses through file sharing, likely without Li’s permission. Now, instead of $27.50, people can spend a few bucks to access the whole course package.

Do you think these AI gurus have crossed a line? Let me know your thoughts at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. The US House of Representatives voted overwhelmingly to pass a bill that would force ByteDance to either sell TikTok or see it banned in the US. Now it’s heading to the Senate, where there’s less urgency to pass it. (Associated Press)

2. While the TSMC chip plant in Arizona is delayed, the company’s other new plant in Japan is set to start mass production on schedule in the fourth quarter of 2024. (Wall Street Journal $)

3. Tesla is talking to countries like Thailand to prepare for a potential production expansion in Southeast Asia. But it will have to compete with Chinese EV companies like BYD, which currently accounts for over a quarter of the EVs sold in the region. (Reuters $)

4. An obscure Chinese e-commerce platform called Pandabuy is recruiting influencers to peddle counterfeit products on TikTok and Facebook. (Wired $)

5. The US and Chinese governments quietly renewed their bilateral deal on science and technology research for another six months. (Wall Street Journal $)

6. Chinese students and academics say they are increasingly being targeted at US airports when they enter the country. (Washington Post $)

7. As the Chinese population ages quickly, a tutoring industry for the elderly is thriving. (Reuters $)

Lost in translation

As the Chinese automobile industry moves fast toward battery-operated cars and electric motors, internal combustion engine technology is increasingly seen as a thing of the past. The Chinese publication Economic Observer talked to students who chose to study combustion engines out of their love for cars. It found that it’s a decision some now regret, as they’re finding it hard to land a job after graduation.

Engineering universities are recruiting experts who can teach students about car batteries, but the pace is not fast enough to keep up with the speed of the Chinese market. From January to July 2023, there was a 6% increase in job postings in the automotive industry in China, but an 18% increase in job postings in the EV industry. As a result, large numbers of combustion engineering students say they are being rejected by the auto industry. They either have to compete for the limited positions still available or find jobs outside the car industry.

One more thing

Youdao, a Chinese online dictionary app, recently started letting users upload their own pronunciations of English words to appear alongside the standard pronunciations in American or British accents. It soon became a vehicle for fun, with people competing to insert jokes, cultural memes, viral TikTok soundbites, and dramatic acting as pronunciations. In a particularly amusing example, someone pronounces “constipation” as if they are actually experiencing it.

An AI-driven “factory of drugs” claims to have hit a big milestone

20 March 2024 at 06:30

Alex Zhavoronkov has been messing around with artificial intelligence for more than a decade. In 2016, the programmer and physicist was using AI to rank people by looks and sort through pictures of cats.

Now he says his company, Insilico Medicine, has created the first “true AI drug” that’s advanced to a test of whether it can cure a fatal lung condition in humans.

Zhavoronkov says his drug is special because AI software not only helped decide what target inside a cell to interact with, but also what the drug’s chemical structure should be.

Popular forms of AI can draw pictures and answer questions. But there’s a growing effort to get AI to dream up cures for awful diseases, too. That may be why Jensen Huang, president of Nvidia, which sells AI chips and servers, claimed in December that “digital biology” is going to be the “next amazing revolution” for AI. 

“This is going to be flat out one of the biggest ones ever,” he said. “For the very first time in human history, biology has the opportunity to be engineering, not science.”

The hope for AI is that software can point researchers toward new treatments they’d never have thought of on their own. Like a chatbot that can give an outline for a term paper, AI could speed the initial phases of discovering new treatments by coming up with proposals for what targets to hit with drugs, and what those drugs might look like.

Zhavoronkov says both approaches were used to find Insilico’s drug candidate, whose fast progress—it took 18 months for the compound to be synthesized and complete testing in animals—is a demonstration that AI can make drug discovery faster. “Of course, it’s due to AI,” he says.

Mushroom cloud

Starting about 10 years ago, biotech saw a mushroom cloud of new startups promising to use AI to speed up drug searches, including names like Recursion Pharmaceuticals and, more recently, Isomorphic Labs, a spin-out of Google’s DeepMind division.

Puffed up by prevailing hype around AI, these companies raised around $18 billion between 2012 and 2022, according to the Boston Consulting Group (BCG). Insilico, which remains private and has operations in Taiwan and China, is financed with more than $400 million from private equity firm Warburg Pincus and Facebook cofounder Eduardo Saverin, among others.

The problem they are solving, however, is an old one. A recent report estimated that the world’s top drug companies are spending $6 billion on research and development for every new drug that enters the market, partly because most candidate drugs end up flopping. And the process usually takes at least 10 years.

Whether AI can really make that drug quest more efficient is still up in the air. Another study by BCG, from 2022, determined that  “AI-native” biotechs (those which say AI is central to their research) were advancing an “impressive” wave of new drug ideas. The consultants counted 160 candidate chemicals being tested in cells or animals, and another 15 in early human tests. 

The large tally suggests that computer-generated drugs could become common. What BCG couldn’t determine was whether AI-enabled drugs were progressing more quickly than the conventional pace, even though they wrote that “one of the greatest hopes for AI-enabled drug discovery is … an acceleration of … timelines.” So far, there’s not enough data to say, since no AI drugs have completed the journey to approval.

What is true is that some computer-generated chemicals are selling for big figures. In 2022, a company called Nimbus sold a promising chemical to a Japanese drug giant for $4 billion. It had used computational approaches to design the compound, though not strictly AI (its software models the physics of how molecules bond together). And last year, Insilico sold a drug candidate initially proposed by AI to a larger company, Exelixis, for $80 million.

“It does show people are willing to pay a lot of money,” says Zhavoronkov. “Our job is to be a factory of drugs.”

24/7 CEO

As with any startup, the elbow grease put in by its founder may have something to do with the company’s results so far. Zhavoronkov, a Latvian and Canadian citizen who is co-CEO of the company, is a self-described “24/7” workaholic with a prolific record of scientific publications, and his company incessantly bombards journalists with press releases.

He finds time to write a blog at Forbes, often commenting on human life extension, which he describes as his ultimate interest. A recent post titled “The Kardashian of Longevity” explored the media presence of Bryan Johnson, an entrepreneur whose “open quest for personal longevity” included getting blood transfusions from his son.

Alex Zhavoronkov shows the scars on his arm left by donating tissue for longevity experiments.
ANTONIO REGALADO/MITTR

Zhavoronkov also has skin in the game. During an interview, he pulled up his sleeve to reveal numerous scars—punch-hole marks left by giving his tissue for the manufacture of stem cells. He waved toward his waist. More scars there, he indicated.

“My only goal in life is to extend healthy, productive longevity. I am not married and don’t have kids,” he says. “I just do this.”

Zhavoronkov has a track record of implementing cutting-edge AI methods as soon as they’re available. He started Insilico in 2014, shortly after AI started to achieve new breakthroughs in image recognition with so-called deep-learning models. The new approach blew away prior techniques for classifying images and excelled at tasks like finding cats in YouTube videos.

Zhavoronkov initially found notoriety—and some controversy—for AI apps that guessed people’s age and a program that ranked people by their looks. His beauty contest software, Beauty.AI, proved to be an early misstep into AI bias when it was criticized for picking few people with dark skin.

By 2016, though, his company was proposing a “generative” approach to imagining new drugs. Generative methods can create new data—like drawings, answers, or songs—based on examples they’ve been trained on, as is the case with Google’s Gemini app.  Given a biological target, such as a protein, Zhavoronkov says, Insilico’s software, called Chemistry42, takes about 72 hours to propose chemicals that can interact with it. That software is also for sale and is in use by several large drug companies, he says.

Generative drug

On March 8, Insilico published a paper in Nature Biotechnology describing a candidate drug for a lung disease, idiopathic pulmonary fibrosis. The article detailed how AI software both suggested a possible target (a protein called TNIK) and several chemicals that could interfere with it, one of which was then tested in cells, animals, and ultimately in humans in initial safety tests.

Some observers called the paper a comprehensive demonstration of how to develop a drug candidate using AI. “This really does, from soup to nuts, the whole thing,” Timothy Cernak, an assistant professor of medicinal chemistry at the University of Michigan, told the publication Chemical & Engineering News.

The drug has since advanced to Phase II trials in China and the US, which will seek initial evidence of whether it’s actually helpful to patients with the lung disease, whose causes remain mysterious and which leads to death within a few years.

While Zhavoronkov claims the chemical is the first true AI drug to advance that far, and the first from a “generative” AI, the nebulous definition of AI makes his claim impossible to affirm. This summer, CNBC host Joe Kernen noted that, in the past, many companies set out to rationalize drug design using computers. “I don’t know where we went over the tipping point,” said Kernen. “We’ve been using computers for how many years? And when did we cross over this step of calling it AI?”

For example, a covid-19 vaccine approved in South Korea, called Skycovione, is packaged inside a nanoparticle that was designed “from the ground up” by a computer, according to David Baker, a researcher at the University of Washington, where it was initially developed.  

Chris Gibson, CEO of Recursion Pharmaceuticals, also pushed back on Zhavoronkov’s claim, saying that AI has found its way into a number of drug quests that have advanced into Phase II, including five from his company, which has used AI to classify images of how cells respond to drugs. “This is one of many programs that have claimed to be ‘first’ over the last few years, depending on how you slice the use of AI,” he said on X. “AI can be used for many aspects of drug discovery.”

Some AI skeptics say coming up with candidate drugs isn’t the true bottleneck. That’s because the costliest setbacks often occur in later tests, if a drug doesn’t demonstrate benefits when tried on patients. And so far, AI is no guarantee against such failures. Last year, the UK-based biotech BenevolentAI laid off 180 people, half its staff, and cut back operations after its lead drug failed to help people with skin conditions. It had been touting an “AI-enabled drug discovery engine” that could predict “high confidence targets” and “improve the probability of clinical success.”

Now that he’s got a drug in human efficacy tests, Zhavoronkov agrees its origin in a computer probably won’t speed up what’s left of the journey. “It’s like a Tesla. The initial 0 to 60 is very fast, but after that you are moving at the speed of traffic,” he says. “And you can still fail.”

Zhavoronkov says his dream is for the drug program to keep advancing and demonstrate it can help lung patients, maybe even provide an antidote to the ravages of aging. “That is when you are a hero,” he says. “I don’t even want them to remember me for AI. I want to be remembered for the program.”

The Download: AI drugs, and how AI is improving soccer tactics

20 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A wave of drugs dreamed up by AI is on its way

Alex Zhavoronkov has been messing around with artificial intelligence for more than a decade. In 2016, the programmer and physicist was using AI to rank people by looks and sort through pictures of cats.

Now he says his company, Insilico Medicine, has created the first “true AI drug” that’s advanced to a test of whether it can cure a fatal lung condition in humans.

Popular forms of AI can draw pictures and answer questions. But there’s a growing effort to get AI to dream up cures for awful diseases, too. The problem they are solving, however, is an old one. Read the full story.

—Antonio Regalado

Google DeepMind’s new AI assistant helps elite soccer coaches get even better

The news: Soccer teams are always looking to get an edge over their rivals. They might want to add a new AI assistant developed by Google DeepMind to their arsenal. It can suggest tactics for soccer set-pieces that are even better than those created by professional club coaches.

How it works: The system, called TacticAI, works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the biggest soccer clubs in the world. It uses predictive and generative AI models to analyze each scenario and produce recommendations and predictive outcomes. Read the full story.

—Rhiannon Williams

Chinese platforms are cracking down on influencers selling AI lessons

Over the last year, a few Chinese influencers have made millions of dollars peddling short video lessons on AI, profiting off people’s fears about the as-yet-unclear impact of the new technology on their livelihoods. 

But the platforms they thrived on have started to turn against them. Just a few weeks ago, WeChat and Douyin began suspending, removing, or restricting their accounts. While influencers on these platforms have been turning people’s anxiety into traffic and profits for a long time, the latest actions show how Chinese social platforms are trying to contain the damage before it goes too far. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter giving you the inside track on all things happening in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DeepMind co-founder Mustafa Suleyman is joining Microsoft 
He’ll head up the company’s consumer arm, developing AI-infused products. (Bloomberg $)
+ Suleyman will leave his startup, Inflection AI, along with a load of its staff. (NYT $)
+ Inflection’s investors won’t be left out of pocket, though. (The Information $)
+ Check out our interview with Suleyman about why he thinks generative AI is just a phase. (MIT Technology Review)

2 China is sending a satellite to the far side of the moon
It’ll play a crucial role in the country’s bid to leapfrog the US in moon exploration. (Reuters)
+ Some scientists aren’t sure if exploring Mars is a wise investment. (Undark Magazine)

3 Anonymous career site Glassdoor exposed its users’ real names
Which, unsurprisingly, has upset users who posted honest reviews of their former workplaces. (Ars Technica)

4 The concrete industry has a major carbon problem
Now, emission-capturing formulas could make a difference. (Wired $)
+ The climate solution beneath your feet. (MIT Technology Review)

5 Artists who use AI are more productive
But, crucially, they’re less original. (New Scientist $)
+ This artist is dominating AI-generated art. And he’s not happy about it. (MIT Technology Review)

6 Saudi Arabia is poised to become an AI superpower
To the tune of a $40 billion investment fund. (NYT $)
+ We’re just five years away from artificial general intelligence, according to Nvidia CEO Jensen Huang, at least. (TechCrunch)
+ Google DeepMind wants to define what counts as artificial general intelligence. (MIT Technology Review)

7 Brace yourself—dynamic pricing is coming
Emboldened by Uber’s surge pricing model, other businesses want in. (Vox)

8 Indonesia’s ebike shops are dicing with danger 
They’re creating souped-up batteries that prioritize power over safety. (Rest of World)
+ Three things to love about batteries. (MIT Technology Review)

9 Did you just poke me on Facebook? 👉
The social network has quietly restored one of its weirdest features. (Insider $)

10 This unknown Swedish composer has racked up more Spotify plays than ABBA
Johan Röhr is the mastermind behind more than 650 different artists on the platform. (The Guardian)

Quote of the day

“It feels very self-centered. Everyone is like, ‘I’ve got somewhere to be, out of my way.’”

—Tamara Siemering, an actor who recently moved to Los Angeles, explains her shock at the city’s driving culture as it tries to embrace autonomous cars, the New York Times reports.

The big story

Inside the enigmatic minds of animals

October 2022

More than ever, we feel a duty and desire to extend empathy to our nonhuman neighbors. In the last three years, more than 30 countries have formally recognized other animals—including gorillas, lobsters, crows, and octopuses—as sentient beings.

A trio of books from Ed Yong, Jackie Higgins, and Philip Ball detail creatures’ rich inner worlds and capture what has led to these developments: a booming field of experimental research challenging the long-standing view that animals are neither conscious nor cognitively complex. Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ One for fans of The Strokes: a thread of cool references throughout their artwork.
+ A Japanese pig cafe sounds like a pretty relaxing place to hang out. 🐷
+ If Dolly Parton was Welsh, this is what Jolene would sound like.
+ A happy belated Nowruz to all those who celebrate!

New York City’s plan to stop e-bike battery fires

20 March 2024 at 11:00

Walk just a few blocks in New York City and you’ll likely spot an electric bike zipping by.

The vehicles have become increasingly popular in recent years, especially among delivery drivers, tens of thousands of whom weave through New York streets. But the e-bike influx has caused a wave of fires sparked by their batteries, some of them deadly.

Now, the city wants to fight those fires with battery swapping. A pilot program will provide a small number of delivery drivers with alternative options to power up their e-bikes, including swapping stations that supply fully charged batteries on demand. 

Proponents say the program could lay the groundwork for a new mode of powering small electric vehicles in the city, one that’s convenient and could reduce the risk of fires. But the road to fire safety will likely be long and winding given the sheer number of batteries we’re integrating into our daily lives, in e-bikes and beyond.

A swapping solution

The number of fires caused by batteries in New York City increased nearly ninefold between 2019 and 2023, according to reporting from The City. Concern over fires has been steadily growing, and in March 2023 Mayor Eric Adams announced a plan to address the problem that included regulations for e-bikes and their batteries, crackdowns on unsafe charging practices, and outreach for delivery drivers.

While batteries can catch fire for a variety of reasons, many incidents appear to have been caused by e-bike drivers charging their batteries in apartment buildings, including a February blaze that killed one person and injured 22.

The city’s most recent effort, designed to address charging, is a pilot program for delivery drivers who use e-bikes. For six months, 100 drivers will be matched with one of three startups that will provide a charging solution that doesn’t involve plugging in batteries in apartment buildings.

One of the startups, Swiftmile, is building fast charging stations that look like bike racks and can charge an e-bike battery within two hours. The other two participating companies, Popwheels and Swobbee, are proposing a different, even quicker solution: battery swapping. Instead of plugging in a battery and waiting for it to power up, a rider can swap out a dead battery for a fresh one.

Battery swapping is already being used for some electric vehicles, largely across Asia. Chinese automaker Nio operates a network of battery swapping stations that can equip a car with a fresh battery in just under three minutes. Gogoro, one of MIT Technology Review’s 2023 Climate Tech Companies to Watch, has a network of battery swapping stations for electric scooters that can accommodate more than 400,000 swaps each day.

The concept will need to be adjusted for New York and for delivery drivers, says Baruch Herzfeld, co-founder and CEO of Popwheels. “But if we get it right,” he says, “we think everybody in New York will be able to use light electric vehicles.”

Existing battery swap networks like Nio’s have mostly included a single company’s equipment, giving the manufacturer control over the vehicle, battery, and swapping equipment. That’s because one of the keys to making battery swapping work is fleet commonality—a base of many vehicles that can all use the same system.

Fortunately, delivery drivers have formed something of a de facto fleet in New York City, says David Hammer, co-founder and president of Popwheels. Roughly half of the city’s 60,000-plus delivery workers rely on e-bikes, according to city estimates. Many of them use bikes from a brand called Arrow, which include removable batteries.

Convenience is key for delivery drivers working on tight schedules. “For a lot of people, battery charging, battery swapping, it’s just technology. But for [delivery workers], it’s their livelihood,” says Irene Figueroa-Ortiz, a policy advisor at the NYC Department of Transportation.

For the New York pilot, Popwheels is building battery cabinets in several locations throughout the city that will include 16 charging slots for e-bike batteries. Riders will open a cabinet door using a smartphone app, plug in the used battery and take a fresh one from another slot. Based on the company’s modeling, each cabinet should be able to support constant use by 40 to 50 riders, Hammer says.
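To make that estimate concrete, here is a rough back-of-envelope sketch; the charge time, operating hours, and swaps per rider below are illustrative assumptions, not figures from Popwheels.

```python
# Back-of-envelope cabinet math (illustrative assumptions only).
SLOTS = 16                # charging slots per cabinet
CHARGE_HOURS = 3          # assumed hours to fully charge one battery
OPERATING_HOURS = 18      # assumed hours per day the cabinet sees traffic
SWAPS_PER_RIDER = 2       # assumed battery swaps per rider per working day

charged_per_day = SLOTS * (OPERATING_HOURS / CHARGE_HOURS)   # ~96 batteries
riders_supported = charged_per_day / SWAPS_PER_RIDER         # ~48 riders
print(round(riders_supported))   # lands in the 40-to-50 range Hammer describes
```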

“Maybe it leads to an even larger vision of battery swapping as a part of an urban future,” Hammer says. “But for now, it’s solving a very real and immediate problem that delivery workers have around how they can work a full day, and earn a reasonable living, and do it without having to put their lives at risk for battery fires.”

A growing problem

Lithium-ion batteries power products from laptops and cellphones to electric vehicles, including cars, trucks, and e-bikes. A major benefit of the battery chemistry is its energy density, or ability to pack a lot of energy into a small container. But all that stored energy can also be dangerous.

Batteries can catch fire during charging or use, and even while being stored. Generally, fires happen when temperatures around the battery rise to unsafe levels or when a physical problem in a battery causes a short circuit, allowing current to flow unchecked. These factors can set in motion a dangerous process called thermal runaway.

Most batteries include a battery management system to control charging, which prevents temperatures from spiking and sparking a fire. But if this system malfunctions or if a battery doesn’t include one, charging can lead to fires, says Ben Hoff, who leads fire safety engineering and hardware design at Popwheels.
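As a rough illustration of what such a system does, here is a minimal sketch of a charge-monitoring check; the thresholds are illustrative, not any manufacturer's actual specification or firmware.

```python
# A minimal illustration of the kind of check a battery management system
# runs while charging; the cutoff values here are assumptions.
MAX_SAFE_TEMP_C = 45.0     # assumed cutoff temperature
MAX_CELL_VOLTAGE = 4.2     # typical lithium-ion cell ceiling

def charge_decision(cell_temp_c, cell_voltage):
    """Return whether to keep charging, given the current sensor readings."""
    if cell_temp_c >= MAX_SAFE_TEMP_C or cell_voltage >= MAX_CELL_VOLTAGE:
        return "stop"      # cut current before conditions that invite thermal runaway
    return "charge"

print(charge_decision(cell_temp_c=30.0, cell_voltage=3.9))   # "charge"
print(charge_decision(cell_temp_c=52.0, cell_voltage=3.9))   # "stop"
```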

Some of the delivery drivers who attended a sign-up event for New York’s charging pilot program in late February cited safety as a reason they were looking for alternative solutions for their batteries. “Of course, I worry about that,” Jose Sarmiento, a longtime delivery worker, said at the event. “Even when I’m sleeping, I’m thinking about the battery.”  

Battery swapping could also be a key to safer electric transit, Popwheels’ Hammer says. The company has tight control over the batteries it provides drivers, and its monitoring systems include temperature sensors installed in the charging cabinets. Charging can be shut down immediately if a battery starts to overheat, and an aerosol fire suppression system can slow a fire if one does happen to start inside a cabinet.

The batteries Popwheels provides are also UL-certified, meaning they’re required to pass third-party safety tests. New York City banned the sale of uncertified batteries and e-bikes last year, but many drivers still use them, Hammer says.

Low-quality batteries are more likely to cause fires, a problem that can often be traced to the manufacturing process, says Michael Pecht, a professor at the University of Maryland who studies the reliability and safety of electronic devices.

Battery manufacturing facilities should be as clean as a medical operating room or a semiconductor facility, Pecht explains. Contamination from dust and dirt that wind up in batteries can create problems over time as charging and discharging a battery causes small physical changes. After enough charging cycles, even a tiny dust particle can lead to a short circuit that sparks a fire.

Low-quality manufacturing makes battery fires more likely, but it’s a daunting task to keep tight control over the huge number of cells being made each year. Large manufacturers can produce billions of batteries annually, making the solution to battery fires a complex one, Pecht says: “I think there’s a group who want an easy answer. To me, the answer is not that easy.”

New programs that provide well-manufactured batteries and tightly control charging could make a dent in safety concerns. But real progress will require quick and dramatic scale-up, alongside regulations and continual outreach to communities. 

Popwheels would need to install hundreds of its battery swapping cabinets to support a significant fraction of the city’s delivery drivers. The pilot will help determine whether riders are willing to use new methods of powering their livelihood. As Hammer says, “If they don’t use it, it doesn’t matter.”

Building a more reliable supply chain

In 2021, when a massive container ship became wedged in the Suez Canal, you could almost hear the collective sigh of frustration around the globe. It was a here-we-go-again moment in a year full of supply chain hiccups. Every minute the ship remained stuck represented about $6.7 million in paralyzed global trade.

The 12 months leading up to the debacle had seen countless manufacturing, production, and shipping snags, thanks to the covid-19 pandemic. The upheaval illuminated the critical role of supply chains in consumers’ everyday lives—nothing, from baby formula to fresh produce to ergonomic office chairs, seemed safe.

For companies producing just about any physical product, the many “black swan” events (catastrophic incidents that are nearly impossible to predict) of the last four years illustrate the importance of supply chain resilience—businesses’ ability to anticipate, respond, and bounce back. Yet many organizations still don’t have robust measures in place for future setbacks.

In a poll of 250 business leaders conducted by MIT Technology Review Insights in partnership with Infosys Cobalt, just 12% say their supply chains are in a “fully modern, integrated” state. Almost half of respondents’ firms (47%) regularly experience some supply chain disruptions—nearly one in five (19%) say they feel “constant pressure,” and 28% experience “occasional disruptions.” A mere 6% say disruptions aren’t an issue. But there’s hope on the horizon. In 2024, rapidly advancing technologies are making transparent, collaborative, and data-driven supply chains more realistic.

“Emerging technologies can play a vital role in creating more sustainable and circular supply chains,” says Dinesh Rao, executive vice president and co-head of delivery at digital services and consulting company Infosys. “Recent strides in artificial intelligence and machine learning, blockchain, and other systems will help build the ability to deliver future-ready, resilient supply chains.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

There is a new most expensive drug in the world. Price tag: $4.25 million

20 March 2024 at 14:23

There is a new most expensive drug ever—a gene therapy that costs as much as a Brooklyn brownstone or a Miami mansion, and more than the average person will earn in a lifetime.

Lenmeldy is a gene treatment for metachromatic leukodystrophy (MLD) and was approved in the U.S. on Monday. Its maker, Orchard Therapeutics, said today the $4.25 million wholesale cost reflects the value the treatment has for patients and families.

No doubt, MLD is awful. The nerve disorder strikes toddlers, quickly robbing them of their ability to speak and walk. Around half die; the others live on in a vegetative state, creating crushing burdens for their families.

But it’s also incredibly rare, affecting only around 40 kids a year in the U.S. The extreme rarity of such diseases is what’s behind the soaring price tags of new gene therapies. Just consider the economics: Orchard employs 160 people, many more than the number of kids it will be able to treat over several years.

A child in isolation after gene therapy for metachromatic leukodystrophy
AMY PRICE

It means that even at this price, selling the newest DNA treatment could be a shaky business. “Gene therapies have struggled commercially—and I wouldn’t expect Lenmeldy to buck that trend,” says Maxx Chatsko, founder of Solt DB, which gathers data about biotech products.

Call it the curse of being the world’s most expensive drug.

The MLD therapy was approved three years ago in Europe, where its price is somewhat lower, but Chatsko notes that Orchard generated only $12.7 million from product sales during most of last year. That means you could count the number of kids who have gotten it on your hands.

There’s no doubt the treatment is a lifesaver. The gene therapy adds a missing gene to the bone marrow cells of children, reversing the condition’s root cause in the brain. Many of the kids who got it, in trials that began in 2010, have been growing up to be beautifully average.

“My heart wants to talk about what an effect this therapy has had in these children,” says Orchard’s chief medical officer, Leslie Meltzer. “Without it, they will die very young or live for many years in a vegetative state.” But kids who get the gene therapy mostly end up being able to walk and do well cognitively. “The ones we treat are going to school, they’re playing sports, and are able to tell their stories,” Meltzer says.

Independent groups also think the drug could be cost-effective. One of them, the Institute for Clinical and Economic Review, which assesses the value of drugs, said last September that the MLD gene therapy was worth it at a cost between $2.3 million and $3.9 million, according to its models.

But there’s no denying that super-high prices can signal that a treatment isn’t economically sustainable. 

One prior title holder for most expensive drug, the gene therapy Glybera, was purchased only once before being retired from the market. It didn’t work well enough to justify the $1 million price tag, which made it the price champion at the time.

Then there’s the treatment that had been reigning as the costliest until today, when Lenmeldy took over: a $3.5 million hemophilia treatment called Hemgenix, which is also a gene therapy. Such treatments were meant to generate billions in sales, yet, according to news reports, they aren’t getting nearly the uptake you’d expect.

Orchard itself gave up on another DNA fix, Strimvelis, which was an out-and-out cure for a type of immune deficiency. It owned the gene therapy and even got it approved in Europe. The issue was both too few patients and the existence of an alternative treatment. Not even a money-back guarantee could save Strimvelis, which Orchard discontinued in 2022.

Orchard was subsequently bought by Japanese drug company Kyowa Kirin, of which it’s now a subsidiary. 

So it can seem as if gene therapies are hitting home runs in trials but losing the ballgame. In the case of Lenmeldy, the critical issue will be early testing for the disease. That’s because once children display symptoms, it can be too late. For now, many patients are being discovered only because an older sibling has already succumbed to the inherited condition.

In 2016, MIT Technology Review recounted the dramatic effects of the MLD gene therapy, but also the heartbreak for parents as one child would die in order to save another.   

Orchard says it hopes to solve this problem by getting MLD added to the list of diseases automatically tested for at birth, something that could secure its market and save many more children. A decision on testing, advocates say, could be reached following a May meeting of the U.S. government committee on newborn screening.

Among those cheering for the treatment is Amy Price, a rare disease advocate who runs her own consultancy, Rarralel, in Denver. Price had three children with MLD—one who died, but two who were saved by the MLD gene therapy, which they received starting in 2011, when it was in testing.

Price says her two treated kids, now in their tweens and teens, “are totally ordinary, absolutely average.” And that is worth the price, she says. “The economic burden of an untreated child….exceeds any gene therapy prices so far,” she says. “That reality is hard to understand when people want to react to the price alone.”

Why New York City is testing battery swapping for e-bikes

21 March 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Spend enough time in a city and you’ll get to know its unique soundscape. In New York City, it features the echoes of car stereos, the deep grumbles of garbage truck engines, and, increasingly, the high-pitched whirring of electric bikes.

E-bikes and scooters are becoming a staple across the city’s boroughs, and e-bikes in particular are popular among the tens of thousands of delivery workers who zip through the streets.

On a recent cloudy afternoon in Manhattan, I joined a few dozen of them at a sign-up event for a new city program that aims to connect delivery drivers with new charging technologies. Drivers who enroll in the pilot will have access to either fast chargers or battery swapping stations for six months.

It’s part of the city’s efforts to cut down on the risk of battery fires, some of which have been sparked by e-bike batteries charging inside apartment buildings, according to the fire department. For more on the program and how it might help address fires, check out my latest story. In the meantime, here’s what I heard from delivery drivers and the startups at the kickoff event.

On a windy late-February day, I wove my way through the lines of delivery workers who showed up to the event in Manhattan’s Cooper Square. Some of them straddled their bikes in line, while others propped up their bikes in clusters. Colorful bags sporting the logos of various delivery services sprouted from their cargo racks.

City officials worked at tables under tents, assigning riders to one of the three startups that are partnering with the city for the new program. One company, Swiftmile, is building fast-charging bike racks for drivers. The other two, Popwheels and Swobbee, are aiming to bring battery swapping to the city.

Battery swapping is a growing technology in some parts of the world, but it’s not common in the US, so I was especially intrigued by the two companies who had set up battery swap cabinets.

Swobbee runs a small network of swapping stations around the world, including at its base in Germany. It is retrofitting bikes to accommodate its battery, which attaches to the rear of the bike. Popwheels is taking a slightly different approach, providing batteries that are already compatible with the majority of e-bikes delivery drivers use today, with little modification required.

I watched a Popwheels employee demonstrate the company’s battery swapping station to several newly enrolled drivers. Each one would approach the Popwheels cabinet, which is roughly the size and shape of a bookcase and has 16 numbered metal doors on the front. After they made a few taps on their smartphone, a door would swing open. Inside, there was space to slide in a used battery and a cord to plug into it. Once the battery was in the cabinet and the door had been shut, another door would open, revealing a fully charged e-bike battery the rider could unplug and slide out. Presto!

The whole process took just a minute or two—much quicker than waiting for a battery to charge. It’s similar to picking up a package from an automated locker in an upscale apartment building.

The crowd seemed to grow during the two hours I spent at the event, and the line stretched and squeezed closer to the edge of the sidewalk. I made a comment about the turnout to Baruch Herzfeld, Popwheels’ CEO and co-founder. “This is nothing,” he said. “There’s demand for 100,000 batteries in New York tomorrow.”

Indeed, New York City has roughly 60,000 delivery workers, many of whom rely on e-bikes to get around. And commuters and tourists might be interested in small, electrified vehicles too. Meeting anything close to that sort of demand will take a whole lot more battery cabinets, since each one can support only 40 to 50 riders, according to Popwheels’ estimates.

After they’d signed up and seen the battery swap demo, drivers who were ready to take batteries with them wheeled their bikes over to a few more startup employees, who helped make a slight tweak to a rail under their seats for the company’s batteries to slide into. Some adjustments required a bit of elbow grease, but I watched as one rider slid his new, freshly charged battery into place. He hopped on his bike and darted off into the bike lane, integrating into the flow of traffic.


Now read the rest of The Spark

Related reading

For more on the city’s plans for battery swapping and how they might cut fire risk, give my latest story a read.

Gogoro, one of our 15 Climate Tech Companies to Watch in 2023, operates a huge network of battery swapping stations for electric scooters, largely in Asia.

Some companies think battery swapping is an option for larger electric vehicles, too. Here’s how one startup wants to use modular, swappable batteries to get more EVs on the road.

The SCoPEx balloon diagram, crossed out with a crimson “X”
STEPHANIE ARNETT/MITTR | SCOPEX (BALLOON)

Another thing

Harvard researchers have given up on a long-running effort to conduct a solar geoengineering experiment. 

The idea behind the technique is a simple one: release particles in the upper atmosphere to scatter sunlight, counteracting global warming. But related research efforts have sparked controversy. Read more in my colleague James Temple’s latest story.

Keeping up with climate  

The Biden administration finalized strict new rules for vehicle tailpipe emissions. Under the regulations, EVs are expected to make up over half of new vehicle sales by 2030. (NPR)

The first utility-scale offshore wind farm in the US is officially up and running. It’s a bright spot that could signal a turning point for the industry. (Canary Media)

→ Here’s what’s next for offshore wind. (MIT Technology Review)

The UK has big plans for heat pumps, but installations aren’t moving nearly fast enough, according to a new report. Installations need to increase more than tenfold to keep pace with goals. (The Guardian)

States across the US are proposing legislation to ban lab-grown meat. It’s the latest escalation in an increasingly weird battle over a product that basically doesn’t exist yet. (Wired)

Low-cost EVs from Chinese automakers are pushing US-based companies to reconsider their electrification strategy. More affordable EV options? A girl can dream. (Bloomberg)

→ EV prices in the US are inching down, approaching parity with gas-powered vehicles. (Washington Post)

Goodbye greenwashing, hello “greenhushing”! Corporations are increasingly going radio silent on climate commitments. (Inside Climate News)

The Summer Olympics are fast approaching, and organizers in Paris are working to reduce the event’s climate impact. Think fewer new buildings, more bike lanes. (New York Times)

Early springs mean cherry blossoms are blooming earlier than ever. Warmer winters in the future could cause an even bigger problem. (Bloomberg)

The Download: the world’s most expensive drug, and New York City’s e-bike plan

21 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

There is a new most expensive drug in the world. Price tag: $4.25 million

The news: There is a new most expensive drug ever—a gene therapy that costs as much as a Brooklyn brownstone or a Miami mansion, and more than the average person will earn in a lifetime. Lenmeldy is a gene treatment for metachromatic leukodystrophy (MLD) and was approved in the US on Monday. Its maker, Orchard Therapeutics, says the $4.25 million wholesale cost reflects the value the treatment has for patients and families.

Why it matters: MLD is a nerve disorder that strikes toddlers, quickly robbing them of their ability to speak and walk. Around half die; the others live on in a vegetative state. But it’s incredibly rare, affecting only around 40 kids a year in the US. The extreme rarity of such diseases is what’s behind the soaring price tags of new gene therapies, and why selling the newest DNA treatment could be a shaky business. Read the full story.

—Antonio Regalado

New York City’s plan to stop e-bike battery fires

Walk just a few blocks in New York City and you’ll likely spot an electric bike zipping by. They have become increasingly popular in recent years, especially among delivery drivers. But the e-bike influx has caused a wave of fires sparked by their batteries, some of them deadly.

Now, the city wants to fight those fires with battery swapping. A pilot program will provide a small number of delivery drivers with alternative options to power up their e-bikes, including swapping stations that supply fully charged batteries on demand. 

Proponents say the program could lay the groundwork for a new mode of powering small electric vehicles in the city, one that’s convenient and could reduce the risk of fires. But the road to fire safety will likely be long and winding given the sheer number of batteries we’re integrating into our daily lives, in e-bikes and beyond. Read the full story.

—Casey Crownhart

To learn more about New York City’s battery swapping ambitions, check out the latest edition of The Spark, MIT Technology Review’s weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Reddit is set to go public today
Its vocal users have wreaked havoc on Wall Street in the past. Will they again? (Bloomberg $)
+ Redditors are understandably wary of what the future could hold. (WP $)
+ That future is increasingly looking like it’ll involve a lot of AI. (The Information $)
+ It’s one of the few online spaces that still fosters community. (NYT $) 

2 Neuralink shared a video of its first patient playing games with his brain implant
Noland Arbaugh, who is quadriplegic, called the surgical procedure “super easy.” (The Verge)
+ But he also acknowledged that the chip wasn’t perfect. (Insider $)
+ Elon Musk wants more bandwidth between people and machines. Do we need it? (MIT Technology Review)

3 Russian disinformation campaigns are rippling across Europe
Its deepfake videos are designed to erode public trust ahead of the European parliament elections in June. (FT $)
+ Eric Schmidt has a 6-point plan for fighting election misinformation. (MIT Technology Review)

4 The ‘room-temperature superconductor’ physicist engaged in research misconduct
At least four papers co-written by Ranga Dias have now been retracted by the journals that published them. (WSJ $)

5 The US government has awarded its biggest chip grant to date
Intel is the lucky recipient of $8.5 billion to build and expand its US facilities. (NYT $)
+ Intel’s planning to spend an eye-watering $100 billion in total. (Reuters)

6 A record number of people died trying to enter the US in 2022
Surveillance has a body count. (The Verge)
+ The new US border wall is an app. (MIT Technology Review)

7 Wherever you go, you’re being tracked across the web
But you might not realize just how extensive that tracking really is. (Wired $)

8 Poverty porn is YouTube’s latest fixation
It treats deprivation as depressing shock-content. (Vox)

9 China is betting on these spacecraft to collect moon samples
While one has already launched, another four are set to follow. (IEEE Spectrum)

10 Fitbit’s future is looking increasingly uncertain
Die-hard fans are losing patience with its owner Google’s recent changes. (Ars Technica)

Quote of the day

“I can’t wait to short the s*** outta this!”

—A Reddit user reacts to the news of the company’s IPO on the infamous r/wallstreetbets Subreddit, Vox reports.

The big story

Running Tide is facing scientist departures and growing concerns over seaweed sinking for carbon removal

June 2022

Running Tide, an aquaculture company based in Portland, Maine, hopes to set tens of thousands of tiny floating kelp farms adrift in the North Atlantic. The idea is that the fast-growing macroalgae will eventually sink to the ocean floor, storing away thousands of tons of carbon dioxide in the process.

The company has raised millions in venture funding and gained widespread media attention. But it struggled to grow kelp along rope lines in the open ocean during initial attempts last year and has lost a string of scientists in recent months, sources with knowledge of the matter tell MIT Technology Review. What happens next? Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Aww, this duck is absolutely adorable 🦆
+ Hiking in India seems a pretty worthwhile way to spend your time.
+ Kim Gordon and Chloe Sevigny: two people who know a thing or two about being cool.
+ These behind the scenes shots of A Streetcar Named Desire are very cool.

This startup wants to fight growing global dengue outbreaks with drones

21 March 2024 at 12:00

The world is grappling with dengue epidemics, with 100 million to 400 million cases worldwide every year—an eightfold increase over the past 20 years, according to the World Health Organization. Much of this is driven by the warming climate, which allows mosquitoes to thrive in more areas.

A startup in São Paulo, Brazil, one of the countries being hit hardest by dengue outbreaks, has a possible solution: drones that release sterile male mosquitoes.

Birdview has previously used drones in agriculture—releasing pest-fighting insects to make it easier to get to every corner of crop fields, which allows farmers to use fewer pesticides.

But in 2021, engineer and founder Ricardo Machado had an idea. He heard about scientists working to prevent diseases like dengue, yellow fever, chikungunya, and zika, all transmitted by the Aedes aegypti mosquito, which lays its eggs on the surface of stagnant water. 

The scientists were releasing sterile males of the species into communities with a high incidence of the diseases to mate with females already in the region. It’s a measure intended to curb the females’ reproductive potential, leading to fewer mosquitoes being born and ultimately cutting the number of cases of mosquito-borne diseases.

At the time, researchers walked or drove through affected neighborhoods, carrying canisters of sterile males and releasing them into areas that they knew were breeding grounds for mosquitoes. But there was one hurdle they couldn’t overcome: getting into the nooks and crannies of the neighborhoods where stagnant water often collects—the pool in an abandoned backyard, the tires out back at the local mechanic’s shop, the plant pots left at cemeteries.

Machado thought his drones—which could carry 17,000 mosquitoes per 10-minute flight over a 25-acre area—could be the answer to that problem.
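A quick back-of-envelope calculation puts those per-flight figures in perspective; it ignores reloading and transit time between flights.

```python
# Rough arithmetic on the per-flight figures above (illustrative only).
mosquitoes_per_flight = 17_000
acres_per_flight = 25
flight_minutes = 10

per_acre = mosquitoes_per_flight / acres_per_flight   # 680 mosquitoes per acre
flights_per_hour = 60 / flight_minutes                # 6 flights per hour
per_hour = mosquitoes_per_flight * flights_per_hour   # 102,000 mosquitoes per hour
print(per_acre, per_hour)
```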

“The challenge is getting into those hidden places,” says Machado. “It’s rare that Aedes aegypti breeding areas are found out in the open, like on a sidewalk, because when people see them, they destroy them. But with drones, we can get into areas we just can’t otherwise.”

Birdview has carried out studies with several partners since 2021, including the United Nations, the University of São Paulo (USP), and the state-owned Brazilian Agricultural Research Corporation (Embrapa), to better understand the effectiveness of releasing the disease-fighting mosquitoes with drones. First they looked at how the mechanism of the drone and outside conditions, like wind turbulence, affected the survival rate of the mosquitoes and their ability to fly.

The results were positive, so they moved on to flight-and-release tests in the Brazilian states of Pernambuco and Paraná, as well as Florida, where they’ve been working with the Lee County Mosquito Control District to see how far the mosquitoes spread upon release. They used the “mark, release and recapture” method, which involves sterile male mosquitoes being marked with a certain color before being released and later recaptured with traps so the team could see how far they had flown. They also set traps where eggs could be laid and monitored. 

“From what we’ve seen so far, our method seems to be working well,” says Machado. 

This isn’t the only attempt to use drones for the dispersal of disease-fighting mosquitoes—one team ran similar studies in Brazil’s northeast after the region saw an outbreak of zika in 2015 and 2016 that led to 3,308 babies being born with birth defects, and another is carrying out EU-funded tests in France and Spain.

Birdview is now negotiating with different biofactories, or insectaries, that sterilize male Aedes aegypti and with others that create what are called Wolbachia mosquitoes—Aedes aegypti injected with the Wolbachia bacteria can no longer transmit viruses like dengue—in hopes of creating partnerships so it can bring its technology to other countries.

“The mosquito is the deadliest animal in the world,” says Machado. “We want to work with as many insectaries as possible. This doesn’t have to be used just to fight the Aedes aegypti mosquito and the diseases it spreads. It can be used to fight malaria too.”

But for some experts, scaling up Birdview’s model and getting that technology to other countries—especially those that are low and middle-income—could become an obstacle.

“It’s a method that sounds promising, but we still need to better understand the costs involved,” says Neelika Malavige, head of Dengue Global Program and Scientific Affairs at the Drugs for Neglected Diseases Initiative (DNDi). “We need to know how affordable it will be to use this technology and how it can be relocated to other countries.”

Machado says the UN has previously given financial support to low-income countries for similar projects and hopes that it and other organizations will continue to do the same with this one.

He also notes the importance of decentralizing the work done with the drones by training at least one pilot per community using the mosquito-releasing technology.

“We don’t want anybody to have to rely on Birdview or any other company to do this work,” says Machado. “We want to be able to hand them the tools they need so they can be the ones to protect their own communities.”

Roundtables: How China Got Ahead on EVs

21 March 2024 at 13:57

Recorded on March 21, 2024

How China Got Ahead on EVs

Speakers: Zeyi Yang, China reporter; Amanda Silverman, features and investigations editor; and Abby Ivory-Ganja, senior engagement editor

In the race to produce and sell more electric vehicles, China has emerged as the unexpected winner. If you visit Shanghai or Shenzhen today, it feels like half of the cars running on the streets are electric. The burgeoning domestic demand also transformed Chinese auto companies into aggressive challengers in the global auto market. What did China’s government and companies do to achieve this progress? How will that impact auto companies and consumers in the West?

Related Coverage

How scientists traced a mysterious covid case back to six toilets

22 March 2024 at 06:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I have a mystery for you. It’s the story of how a team of researchers traced a covid variant in Wisconsin from a wastewater plant to six toilets at a single company. But it’s also a story about privacy concerns that arise when you use sewers to track rare viruses back to their source. 

That virus likely came from a single employee who happened to be shedding an enormous quantity of a very weird variant. The researchers would desperately like to find that person. But what if that person doesn’t want to be found?

A few years ago, Marc Johnson, a virologist at the University of Missouri, became obsessed with weird covid variants he was seeing in wastewater samples. The ones that caught his eye were odd in a couple of different ways: they didn’t match any of the common variants, and they didn’t circulate. They would pop up in a single location, persist for some length of time, and then often disappear—a blip. Johnson found his first blip in Missouri. “It drove me nuts,” he says. “I was like, ‘What the hell was going on here?’” 

Then he teamed up with colleagues in New York, and they found a few more.

Hoping to pin down even more lineages, Johnson put a call out on Twitter (now X) for wastewater. In January 2022, he got another hit in a wastewater sample shipped from a Wisconsin treatment plant. He and David O’Connor, a virologist at the University of Wisconsin, started working with state health officials to track the signal—from the treatment plant to a pumping station and then to the outskirts of the city, “one manhole at a time,” Johnson says. “Every time there was a branch in the road, we would check which branch [the signal] was coming from.”
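The tracing logic itself is straightforward: at each junction, test the upstream branches and follow the one that still carries the signal. Here is a minimal sketch of that idea; the sewer network layout and the signal test are hypothetical stand-ins for real sampling and sequencing.

```python
# Illustrative sketch of the branch-by-branch search described above.
def trace_source(start, upstream_of, signal_detected):
    """upstream_of maps a manhole to its upstream branches; signal_detected(m)
    stands in for sequencing a wastewater sample collected at manhole m."""
    node = start
    while True:
        positive = [m for m in upstream_of.get(node, []) if signal_detected(m)]
        if not positive:
            return node            # last manhole where the signal appears
        node = positive[0]         # follow the positive branch upstream

# Toy example: plant -> pumping station -> two branches, signal only on branch B.
network = {"plant": ["station"], "station": ["branch_A", "branch_B"],
           "branch_B": ["company_manhole"]}
print(trace_source("plant", network,
                   lambda m: m in {"station", "branch_B", "company_manhole"}))
```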

They chased some questionable leads. The researchers were suspicious the virus might be coming from an animal. At one point O’Connor took people from his lab to a dog park to ask dog owners for poop samples. “There were so many red herrings,” Johnson says.

Finally, after sampling about 50 manholes, the researchers found the manhole—the last one on the branch—that had the variant. They got lucky. “The only source was this company,” Johnson says. Their results came out in March in Lancet Microbe.

Wastewater surveillance might seem like a relatively new phenomenon, born of the pandemic, but it goes back decades. A team of Canadian researchers outlines several historical examples in this story. In one example, a public health official traced a 1946 typhoid outbreak to the wife of a man who sold ice cream at the beach. Even then, the researcher expressed some hesitation. The study didn’t name the wife or the town, and he cautioned that infections probably shouldn’t be traced back to an individual “except in the presence of an outbreak.”

In a similar study published in 1959, scientists traced another typhoid epidemic to one woman, who was then banned from food service and eventually talked into having her gallbladder removed to eliminate the infection. Such publicity can have a “devastating effect on the carrier,” they remarked in their write-up of the case. “From being a quiet and respected citizen, she becomes a social pariah.”

When Johnson and O’Connor traced the virus to that last manhole, things got sticky. Until that point, the researchers had suspected these cryptic lineages were coming from animals. Johnson had even developed a theory involving organic fertilizer from a source further upstream. Now they were down to a single building housing a company with about 30 employees. They didn’t want to stigmatize anyone or invade their privacy. But someone at the company was shedding an awful lot of virus. “Is it ethical to not tell them at that point?” Johnson wondered.

O’Connor and Johnson had been working with state health officials from the very beginning. They decided the best path forward would be to approach the company, explain the situation, and ask if they could offer voluntary testing. The decision wasn’t easy. “We didn’t want to cause panic and say there’s a dangerous new variant lurking in our community,” Ryan Westergaard, the state epidemiologist for communicable diseases at the Wisconsin Department of Health Services, told Nature. But they also wanted to try to help the person who was infected. 

The company agreed to testing, and 19 of its 30 employees turned up for nasal swabs. They were all negative.

That may mean one of the people who didn’t test was carrying the infection. Or could it mean that the massive covid infection in the gut didn’t show up on a nasal swab? “This is where I would use the shrug emoji if we were doing this over email,” O’Connor says.

At the time, the researchers had the ability to test stool samples for the virus, but they didn’t have approval. Now they do, and they’re hoping stool will lead them to an individual infected with one of these strange viruses who can help answer some of their questions. Johnson has identified about 50 of these cryptic covid variants in wastewater. “The more I study these lineages, the more I am convinced that they are replicating in the GI tract,” Johnson says. “It wouldn’t surprise me at all if that’s the only place they were replicating.” 

But how far should they go to find these people? That’s still an open question. O’Connor can imagine a dizzying array of problems that might arise if they did identify an individual shedding one of these rare variants. The most plausible hypothesis is that the lineages arise in individuals who have immune disorders that make it difficult for them to eliminate the infection. That raises a whole host of other thorny questions: what if that person had a compromised immune system due to HIV in addition to the strange covid variant? What if that person didn’t know they were HIV positive, or didn’t want to divulge their HIV status? What if the researchers told them about the infection, but the person couldn’t access treatment? “If you imagine what the worst-case scenarios are, they’re pretty bad,” O’Connor says.

On the other hand, O’Connor says, they think there are a lot of these people around the country and the world. “Isn’t there also an ethical obligation to try to learn what we can so that we can try to help people who are harboring these viruses?” he asks.


Now read the rest of The Checkup

More from MIT Technology Review

Longevity specialists aim to help people live longer and healthier lives. But they have yet to establish themselves as a credible medical field. Expensive longevity clinics that cater to the wealthy worried well aren’t helping. Jessica Hamzelou takes us inside the quest to legitimize longevity medicine.

Drug developers are betting big on AI to help speed drug development. But when will we see our first generative drug? Antonio Regalado has the story.

Read more from MIT Technology Review’s archive

The covid pandemic brought the tension between privacy and public health into sharp relief, wrote Karen Hao in 2020.

That same year Genevieve Bell argued that we can reimagine contact tracing in a way that protects privacy.

In 2021, Antonio Regalado covered some of the first efforts to track the spread of covid variants using wastewater.  

Earlier this year I wrote about using wastewater to track measles. 

From around the web

Surgeons have transplanted a kidney from a genetically engineered pig into a 62-year-old man in Boston. (New York Times)
→ Surgeons transplanted a similar kidney into a brain-dead patient in 2021. (MIT Technology Review)
→ Researchers are also looking into how to transplant other organs. Just a few months ago, surgeons connected a genetically engineered pig liver to another brain-dead patient. (MIT Technology Review)

The FDA has approved a new gene therapy for a rare but fatal genetic disorder in children. Its $4.25 million price tag will make it the world’s most expensive medicine, but it promises to give children with the disease a shot at a normal life. (CNN)
→ Read Antonio Regalado’s take on the curse of the costliest drug. (MIT Technology Review)

People who practice intermittent fasting have an increased risk of dying of heart disease, according to new research presented at the American Heart Association meeting in Chicago. There are, of course, caveats. (Washington Post and Stat)

Some parents aren’t waiting to give their young kids the new miracle drug to treat cystic fibrosis. They’re starting the treatment in utero. (The Atlantic)

The Download: tracing a mysterious covid strain, and fighting dengue with drones

22 March 2024 at 09:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How scientists traced a mysterious covid case back to six toilets

This week I have a mystery for you. It’s the story of how a team of researchers traced a covid variant in Wisconsin from a wastewater plant to six toilets at a single company. But it’s also a story about privacy concerns that arise when you use sewers to track rare viruses back to their source.

That virus likely came from a single employee who happened to be shedding an enormous quantity of a very weird variant. The researchers would desperately like to find that person.

But what if that person doesn’t want to be found? And is there an ethical obligation to try to learn what we can so that we can try to help people who are harboring these viruses? Read the full story.

—Cassandra Willyard

This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

This startup wants to fight growing global dengue outbreaks with drones

The world is grappling with dengue epidemics, with 100 million to 400 million cases worldwide every year—an eightfold increase over the past 20 years, according to the World Health Organization.

A startup in São Paulo, Brazil, one of the countries being hit the hardest by dengue outbreaks, has a possible solution: drones that release sterile male mosquitoes. 

Scientists have previously released sterile mosquitoes in a bid to cut the number of insects being born—and ultimately the number of cases of mosquito-borne diseases. However, they faced the hurdle of getting into the nooks and crannies of the neighborhoods where stagnant water often collects, and mosquitoes lay their eggs. Drones could help them overcome it. Read the full story.

—Jill Langlois

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US is suing Apple
Regulators have accused it of abusing its iPhone dominance. (FT $)
+ They’ve been looking into the company’s antitrust record since at least 2019. (The Guardian)
+ The tech giant is having a rough 2024 so far. (NYT $)
+ The lawsuit could force Apple to cooperate better with other smartphone makers. (Vox)

2 The United Nations promises to promote safe, trustworthy AI
But who enforces that, and how, remains to be seen. (WP $)
+ The AI Act is done. Here’s what will (and won’t) change. (MIT Technology Review)

3 What it’s like to receive a brain-computer implant
Jeffrey Keefer, who has Parkinson’s, agreed to having the device temporarily applied to the surface of his brain. (WSJ $)
+ Former Neuralink workers think the firm is taking unnecessary risks. (Vox)
+ How it feels to have a life-changing brain implant removed. (MIT Technology Review)

4 Tragic news stories drove readers to donate thousands of dollars
The only problem is, the victims didn’t exist. (NBC News)
+ Surveillance company Flock Safety claims to have solved 10% of reported US crime. Did it really? (404 Media)

5 We’re still waiting for AI we’re willing to pay for
We enjoy mucking around with generative AI—but we don’t want to fork out to use it. (Bloomberg $)

6 A British-Italian company claims to have discovered a better way to mine bitcoin
But crypto experts smell a rat. (FT $)
+ Ethereum moved to proof of stake. Why can’t Bitcoin? (MIT Technology Review)

7 Brands dependent on TikTok are getting anxious
There isn’t really another app or platform that would generate the same kind of sales. (NYT $)
+ US lawmakers are being targeted by angry TikTok devotees. (WP $)
+ Nvidia is selling its own version of the viral Stanley cup. (Insider $)

8 Care robots haven’t lived up to their hype
In some cases, they can hinder instead of help. (The Atlantic $)
+ Inside Japan’s long experiment in automating elder care. (MIT Technology Review)

9 Theories of reality are seriously confusing
But some of them are much more consequential than others. (New Scientist $)
+ What is death? (MIT Technology Review)

10 How Pixar’s software changed movie making forever
Starting with the stone cold classic Toy Story. (IEEE Spectrum)

Quote of the day

“Buy your mom an iPhone.”

—Apple CEO Tim Cook’s response to a customer complaining they were unable to send their mother certain videos because she used an Android smartphone, the Verge reports.

The big story

How AI is helping historians better understand our past

April 2023

Historians have started using machine learning to examine historical documents, including astronomical tables like those produced in Venice and other early modern cities.

Proponents claim that the application of modern computer science to the past helps draw connections across a broader swath of the historical record than would otherwise be possible, correcting distortions that come from analyzing history one document at a time.

But it introduces distortions of its own, including the risk that machine learning will slip bias or outright falsifications into the historical record. Read the full story.

—Moira Donovan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The way sea sponges pump water is really quite amazing.
+ Over in Australia, they’re (almost) mistaking new bug species for bird poo.
+ I never thought I’d be transfixed by a bed making competition, but here we are.
+ Meet the people dedicated to watching films at double-speed.

Apple researchers explore dropping “Siri” phrase & listening with AI instead

Researchers from Apple are probing whether it’s possible to use artificial intelligence to detect when a user is speaking to a device like an iPhone, thereby eliminating the technical need for a trigger phrase like “Siri,” according to a paper published on Friday.

In a study, which was uploaded to Arxiv and has not been peer-reviewed, researchers trained a large language model using both speech captured by smartphones and acoustic data from background noise to look for patterns that could indicate when a user wants help from the device. The model was built in part with a version of OpenAI’s GPT-2, “since it is relatively lightweight and can potentially run on devices such as smartphones,” the researchers wrote. The paper describes over 129 hours of data and additional text data used to train the model but did not specify the source of the recordings that went into the training set. Six of the seven authors list their affiliation as Apple, and three of them work on the company’s Siri team, according to their LinkedIn profiles. (The seventh author did work related to the paper during an Apple internship.)
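To give a sense of what such a system might look like, here is a minimal sketch of a GPT-2-based classifier that combines text with a placeholder acoustic feature vector to predict whether speech is directed at the device. This is not the paper’s code: the architecture, feature dimension, and pooling choices are illustrative assumptions.

```python
# Illustrative sketch (not Apple's code): a GPT-2 backbone plus a small head
# that classifies an utterance as device-directed or background speech.
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

class DeviceDirectedClassifier(nn.Module):
    def __init__(self, acoustic_dim=40):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")   # lightweight LLM
        hidden = self.backbone.config.hidden_size
        # Project acoustic features (e.g. averaged filterbank energies, a
        # stand-in here) into the same space as the text representation.
        self.acoustic_proj = nn.Linear(acoustic_dim, hidden)
        self.head = nn.Linear(hidden * 2, 2)  # device-directed vs. background

    def forward(self, input_ids, attention_mask, acoustic_feats):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        text_vec = out.last_hidden_state.mean(dim=1)         # pool over tokens
        audio_vec = self.acoustic_proj(acoustic_feats)
        return self.head(torch.cat([text_vec, audio_vec], dim=-1))

tok = GPT2Tokenizer.from_pretrained("gpt2")
enc = tok("turn on the lights", return_tensors="pt")
feats = torch.zeros(1, 40)   # placeholder for real acoustic features
logits = DeviceDirectedClassifier()(enc["input_ids"], enc["attention_mask"], feats)
```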

The results were promising, according to the paper: the model was able to make more accurate predictions than audio-only or text-only models, and it improved further as the models grew larger. But beyond exploring the research question, it’s unclear whether Apple plans to eliminate the “Hey Siri” trigger phrase.

Neither Apple nor the paper’s researchers immediately returned requests for comment.

Currently, Siri functions by holding small amounts of audio and does not begin recording or preparing to answer user prompts until it hears the trigger phrase. Eliminating that “Hey Siri” prompt could increase concerns about our devices “always listening,” said Jen King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence.

The way Apple handles audio data has previously come under scrutiny by privacy advocates. In 2019, reporting from The Guardian revealed that Apple’s quality control contractors regularly heard private audio collected from iPhones while they worked with Siri data, including sensitive conversations between doctors and patients. Two years later, Apple responded with policy changes, including storing more data on devices and allowing users to opt out of having their recordings used to improve Siri. A class action suit brought against the company in California in 2021 alleged that Siri turns on even when it hasn’t been activated.

The “Hey Siri” prompt can serve an important purpose for users, according to King. The phrase provides a way to know when the device is listening, and getting rid of it might mean more convenience but less transparency from the device, King told MIT Technology Review. The research did not detail whether the trigger phrase would be replaced by any other signal that the AI assistant is engaged.

“I’m skeptical that a company should mandate that form of interaction,” King says.

The paper is one of a number of recent signals that Apple, which is perceived to be lagging behind other tech giants like Amazon, Google, and Facebook in the artificial intelligence race, is planning to incorporate more AI into its products. According to news first reported by VentureBeat, Apple is building a generative AI model called MM1 that can work with text and images, which would be the company’s answer to OpenAI’s ChatGPT and a host of other chatbots from leading tech giants. Meanwhile, Bloomberg reported that Apple is in talks with Google about using the company’s AI model Gemini in iPhones, and on Friday the Wall Street Journal reported that it had engaged in talks with Baidu about using that company’s AI products.

The tech industry can’t agree on what open-source AI means. That’s a problem.

By: Edd Gent
25 March 2024 at 06:01

Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models.

Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. 

But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

On the face of it, open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it?

The answers could have significant ramifications for the future of the technology. Until the tech industry has settled on a definition, powerful companies can easily bend the concept to suit their own needs, and it could become a tool to entrench the dominance of today’s leading players.

Entering this fray is the Open Source Initiative (OSI), the self-appointed arbiters of what it means to be open source. Founded in 1998, the nonprofit is the custodian of the Open Source Definition, a widely accepted set of rules that determine whether a piece of software can be considered open source. 

Now, the organization has assembled a 70-strong group of researchers, lawyers, policymakers, activists, and representatives from big tech companies like Meta, Google, and Amazon to come up with a working definition of open-source AI. 

The open-source community is a big tent, though, encompassing everything from hacktivists to Fortune 500 companies. While there’s broad agreement on the overarching principles, says Stefano Maffulli, OSI’s executive director, it’s becoming increasingly obvious that the devil is in the details. With so many competing interests to consider, finding a solution that satisfies everyone while ensuring that the biggest companies play along is no easy task.

Fuzzy criteria

The lack of a settled definition has done little to prevent tech companies from adopting the term.

Last July, Meta made its Llama 2 model, which it referred to as open source, freely available, and it has a track record of publicly releasing AI technologies. “We support the OSI’s effort to define open-source AI and look forward to continuing to participate in their process for the benefit of the open source community across the world,” Jonathan Torres, Meta’s associate general counsel for AI, open source, and licensing, told us.

That stands in marked contrast to rival OpenAI, which has shared progressively fewer details about its leading models over the years, citing safety concerns. “We only open-source powerful AI models once we have carefully weighed the benefits and risks, including misuse and acceleration,” a spokesperson said. 

Other leading AI companies, like Stability AI and Aleph Alpha, have also released models described as open source, and Hugging Face hosts a large library of freely available AI models.

While Google has taken a more locked-down approach with its most powerful models, like Gemini and PaLM 2, the Gemma models released last month are freely accessible and designed to go toe-to-toe with Llama 2, though the company described them as “open” rather than “open source.”  

But there’s considerable disagreement about whether any of these models can really be described as open source. For a start, both Llama 2 and Gemma come with licenses that restrict what users can do with the models. That’s anathema to open-source principles: one of the key clauses of the Open Source Definition outlaws the imposition of any restrictions based on use cases.

The criteria are fuzzy even for models that don’t come with these kinds of conditions. The concept of open source was devised to ensure developers could use, study, modify, and share software without restrictions. But AI works in fundamentally different ways, and key concepts don’t translate from software to AI neatly, says Maffulli.

One of the biggest hurdles is the sheer number of ingredients that go into today’s AI models. All you need to tinker with a piece of software is the underlying source code, says Maffulli. But depending on your goal, dabbling with an AI model could require access to the trained model, its training data, the code used to preprocess this data, the code governing the training process, the underlying architecture of the model, or a host of other, more subtle details.

Which ingredients you need to meaningfully study and modify models remains open to interpretation. “We have identified what basic freedoms or basic rights we want to be able to exercise,” says Maffulli. “The mechanics of how to exercise those rights are not clear.”

Settling this debate will be essential if the AI community wants to reap the same benefits software developers gained from open source, says Maffulli, which was built on broad consensus about what the term meant. “Having [a definition] that is respected and adopted by a large chunk of the industry provides clarity,” he says. “And with clarity comes lower costs for compliance, less friction, shared understanding.”

By far the biggest sticking point is data. All the major AI companies have simply released pretrained models, without the data sets on which they were trained. For people pushing for a stricter definition of open-source AI, Maffulli says, this seriously constrains efforts to modify and study models, automatically disqualifying them as open source.

Others have argued that a simple description of the data is often enough to probe a model, says Maffulli, and you don’t necessarily need to retrain from scratch to make modifications. Pretrained models are routinely adapted through a process known as fine-tuning, in which they are partially retrained on a smaller, often application-specific, dataset.
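For readers unfamiliar with the mechanics, here is a minimal fine-tuning sketch using the Hugging Face libraries; the base model (“gpt2”) and dataset (“imdb”) are arbitrary stand-ins, not anything released by the companies discussed here.

```python
# Minimal illustrative fine-tuning sketch: adapt a pretrained causal language
# model on a small, application-specific dataset without retraining from scratch.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token            # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

data = load_dataset("imdb", split="train[:1%]")   # small example dataset

def tokenize(batch):
    out = tok(batch["text"], truncation=True, max_length=128, padding="max_length")
    out["labels"] = out["input_ids"].copy()       # causal LM objective: predict next token
    return out

data = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
)
trainer.train()   # partially retrains the pretrained weights on the new data
```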

Meta’s Llama 2 is a case in point, says Roman Shaposhnik, CEO of open-source AI company Ainekko and vice president of legal affairs for the Apache Software Foundation, who is involved in the OSI process. While Meta only released a pretrained model, a flourishing community of developers has been downloading and adapting it, and sharing their modifications.

“People are using it in all sorts of projects. There’s a whole ecosystem around it,” he says. “We therefore must call it something. Is it half-open? Is it ajar?”

While it may be technically possible to modify a model without its original training data, restricting access to a key ingredient is not really in the spirit of open source, says Zuzanna Warso, director of research at nonprofit Open Future, who is taking part in the OSI’s discussions. It’s also debatable whether it’s possible to truly exercise the freedom to study a model without knowing what information it was trained on.

“It’s a crucial component of this whole process,” she says. “If we care about openness, we should also care about the openness of the data.”

Have your cake and eat it

It’s important to understand why companies setting themselves up as open-source champions are reluctant to hand over training data. Access to high-quality training data is a major bottleneck for AI research and a competitive advantage for bigger firms that they’re eager to maintain, says Warso.

At the same time, open source carries a host of benefits that these companies would like to see translated to AI. At a superficial level, the term “open source” carries positive connotations for a lot of people, so engaging in so-called “open washing” can be an easy PR win, says Warso.

It can also have a significant impact on their bottom line. Economists at Harvard Business School recently found that open-source software has saved companies almost $9 trillion in development costs by allowing them to build their products on top of high-quality free software rather than writing it themselves.

For larger companies, open-sourcing their software so that it can be reused and modified by other developers can help build a powerful ecosystem around their products, says Warso. The classic example is Google’s open-sourcing of its Android mobile operating system, which cemented its dominant position at the heart of the smartphone revolution. Meta’s Mark Zuckerberg has been explicit about this motivation in earnings calls, saying “open-source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”

Crucially, it also appears that open-source AI may receive favorable regulatory treatment in some places, Warso says, pointing to the EU’s newly passed AI Act, which exempts certain open-source projects from some of its more stringent requirements.

Taken together, these factors make it clear why sharing pretrained models but restricting access to the data required to build them makes good business sense, says Warso. But it does smack of companies trying to have their cake and eat it too, she adds. And if the strategy helps entrench the already dominant positions of large tech companies, it’s hard to see how that fits with the underlying ethos of open source.

“We see openness as one of the tools to challenge the concentration of power,” says Warso. “If the definition is supposed to help in challenging these concentrations of power, then the question of data becomes even more important.”

Shaposhnik thinks a compromise is possible. A significant amount of data used to train the largest models already comes from open repositories like Wikipedia or Common Crawl, which scrapes data from the web and shares it freely. Companies could simply share the open resources used to train their models, he says, making it possible to recreate a reasonable approximation that should allow people to study and understand models.
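As a loose illustration of that idea, the sketch below pulls two such openly hosted corpora straight from public repositories, assuming Hugging Face’s datasets library. The dataset IDs are examples of freely available resources, not a claim about what any particular company actually trained on.

```python
# A minimal sketch of loading openly available corpora of the kind Shaposhnik
# describes, using Hugging Face's datasets library. These IDs are public
# examples, not the actual training mix of any commercial model.
from datasets import load_dataset

# A Wikipedia snapshot and a cleaned Common Crawl derivative (C4), streamed
# so nothing needs to be downloaded in full up front.
wikipedia = load_dataset("wikimedia/wikipedia", "20231101.en",
                         split="train", streaming=True)
common_crawl = load_dataset("allenai/c4", "en",
                            split="train", streaming=True)

for example in wikipedia.take(1):
    print(example["text"][:200])  # peek at the first article
```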

The lack of clarity regarding whether training on art or writing scraped from the internet infringes on creators’ property rights can cause legal complications, though, says Aviya Skowron, head of policy and ethics at the nonprofit AI research group EleutherAI, who is also involved in the OSI process. That makes developers wary of being open about their data.

Stefano Zacchiroli, a professor of computer science at the Polytechnic Institute of Paris who is also contributing to the OSI definition, appreciates the need for pragmatism. His personal view is that a full description of a model’s training data is the bare minimum for it to be described as open source, but he recognizes that stricter definitions of open-source AI might not have broad appeal.

Ultimately, the community needs to decide what it’s trying to achieve, says Zacchiroli: “Are you just following where the market is going so that they don’t essentially co-opt the term ‘open-source AI,’ or are you trying to pull the market toward being more open and providing more freedoms to the users?”

What’s the point of open source?

It’s debatable how much any definition of open-source AI will level the playing field anyway, says Sarah Myers West, co–executive director of the AI Now Institute. She coauthored a paper published in August 2023 exposing the lack of openness in many open-source AI projects. But it also highlighted that the vast amounts of data and computing power needed to train cutting-edge AI creates deeper structural barriers for smaller players, no matter how open models are.

Myers West thinks there’s also a lack of clarity regarding what people hope to achieve by making AI open source. “Is it safety, is it the ability to conduct academic research, is it trying to foster greater competition?” she asks. “We need to be way more precise about what the goal is, and then how opening up a system changes the pursuit of that goal.”

The OSI seems keen to avoid those conversations. The draft definition mentions autonomy and transparency as key benefits, but Maffulli demurred when pressed to explain why the OSI values those concepts. The document also contains a section labeled “out of scope issues” that makes clear the definition won’t wade into questions around “ethical, trustworthy, or responsible” AI.

Maffulli says historically the open-source community has focused on enabling the frictionless sharing of software and avoided getting bogged down in debates about what that software should be used for. “It’s not our job,” he says.

But those questions can’t be dismissed, says Warso, no matter how hard people have tried over the decades. The idea that technology is neutral and that topics like ethics are “out of scope” is a myth, she adds. She suspects it’s a myth that needs to be upheld to prevent the open-source community’s loose coalition from fracturing. “I think people realize it’s not real [the myth], but we need this to move forward,” says Warso.

Beyond the OSI, others have taken a different approach. In 2022, a group of researchers introduced Responsible AI Licenses (RAIL), which are similar to open-source licenses but include clauses that can restrict specific use cases. The goal, says Danish Contractor, an AI researcher who co-created the license, is to let developers prevent their work from being used for things they consider inappropriate or unethical.

“As a researcher, I would hate for my stuff to be used in ways that would be detrimental,” he says. And he’s not alone: a recent analysis he and colleagues conducted on AI startup Hugging Face’s popular model-hosting platform found that 28% of models use RAIL. 

The license Google attached to its Gemma models follows a similar approach. Its terms of use list various prohibited use cases considered “harmful,” which reflects its “commitment to developing AI responsibly,” the company said in a recent blog post.

The Allen Institute for AI has also developed its own take on open licensing. Its ImpACT Licenses restrict redistribution of models and data based on their potential risks.

Given how different AI is from conventional software, some level of experimentation with different degrees of openness is inevitable and probably good for the field, says Luis Villa, cofounder and legal lead at open-source software management company Tidelift. But he worries that a proliferation of “open-ish” licenses that are mutually incompatible could negate the frictionless collaboration that made open source so successful, slowing down innovation in AI, reducing transparency, and making it harder for smaller players to build on each other’s work.

Ultimately, Villa thinks the community needs to coalesce around a single standard; otherwise industry will simply ignore it and decide for itself what “open” means. He doesn’t envy the OSI’s job, though. When it came up with the open-source software definition, it had the luxury of time and little outside scrutiny. Today, AI is firmly in the crosshairs of both big business and regulators.

But if the open-source community can’t settle on a definition, and quickly, someone else will come up with one that suits their own needs. “They’re going to fill that vacuum,” says Villa. “Mark Zuckerberg is going to tell us all what he thinks ‘open’ means, and he has a very big megaphone.”
