Today — 17 June 2024 · MIT Technology Review

The Download: artificial surf pools, and unfunny AI

17 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The cost of building the perfect wave

For nearly as long as surfing has existed, surfers have been obsessed with the search for the perfect wave. 

While this hunt has taken surfers from tropical coastlines to icebergs, these days that search may take place closer to home. That is, at least, the vision presented by developers and boosters in the growing industry of surf pools, spurred by advances in wave-­generating technology that have finally created artificial waves surfers actually want to ride.

But there’s a problem: some of these pools are in drought-ridden areas, and face fierce local opposition. At the core of these fights is a question that’s also at the heart of the sport: What is the cost of finding, or now creating, the perfect wave—and who will have to bear it? Read the full story.

—Eileen Guo

This story is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

What happened when 20 comedians got AI to write their routines

AI is good at lots of things: spotting patterns in data, creating fantastical images, and condensing thousands of words into just a few paragraphs. But can it be a useful tool for writing comedy?

New research from Google DeepMind suggests that it can, but only to a very limited extent. It’s an intriguing finding that hints at the ways AI can—and cannot—assist with creative endeavors more generally. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta has paused plans to train AI on European user data
Data regulators rebuffed its claims it had “legitimate interests” in doing so. (Ars Technica)
+ Meta claims it sent more than two billion warning notifications. (TechCrunch)
+ How to opt out of Meta’s AI training. (MIT Technology Review)

2 AI assistants and chatbots can’t say who won the 2020 US election
And that’s a major problem as we get closer to the 2024 polls opening. (WP $)
+ Online conspiracy theorists are targeting political abuse researchers. (The Atlantic $)
+ Asking Meta AI how to disable it triggers some interesting conversations. (Insider $)
+ Meta says AI-generated election content is not happening at a “systemic level.” (MIT Technology Review)

3 A smartphone battery maker claims to have made a breakthrough
Japanese firm TDK says its new material could revolutionize its solid-state batteries. (FT $)
+ And it’s not just phones that could stand to benefit. (CNBC)
+ Meet the new batteries unlocking cheaper electric vehicles. (MIT Technology Review)

4 What should AI logos look like?
Simple, abstract, and non-threatening, if these are anything to go by. (TechCrunch)

5 Radiopharmaceuticals fight cancer with molecular precision
Their accuracy can lead to fewer side effects for patients. (Knowable Magazine)

6 UK rail passengers’ emotions were assessed by AI cameras 
Major stations tested surveillance cameras designed to predict travelers’ emotions. (Wired $)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)

7 The James Webb Space Telescope has spotted dozens of new supernovae
Dating back to the early universe. (New Scientist $)

8 Rice farming in Vietnam has had a hi-tech makeover
Drones and AI systems are making the laborious work a bit simpler. (Hakai Magazine)
+ How one vineyard is using AI to improve its winemaking. (MIT Technology Review)

9 Meet the researchers working to cool down city parks
Using water misters, cool tubes, and other novel techniques. (Bloomberg $)
+ Here’s how much heat your body can take. (MIT Technology Review)

10 The latest generative AI viral trend? Pregnant male celebrities.
The stupider and weirder the image, the better. (Insider $)

Quote of the day

“It’s really easy to get people addicted to things like social media or mobile games. Learning is really hard.”

—Liz Nagler, senior director of product management at language app Duolingo, tells the Wall Street Journal it’s far trickier to get people to go back to the app every day than you might think.

The big story

The big new idea for making self-driving cars that can go anywhere


May 2022

When Alex Kendall sat in a car on a small road in the British countryside and took his hands off the wheel back in 2016, it was a small step in a new direction—one that a new bunch of startups bet might be the breakthrough that makes driverless cars an everyday reality.

This was the first time that reinforcement learning—an AI technique that trains a neural network to perform a task via trial and error—had been used to teach a car to drive from scratch on a real road. It took less than 20 minutes for the car to learn to stay on the road by itself, Kendall claims.

These startups are betting that smarter, cheaper tech will let them overtake current market leaders. But is this yet more hype from an industry that’s been drinking its own Kool-Aid for years? Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Twin Peaks meets Sylvanian Families: what’s not to love?
+ You heard it here first: Brat is the album of the summer.
+ Chilis can be pretty painful to eat, but we love them anyway. 🌶
+ How people have been crafting artificial eyes for thousands of years.

The cost of building the perfect wave

17 June 2024 at 05:00

For nearly as long as surfing has existed, surfers have been obsessed with the search for the perfect wave. It’s not just a question of size, but also of shape, surface conditions, and duration—ideally in a beautiful natural environment. 

While this hunt has taken surfers from tropical coastlines reachable only by boat to swells breaking off icebergs, these days—as the sport goes mainstream—that search may take place closer to home. That is, at least, the vision presented by developers and boosters in the growing industry of surf pools, spurred by advances in wave-­generating technology that have finally created artificial waves surfers actually want to ride. 

Some surf evangelists think these pools will democratize the sport, making it accessible to more communities far from the coasts—while others are simply interested in cashing in. But a years-long fight over a planned surf pool in Thermal, California, shows that for many people who live in the places where they’re being built, the calculus isn’t about surf at all. 


Some 30 miles from Palm Springs, on the southeastern edge of the Coachella Valley desert, Thermal is the future home of the 118-acre private, members-only Thermal Beach Club (TBC). The developers promise over 300 luxury homes with a dazzling array of amenities; the planned centerpiece is a 20-plus-acre artificial lagoon with a 3.8-acre surf pool offering waves up to seven feet high. According to an early version of the website, club memberships will start at $175,000 a year. (TBC’s developers did not respond to multiple emails asking for comment.)

That price tag makes it clear that the club is not meant for locals. Thermal, an unincorporated desert community, currently has a median family income of $32,340. Most of its residents are Latino; many are farmworkers. The community lacks much of the basic infrastructure that serves the western Coachella Valley, including public water service—leaving residents dependent on aging private wells for drinking water. 

Just a few blocks away from the TBC site is the 60-acre Oasis Mobile Home Park. A dilapidated development designed for some 1,500 people in about 300 mobile homes, Oasis has been plagued for decades by a lack of clean drinking water. The park owners have been cited numerous times by the Environmental Protection Agency for providing tap water contaminated with high levels of arsenic, and last year, the US Department of Justice filed a lawsuit against them for violating the Safe Drinking Water Act. Some residents have received assistance to relocate, but many of those who remain rely on weekly state-funded deliveries of bottled water and on the local high school for showers. 

Stephanie Ambriz, a 28-year-old special-needs teacher who grew up near Thermal, recalls feeling “a lot of rage” back in early 2020 when she first heard about plans for the TBC development. Ambriz and other locals organized a campaign against the proposed club, which she says the community doesn’t want and won’t be able to access. What residents do want, she tells me, is drinkable water, affordable housing, and clean air—and to have their concerns heard and taken seriously by local officials. 

Despite the grassroots pushback, which twice led to delays to allow more time for community feedback, the Riverside County Board of Supervisors unanimously approved the plans for the club in October 2020. It was, Ambriz says, “a shock to see that the county is willing to approve these luxurious developments when they’ve ignored community members” for decades. (A Riverside County representative did not respond to specific questions about TBC.) 

The desert may seem like a counterintuitive place to build a water-intensive surf pool, but the Coachella Valley is actually “the very best place to possibly put one of these things,” argues Doug Sheres, the developer behind DSRT Surf, another private pool planned for the area. It is “close to the largest [and] wealthiest surf population in the world,” he says, featuring “360 days a year of surfable weather” and mountain and lake views in “a beautiful resort setting” served by “a very robust aquifer.” 

In addition to the two planned projects, the Palm Springs Surf Club (PSSC) has already opened locally. The trifecta is turning the Coachella Valley into “the North Shore of wave pools,” as one aficionado described it to Surfer magazine. 

The effect is an acute cognitive dissonance—one that I experienced after spending a few recent days crisscrossing the valley and trying out the waves at PSSC. But as odd as this setting may seem, an analysis by MIT Technology Review reveals that the Coachella Valley is not the exception. Of an estimated 162 surf pools that have been built or announced around the world, as tracked by the industry publication Wave Pool Magazine, 54 are in areas considered by the nonprofit World Resources Institute (WRI) to face high or extremely high water stress, meaning that they regularly use a large portion of their available surface water supply annually. Regions in the “extremely high” category consume 80% or more of their water, while those in the “high” category use 40% to 80% of their supply. (Not all of Wave Pool Magazine’s listed pools will be built, but the publication tracks all projects that have been announced. Some have closed and over 60 are currently operational.)

Zoom in on the US and nearly half are in places with high or extremely high water stress, roughly 16 in areas served by the severely drought-stricken Colorado River. The greater Palm Springs area falls under the highest category of water stress, according to Samantha Kuzma, a WRI researcher (though she notes that WRI’s data on surface water does not reflect all water sources, including an area’s access to aquifers, or its water management plan).
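The WRI stress bands described above follow a simple threshold rule. Here is a rough Python sketch of that categorization; the function name and the catch-all "lower" band are our own shorthand (WRI's actual index has additional gradations below "high"):

```python
def water_stress_category(use_ratio: float) -> str:
    """Map a region's annual surface-water use, as a fraction of its
    available supply, to the stress bands cited in the article."""
    if use_ratio >= 0.80:
        return "extremely high"   # 80% or more of supply consumed
    if use_ratio >= 0.40:
        return "high"             # 40% to 80% of supply consumed
    return "lower"                # below the "high" threshold

print(water_stress_category(0.85))  # extremely high
print(water_stress_category(0.50))  # high
```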

Now, as TBC’s surf pool and other planned facilities move forward and contribute to what’s becoming a multibillion-dollar industry with proposed sites on every continent except Antarctica, inland waves are increasingly becoming a flash point for surfers, developers, and local communities. There are at least 29 organized movements in opposition to surf clubs around the world, according to an ongoing survey from a coalition called No to the Surf Park in Canéjan, which includes 35 organizations opposing a park in Bordeaux, France.  

While the specifics vary widely, at the core of all these fights is a question that’s also at the heart of the sport: What is the cost of finding, or now creating, the perfect wave—and who will have to bear it? 


Though wave pools have been around since the late 1800s, the first artificial surfing wave was built in 1969, and also in the desert—at Big Surf in Tempe, Arizona. But at that pool and its early successors, surfing was secondary; people who went to those parks were more interested in splashing around, and surfers themselves weren’t too excited by what they had to offer. The manufactured waves were too small and too soft, without the power, shape, or feel of the real thing. 

The tide really turned in 2015, when Kelly Slater, widely considered to be the greatest professional surfer of all time, was filmed riding a six-foot-tall, 50-second barreling wave. As the viral video showed, he was not in the wild but atop a wave generated in a pool in California’s Central Valley, some 100 miles from the coast.

Waves of that height, shape, and duration are a rarity even in the ocean, but “Kelly’s wave,” as it became known, showed that “you can make waves in the pool that are as good as or better than what you get in the ocean,” recalls Sheres, the developer whose company, Beach Street Development, is building mul­tiple surf pools around the country, including DSRT Surf. “That got a lot of folks excited—myself included.” 

In the ocean, a complex combination of factors—including wind direction, tide, and the shape and features of the seafloor—is required to generate a surfable wave. Re-creating them in an artificial environment required years of modeling, precise calculations, and simulations. 

Surf Ranch, Slater’s project in the Central Valley, built a mechanical system in which a 300-ton hydrofoil—which resembles a gigantic metal fin—is pulled along the length of a pool 700 yards long and 70 yards wide by a mechanical device the size of several train cars running on a track. The bottom of the pool is precisely contoured to mimic reefs and other features of the ocean floor; as the water hits those features, its movement creates the 50-second-long barreling wave. Once the foil reaches one end of the pool, it runs backwards, creating another wave that breaks in the opposite direction. 

While the result is impressive, the system is slow, producing just one wave every three to four minutes. 

Around the same time Slater’s team was tinkering with his wave, other companies were developing their own technologies to produce multiple waves, and to do so more rapidly and efficiently—key factors in commercial viability. 

Fundamentally, all the systems create waves by displacing water, but depending on the technology deployed, there are differences in the necessary pool size, the project’s water and energy requirements, the level of customization that’s possible, and the feel of the wave. 

Thomas Lochtefeld is a pioneer in the field and the CEO of Surf Loch, the pneumatic-wave company that powers PSSC’s waves.

One demo pool in Australia uses what looks like a giant mechanical doughnut that sends out waves the way a pebble dropped in water sends out ripples. Another proposed plan uses a design that spins out waves from a circular fan—a system that is mobile and can be placed in existing bodies of water. 

Of the two most popular techniques in commercial use, one relies on modular paddles attached to a pier that runs across a pool, which move in precise ways to generate waves. The other is pneumatic technology, which uses compressed air to push water through chambers the size of bathroom stalls, called caissons; the caissons pull in water and then push it back out into the pool. By choosing which modular paddles or caissons move first against the different pool bottoms, and with how much force at a time, operators can create a range of wave patterns. 

Regardless of the technique used, the design and engineering of most modern wave pools are first planned out on a computer. Waves are precisely calculated, designed, simulated, and finally tested in the pool with real surfers before they are set as options on a “wave menu” in proprietary software that surf-pool technologists say offers a theoretically endless number and variety of waves. 

On a Tuesday afternoon in early April, I am the lucky tester at the Palm Springs Surf Club, which uses pneumatic technology, as the team tries out a shoulder-high right-breaking wave. 

I have the pool to myself as the club prepares to reopen; it had closed to rebuild its concrete “beach” just 10 days after its initial launch because the original beach had not been designed to withstand the force of the larger waves that Surf Loch, the club’s wave technology provider, had added to the menu at the last minute. (Weeks after reopening in April, the surf pool closed again as the result of “a third-party equipment supplier’s failure,” according to Thomas Lochtefeld, Surf Loch’s CEO.)

I paddle out and, at staffers’ instructions, take my position a few feet away from the third caisson from the right, which they say is the ideal spot to catch the wave on the shoulder—meaning the unbroken part of the swell closest to its peak. 

The entire experience is surreal: waves that feel like the ocean in an environment that is anything but. 

Palm Springs Surf Club: a wide-angle view of the wave pool.
An employee test rides a wave, which was first calculated, designed, and simulated on a computer.
SPENCER LOWELL

In some ways, these pneumatic waves are better than what I typically ride around Los Angeles—more powerful, more consistent, and (on this day, at least) uncrowded. But the edge of the pool and the control tower behind it are almost always in my line of sight. And behind me are the PSSC employees (young men, incredible surfers, who keep an eye on my safety and provide much-needed tips) and then, behind them, the snow-capped San Jacinto Mountains. At the far end of the pool, behind the recently rebuilt concrete beach, is a restaurant patio full of diners who I can’t help but imagine are judging my every move. Still, for the few glorious seconds that I ride each wave, I am in the same flow state I experience in the ocean itself.  

Then I fall and sheepishly paddle back to PSSC’s encouraging surfer-employees to restart the whole process. I would be having a lot of fun—if I could just forget my self-consciousness, and the jarring feeling that I shouldn’t be riding waves in the middle of the desert at all.  


Though long inhabited by Cahuilla Indians, the Coachella Valley was sparsely populated until 1876, when the Southern Pacific Railroad added a new line out to the middle of the arid expanse. Shortly after, the first non-native settlers came to the valley and realized that its artesian wells, which flow naturally without the need to be pumped, provided ideal conditions for farming.  

Agricultural production exploded, and by the early 1900s, these once freely producing wells were putting out significantly less, leading residents to look for alternative water sources. In 1918, they created the Coachella Valley Water District (CVWD) to import water from the Colorado River via a series of canals. This water was used to supply the region’s farms and recharge the Coachella Aquifer, the region’s main source of drinking water. 

""
The author tests a shoulder-high wave at PSSC, where she says the waves were in some ways better than what she rides around Los Angeles.
SPENCER LOWELL

The water imports continue to this day—though the seven states that draw on the river are currently renegotiating their water rights amid a decades-long megadrought in the region. 

The imported water, along with CVWD’s water management plan, has allowed Coachella’s aquifer to maintain relatively steady levels “going back to 1970, even though most development and population has occurred since,” Scott Burritt, a CVWD spokesperson, told MIT Technology Review in an email. 

This has sustained not only agriculture but also tourism in the valley, most notably its world-class—and water-intensive—golf courses. In 2020, the 120 golf courses under the jurisdiction of the CVWD consumed 105,000 acre-feet of water per year (AFY); that’s an average of 875 AFY, or 285 million gallons per year per course. 
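Those figures can be checked with the standard conversion of one acre-foot to roughly 325,851 US gallons; a quick sketch:

```python
# Check the CVWD golf-course figures cited above.
GALLONS_PER_ACRE_FOOT = 325_851  # standard US conversion

total_afy = 105_000   # water consumed by golf courses under CVWD in 2020
courses = 120

per_course_afy = total_afy / courses
per_course_gal = per_course_afy * GALLONS_PER_ACRE_FOOT

print(per_course_afy)                # 875.0 AFY per course
print(round(per_course_gal / 1e6))   # ~285 million gallons per year per course
```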

Surf pools’ proponents frequently point to the far larger amount of water golf courses consume to argue that opposing the pools on grounds of their water use is misguided. 

PSSC, the first of the area’s three planned surf clubs to open, requires an estimated 3 million gallons per year to fill its pool; the proposed DSRT Surf holds 7 million gallons and estimates that it will use 24 million gallons per year, which includes maintenance and filtration, and accounts for evaporation. TBC’s planned 20-acre recreational lake, 3.8 acres of which will contain the surf pool, will use 51 million gallons per year, according to Riverside County documents. Unlike standard swimming pools, none of these pools need to be drained and refilled annually for maintenance, saving on potential water use. DSRT Surf also boasts about plans to offset its water use by replacing 1 million square feet of grass from an adjacent golf course with drought-tolerant plants. 

a PSSC employee at a control panel overlooking the pool
Pro surfer and PSSC’s full-time “wave curator” Cheyne Magnusson watches test waves from the club’s control tower.
SPENCER LOWELL

With surf parks, “you can see the water,” says Jess Ponting, a cofounder of Surf Park Central, the main industry association, and Stoke, a nonprofit that aims to certify surf and ski resorts—and, now, surf pools—for sustainability. “Even though it’s a fraction of what a golf course is using, it’s right there in your face, so it looks bad.”

But even if it were just an issue of appearance, public perception is important when residents are being urged to reduce their water use, says Mehdi Nemati, an associate professor of environmental economics and policy at the University of California, Riverside. It’s hard to demand such efforts from people who see these pools and luxury developments being built around them, he says. “The questions come: Why do we conserve when there are golf courses or surfing … in the desert?” 

(Burritt, the CVWD representative, notes that the water district “encourages all customers, not just residents, to use water responsibly” and adds that CVWD’s strategic plans project that there should be enough water to serve both the district’s golf courses and its surf pools.)  

Locals opposing these projects, meanwhile, argue that developers are grossly underestimating their water use, and various engineering firms and some county officials have in fact offered projections that differ from the developers’ estimates. Opponents are specifically concerned about the effects of spray, evaporation, and other factors, which increase with higher temperatures, bigger waves, and larger pool sizes. 

As a rough point of reference, Slater’s 14-acre wave pool in Lemoore, California, can lose up to 250,000 gallons of water per day to evaporation, according to Adam Fincham, the engineer who designed the technology. That’s roughly half an Olympic swimming pool.

More fundamentally, critics take issue with even debating whether surf clubs or golf courses are worse. “We push back against all of it,” says Ambriz, who organized opposition to TBC and argues that neither the pool nor an exclusive new golf course in Thermal benefits the local community. Comparing them, she says, obscures greater priorities, like the water needs of households. 

Five surfers sit on their boards in a calm PSSC pool
The PSSC pool requires an estimated 3 million gallons of water per year. On top of a $40 admission fee, a private session there would cost between $3,500 and $5,000 per hour.
SPENCER LOWELL

The “primary beneficiary” of the area’s water, says Mark Johnson, who served as CVWD’s director of engineering from 2004 to 2016, “should be human consumption.”

Studies have shown that just one AFY, or nearly 326,000 gallons, is generally enough to support all household water needs of three California families every year. In Thermal, the gap between the demands of the surf pool and the needs of the community is even more stark: each year for the past three years, nearly 36,000 gallons of water have been delivered, in packages of 16-ounce plastic water bottles, to residents of the Oasis Mobile Home Park—some 108,000 gallons in all. Compare that with the 51 million gallons that will be used annually by TBC’s lake: it would be enough to provide drinking water to its neighbors at Oasis for the next 472 years.
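The comparison in that paragraph works out as follows, using the article’s own figures (all quantities in US gallons):

```python
GALLONS_PER_ACRE_FOOT = 325_851  # one AFY is "nearly 326,000 gallons"

oasis_per_year_gal = 36_000                        # annual bottled-water deliveries
oasis_three_year_total = oasis_per_year_gal * 3    # "some 108,000 gallons in all"

tbc_annual_gal = 51_000_000                        # TBC's lake, per year

# The 472-year figure is the lake's annual volume divided by the
# three-year delivery total.
print(oasis_three_year_total)                          # 108000
print(round(tbc_annual_gal / oasis_three_year_total))  # 472
```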

Furthermore, as Nemati notes, “not all water is the same.” CVWD has provided incentives for golf courses to move toward recycled water and replace grass with less water-­intensive landscaping. But while recycled water and even rainwater have been proposed as options for some surf pools elsewhere in the world, including France and Australia, this is unrealistic in Coachella, which receives just three to four inches of rain per year. 

Instead, the Coachella Valley surf pools will depend on a mix of imported water and nonpotable well water from Coachella’s aquifer. 

But any use of the aquifer worries Johnson. Further drawing down the water, especially in an underground aquifer, “can actually create water quality problems,” he says, by concentrating “naturally occurring minerals … like chromium and arsenic.” In other words, TBC could worsen the existing problem of arsenic contamination in local well water. 

When I describe to Ponting MIT Technology Review’s analysis showing how many surf pools are being built in desert regions, he seems to concede it’s an issue. “If 50% of the surf parks in development are in water-stressed areas,” he says, “then the developers are not thinking about the right things.” 


Before visiting the future site of Thermal Beach Club, I stopped in La Quinta, a wealthy town where, back in 2022, community opposition successfully stopped plans for a fourth pool planned for the Coachella Valley. This one was developed by the Kelly Slater Wave Company, which was acquired by the World Surf League in 2016. 

Alena Callimanis, a longtime resident who was a member of the community group that helped defeat the project, says that for a year and a half, she and other volunteers often spent close to eight hours a day researching everything they could about surf pools—and how to fight them. “We knew nothing when we started,” she recalls. But the group learned quickly, poring over planning documents, consulting hydrologists, putting together presentations, providing comments at city council hearings, and even conducting their own citizen science experiments to test the developers’ assertions about the light and noise pollution the project could create. (After the council rejected the proposal for the surf club, the developers pivoted to previously approved plans for a golf course. Callimanis’s group also opposes the golf course, raising similar concerns about water use, but since plans have already been approved, she says, there is little they can do to fight back.) 

view across an intersection of a mobile home framed by palm trees
Just a few blocks from the site of the planned Thermal Beach Club is the Oasis Mobile Home Park, which has been plagued for decades by a lack of clean drinking water.
""
A water pump sits at the corner of farm fields in Thermal, California, where irrigation water is imported from the Colorado River.

It was a different story in Thermal, where three young activists juggled jobs and graduate programs as they tried to mobilize an under-resourced community. “Folks in Thermal lack housing, lack transportation, and they don’t have the ability to take a day off from work to drive up and provide public comment,” says Ambriz. 

But the local pushback did lead to certain promises, including a community benefit payment of $2,300 per luxury housing unit, totaling $749,800. In the meeting approving the project, Riverside County supervisor Manuel Perez called this “unprecedented” and credited the efforts of Ambriz and her peers. (Ambriz remains unconvinced. “None of that has happened,” she says, and payments to the community don’t solve the underlying water issues that the project could exacerbate.) 

That affluent La Quinta managed to keep a surf pool out of its community where working-class Thermal failed is even more jarring in light of industry rhetoric about how surf pools could democratize the sport. For Bryan Dickerson, the editor in chief of Wave Pool Magazine, the collective vision for the future is that instead of “the local YMCA … putting in a skate park, they put in a wave pool.” Other proponents, like Ponting, describe how wave pools can provide surf therapy or opportunities for underrepresented groups. A design firm in New York City, for example, has proposed to the city a plan for an indoor wave pool in a low-income, primarily black and Latino neighborhood in Queens—for $30 million. 

For its part, PSSC cost an estimated $80 million to build. On top of a $40 general admission fee, a private session like the one I had would cost $3,500 to $5,000 per hour, while a public session would be at least $100 to $200, depending on the surfer’s skill level and the types of waves requested. 

In my two days traversing the 45-mile Coachella Valley, I kept thinking about how this whole area was an artificial oasis made possible only by innovations that changed the very nature of the desert, from the railroad stop that spurred development to the irrigation canals and, later, the recharge basins that stopped the wells from running out. 

In this transformed environment, I can see how the cognitive dissonance of surfing a desert wave begins to shrink, tempting us to believe that technology can once again override the reality of living (or simply playing) in the desert in a warming and drying world. 

But the tension over surf pools shows that when it comes to how we use water, maybe there’s no collective “us” here at all. 

What happened when 20 comedians got AI to write their routines

17 June 2024 at 04:00

AI is good at lots of things: spotting patterns in data, creating fantastical images, and condensing thousands of words into just a few paragraphs. But can it be a useful tool for writing comedy?  

New research suggests that it can, but only to a very limited extent. It’s an intriguing finding that hints at the ways AI can—and cannot—assist with creative endeavors more generally. 

Google DeepMind researchers led by Piotr Mirowski, who is himself an improv comedian in his spare time, studied the experiences of professional comedians who have used AI in their work. They used a combination of surveys and focus groups aimed at measuring how useful AI is at different tasks. 

They found that although popular AI models from OpenAI and Google were effective at simple tasks, like structuring a monologue or producing a rough first draft, they struggled to produce material that was original, stimulating, or—crucially—funny. They presented their findings at the ACM FAccT conference in Rio earlier this month but kept the participants anonymous to avoid any reputational damage (not all comedians want their audience to know they’ve used AI).

The researchers asked 20 professional comedians who already used AI in their artistic process to use a large language model (LLM) like ChatGPT or Google Gemini (then Bard) to generate material that they’d feel comfortable presenting in a comedic context. They could use it to help create new jokes or to rework their existing comedy material. 

If you really want to see some of the jokes the models generated, scroll to the end of the article.

The results were a mixed bag. While the comedians reported that they’d largely enjoyed using AI models to write jokes, they said they didn’t feel particularly proud of the resulting material. 

A few of them said that AI can be useful for tackling a blank page—helping them to quickly get something, anything, written down. One participant likened this to “a vomit draft that I know that I’m going to have to iterate on and improve.” Many of the comedians also remarked on the LLMs’ ability to generate a structure for a comedy sketch, leaving them to flesh out the details.

However, the quality of the LLMs’ comedic material left a lot to be desired. The comedians described the models’ jokes as bland, generic, and boring. One participant compared them to  “cruise ship comedy material from the 1950s, but a bit less racist.” Others felt that the amount of effort just wasn’t worth the reward. “No matter how much I prompt … it’s a very straitlaced, sort of linear approach to comedy,” one comedian said.

AI’s inability to generate high-quality comedic material isn’t exactly surprising. The same safety filters that OpenAI and Google use to prevent models from generating violent or racist responses also hinder them from producing the kind of material that’s common in comedy writing, such as offensive or sexually suggestive jokes and dark humor. Instead, LLMs are forced to rely on what is considered safer source material: the vast numbers of documents, books, blog posts, and other types of internet data they’re trained on. 

“If you make something that has a broad appeal to everyone, it ends up being nobody’s favorite thing,” says Mirowski.

The experiment also exposed the LLMs’ bias. Several participants found that a model would not generate comedy monologues from the perspective of an Asian woman, but it was able to do so from the perspective of a white man. This, they felt, reinforced the status quo while erasing minority groups and their perspectives.

But it’s not just the guardrails and limited training data that prevent LLMs from generating funny responses. So much of humor relies on being surprising and incongruous, which is at odds with how these models work, says Tuhin Chakrabarty, a computer science researcher at Columbia University, who specializes in AI and creativity and wasn’t involved in the study. Creative writing requires deviation from the norm, whereas LLMs can only mimic it.

“Comedy, or any sort of good writing, uses long-term arcs to return to themes, or to surprise an audience. Large language models struggle with that because they’re built to predict one word at a time,” he says. “I’ve tried so much in my own research to prompt AI to be funny or surprising or interesting or creative, but it just doesn’t work.”

Colleen Lavin is a developer and comedian who participated in the study. For a stand-up routine she performed at the Edinburgh Fringe last year, she trained a machine-learning model to recognize laughter and to “heckle” her when it detected she wasn’t getting enough laughs. While she has used generative AI to create promotional material for her shows or to check her writing, she draws the line at using it to actually generate jokes.

“I have a technical day job, and writing is separate from that—it’s almost sacred,” she says. “Why would I take something that I truly enjoy and outsource it to a machine?”

While AI-assisted comedians may be able to work much faster, their ideas won’t be original, because they’ll be limited by the data the models were trained to draw from, says Chakrabarty.

“I think people are going to use these tools for writing scripts, screenplays, and advertisements anyway,” he says. “But true creative and comedic writing is based on experience and vibes. Not an algorithm.”

The AI-generated jokes

For the prompt: “Can you write me ten jokes about pickpocketing”, one LLM response was: “I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation!”

For the prompt: “Please write jokes about the irony of a projector failing in a live comedy show about AI”, one of the better LLM responses was: “Our projector must’ve misunderstood the concept of ‘AI.’ It thought it meant ‘Absolutely Invisible’ because, well, it’s doing a fantastic job of disappearing tonight!”
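For readers curious what prompting an LLM for material like the jokes above looks like in practice, here is a minimal sketch against a chat-completion API. The request shape matches OpenAI's public chat completions endpoint, but the model name, system prompt, and temperature are illustrative assumptions, not the study's actual setup.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_payload(topic_prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Wrap a comedian's request in the chat-completion request format."""
    return {
        "model": model,
        "temperature": 1.0,  # higher temperature for more varied material
        "messages": [
            {
                "role": "system",
                "content": "You are a comedy-writing assistant. "
                           "Write short, punchy stand-up jokes.",
            },
            {"role": "user", "content": topic_prompt},
        ],
    }


def generate_jokes(topic_prompt: str) -> str:
    """POST the prompt to the API (requires an OPENAI_API_KEY env var)."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(topic_prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]


# Usage (needs network access and an API key):
#   print(generate_jokes("Can you write me ten jokes about pickpocketing?"))
```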


The Download: milk beyond cows, and geoengineering’s funding boom

14 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Biotech companies are trying to make milk without cows

The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug. 

The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.

But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. Here’s how they’re doing it.

—Antonio Regalado

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

This London non-profit is now one of the biggest backers of geoengineering research

A London-based nonprofit is poised to become one of the world’s largest financial backers of solar geoengineering research. It’s just one of a growing number of foundations eager to support scientists exploring whether the world could ease climate change by reflecting away more sunlight.

The uptick in funding will offer scientists in the controversial field far more support than they’ve enjoyed in the past. This will allow them to pursue a wider array of lab work, modeling, and potentially even outdoor experiments that could improve our understanding of the benefits and risks of such interventions. Read the full story.

—James Temple

How to opt out of Meta’s AI training

If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models beginning June 26, according to its recently updated privacy policy. 

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place. 

In the meantime, if you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models, consider opting out. Here’s how to do it.

—Melissa Heikkilä

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US Supreme Court has upheld access to the abortion pill
It’s the most significant ruling since it overturned Roe v Wade in 2022. (FT $)
+ The decision averts a major crisis for reproductive health. (Wired $)
+ But states like Kansas are likely to draw out legal arguments over access. (The Guardian)

2 Amazon is struggling to revamp Alexa
It has repeatedly missed deadlines and is scrambling to catch up with its rivals. (Fortune)
+ OpenAI has stolen a march on Amazon’s AI assistant ambitions. (MIT Technology Review)

3 Clearview AI has struck a deal to end a privacy class action
If your face was scraped as facial recognition data, you may be entitled to a stake in the company. (NYT $)
+ The startup doesn’t have the funds to settle the lawsuit. (Reuters)
+ It was fined millions of dollars for its practices back in 2022. (MIT Technology Review)

4 What’s next for nanotechnology
Molecular machines to kill bacteria aren’t new—but they are promising. (New Yorker $)

5 The Pope is a surprisingly influential voice in the AI safety debate
Pope Francis will address G7 leaders who have gathered today to discuss AI regulation. (WP $)
+ Smaller startups are lobbying to be acquired by bigger fish. (Bloomberg $)
+ What’s next for AI regulation in 2024? (MIT Technology Review)

6 Keeping data centers cool uses colossal amounts of power
Dunking servers in oil could be a far more environmentally friendly method. (IEEE Spectrum)

7 UK voters can back an AI-generated candidate in next month’s election
How very Black Mirror. (NBC News)

8 How to tell if your boss is spying on you
Checking your browser extensions is a good place to start. (WP $)

9 We don’t know much about how the human body reacts to space
But with the rise of space tourism, scientists are hoping to find out. (TechCrunch)
+ This startup wants to find out if humans can have babies in space. (MIT Technology Review)

10 This platform is a who’s-who of rising internet stars
Famous Birthdays is basically a directory of hugely successful teenagers you’ve never heard of. (Economist $)

Quote of the day

“If it’s somebody on the right, I reward them. If it’s somebody on the left, I punish them.”

—Christopher Blair, a self-confessed liberal troll and social justice warrior, explains the methods he uses to spread fake news on Facebook to the New York Times.

The big story

The quest to build wildfire-resistant homes

April 2023

With each devastating wildfire in the US West, officials consider new methods or regulations that might save homes or lives the next time.

In the parts of California where the hillsides meet human development, and where the state has suffered recurring seasonal fire tragedies, that search for new means of survival has especially high stakes.

Many of these methods are low cost and low tech, but no less truly innovative. In fact, the hardest part to tackle may not be materials engineering, but social change. Read the full story.

—Susie Cagle

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Why AI-generated album covers can’t hold a candle to human-made art.
+ This chicken caesar salad recipe looks pretty great.
+ Sign me up for a trip to Spain’s unspoiled Ribeira Sacra region!
+ How to nap like a pro 😴

These board games want you to beat climate change

14 June 2024 at 05:00

It’s game night, and I’m crossing my fingers, hoping for a hurricane. 

I roll the die and it clatters across the board, tumbling to a stop to reveal a tiny icon of a tree stump. Bad news: I just triggered deforestation in the Amazon. That seals it. I failed to stop climate change—at least this board-game representation of it.

The urgent need to address climate change might seem like unlikely fodder for a fun evening. But a growing number of games are attempting to take on the topic, including a version of the bestseller Catan released this summer.

As a climate reporter, I was curious about whether games could, even abstractly, represent the challenge of the climate crisis. Perhaps more crucially, could they possibly be any fun? 

My investigation started with Daybreak, a board game released in late 2023 by a team that includes the creator of Pandemic (infectious disease—another famously light topic for a game). Daybreak is a cooperative game where players work together to cut emissions and survive disasters. The group either wins or loses as a whole.

When I opened the box, it was immediately clear that this wouldn’t be for the faint of heart. There are hundreds of tiny cardboard and wooden pieces, three different card decks, and a surprisingly thick rule book. Setting it up, learning the rules, and playing for the first time took over two hours.

Daybreak, a cooperative board game about stopping climate change.
COURTESY OF CMYK

Daybreak is full of details, and I was struck by how many of them it gets right. Not only are there cards representing everything from walkable cities to methane removal, but each features a QR code players can use to learn more.

In each turn, players deploy technologies or enact policies to cut climate pollution. Just as in real life, emissions have negative effects. Winning requires slashing emissions to net zero (the point where whatever’s emitted can be soaked up by forests, oceans, or direct air capture). But there are multiple ways for the whole group to lose, including letting the global average temperature increase by 2 °C or simply running out of turns.

 In an embarrassing turn of events for someone who spends most of her waking hours thinking about climate change, nearly every round of Daybreak I played ended in failure. Adding insult to injury, I’m not entirely sure that I was having fun. Sure, the abstract puzzle was engaging and challenging, and after a loss, I’d be checking the clock, seeing if there was time to play again. But once all the pieces were back in the box, I went to bed obsessing about heat waves and fossil-fuel disinformation. The game was perhaps representing climate change a little bit too well.

I wondered if a new edition of a classic would fare better. Catan, formerly Settlers of Catan, and its related games have sold over 45 million copies worldwide since the original’s release in 1995. The game’s object is to build roads and settlements, setting up a civilization. 

In late 2023, Catan Studios announced that it would be releasing a version of its game called New Energies, focused on climate change. The new edition, out this summer, preserves the same central premise as the original. But this time, players will also construct power plants, generating energy with either fossil fuels or renewables. Fossil fuels are cheaper and allow for quicker expansion, but they lead to pollution, which can harm players’ societies and even end the game early.

Before I got my hands on the game, I spoke with one of its creators, Benjamin Teuber, who developed the game with his late father, Klaus Teuber, the mastermind behind the original Catan.

To Teuber, climate change is a more natural fit for a game than one might expect. “We believe that a good game is always around a dilemma,” he told me. The key is to simplify the problem sufficiently, a challenge that took the team dozens of iterations while developing New Energies. But he also thinks there’s a need to be at least somewhat encouraging. “While we have a severe topic, or maybe even especially because we have a severe topic, you can’t scare off the people by making them just have a shitty evening,” Teuber says.

In New Energies, the first to gain 10 points wins, regardless of how polluting that player’s individual energy supply is. But if players collectively build too many fossil-fuel plants and pollution gets too high, the game ends early, in which case whoever has done the most work to clean up their own energy supply is named the winner.
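As a sketch, the dual ending described above might look like this in code. The data shapes, the pollution limit, and all names are my own illustration for clarity, not details from the actual game.

```python
def find_winner(players, pollution, pollution_limit=10, points_to_win=10):
    """players: list of dicts with 'name', 'points', and 'renewable_plants'.

    Normally the first player to reach the points target wins; if collective
    pollution ends the game early, the cleanest energy supply wins instead.
    """
    if pollution >= pollution_limit:
        # Early ending: whoever did the most to clean up their supply wins.
        return max(players, key=lambda p: p["renewable_plants"])["name"]
    leaders = [p for p in players if p["points"] >= points_to_win]
    if leaders:
        return max(leaders, key=lambda p: p["points"])["name"]
    return None  # game continues
```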

That’s what happened the first time I tested out the game. While I had been lagging in points, I ended up taking the win, because I had built more renewable power plants than my competitors.

This relatively rosy ending had me conflicted. On one hand, I was delighted, even if it felt like a consolation prize. 

But I found myself fretting over the messages that New Energies will send to players. A simple game that crowns a winner may be more playable, but it doesn’t represent how complicated the climate crisis is, or how urgently we need to address it. 

I’m glad climate change has a spot on my game shelf, and I hope these and other games find their audiences and get people thinking about the issues. But I’ll understand the impulse to reach for other options when game night rolls around, because I can’t help but dwell on the fact that in the real world, we won’t get to reset the pieces and try again.

Biotech companies are trying to make milk without cows

14 June 2024 at 05:00

The outbreak of avian influenza on US dairy farms has started to make milk seem a lot less wholesome. Milk that’s raw, or unpasteurized, can actually infect mice that drink it, and a few dairy workers have already caught the bug. 

The FDA says that commercial milk is safe because it is pasteurized, killing the germs. Even so, it’s enough to make a person ponder a life beyond milk—say, taking your coffee black or maybe drinking oat milk.

But for those of us who can’t do without the real thing, it turns out some genetic engineers are working on ways to keep the milk and get rid of the cows instead. They’re doing it by engineering yeasts and plants with bovine genes so they make the key proteins responsible for milk’s color, satisfying taste, and nutritional punch.

The proteins they’re copying are casein, a floppy polymer that’s the most abundant protein in milk and is what makes pizza cheese stretch, and whey, a nutritious combo of essential amino acids that’s often used in energy powders.

It’s part of a larger trend of replacing animals with ingredients grown in labs, steel vessels, or plant crops. Think of the Impossible burger, the veggie patty made mouthwatering with the addition of heme, a component of blood that’s produced in the roots of genetically modified soybeans.

One of the milk innovators is Remilk, an Israeli startup founded in 2019, which has engineered yeast so it will produce beta-lactoglobulin (the main component of whey). Company cofounder Ori Cohavi says a single biotech factory of bubbling yeast vats feeding on sugar could in theory “replace 50,000 to 100,000 cows.” 

Remilk has been making trial batches and is testing ways to formulate the protein with plant oils and sugar to make spreadable cheese, ice cream, and milk drinks. So yes, we’re talking “processed” food—one partner is a local Coca-Cola bottler, and advising the company are former executives of Nestlé, Danone, and PepsiCo.

But regular milk isn’t exactly so natural either. At milking time, animals stand inside elaborate robots, and it looks for all the world as if they’re being abducted by aliens. “The notion of a cow standing in some nice green scenery is very far from how we get our milk,” says Cohavi. And there are environmental effects: cattle burp methane, a potent greenhouse gas, and a lactating cow needs to drink around 40 gallons of water a day.

“There are hundreds of millions of dairy cows on the planet producing greenhouse waste, using a lot of water and land,” says Cohavi. “It can’t be the best way to produce food.”  

For biotech ventures trying to displace milk, the big challenge will be keeping their own costs of production low enough to compete with cows. Dairies get government protections and subsidies, and they don’t only make milk. Dairy cows are eventually turned into gelatin, McDonald’s burgers, and the leather seats of your Range Rover. Not much goes to waste.

At Alpine Bio, a biotech company in San Francisco (also known as Nobell Foods), researchers have engineered soybeans to produce casein. While not yet cleared for sale, the beans are already being grown on USDA-sanctioned test plots in the Midwest, says Alpine’s CEO, Magi Richani.

Richani chose soybeans because they’re already a major commodity and the cheapest source of protein around. “We are working with farmers who are already growing soybeans for animal feed,” she says. “And we are saying, ‘Hey, you can grow this to feed humans.’ If you want to compete with a commodity system, you have to have a commodity crop.”

Alpine intends to crush the beans, extract the protein, and—much like Remilk—sell the ingredient to larger food companies.

Everyone agrees that cow’s milk will be difficult to displace. It holds a special place in the human psyche, and we owe civilization itself, in part, to domesticated animals. In fact, they’ve  left their mark in our genes, with many of us carrying DNA mutations that make cow’s milk easier to digest.  

But that’s why it might be time for the next technological step, says Richani. “We raise 60 billion animals for food every year, and that is insane. We took it too far, and we need options,” she says. “We need options that are better for the environment, that overcome the use of antibiotics, and that overcome the disease risk.”

It’s not clear yet whether the bird flu outbreak on dairy farms is a big danger to humans. But making milk without cows would definitely cut the risk that an animal virus will cause a new pandemic. As Richani says: “Soybeans don’t transmit diseases to humans.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Hungry for more from the frontiers of fromage? In the Build issue of our print magazine, Andrew Rosenblum tasted a yummy brie made only from plants. Harder to swallow was the claim by developer Climax Foods that its cheese was designed using artificial intelligence.

The idea of using yeast to create food ingredients, chemicals, and even fuel via fermentation is one of the dreams of synthetic biology. But it’s not easy. In 2021, we raised questions about high-flying startup Ginkgo Bioworks. This week its stock hit an all-time low of $0.49 per share as the company struggles to make … well, anything.

This spring, I traveled to Florida to watch attempts to create life in a totally new way: using a synthetic embryo made in a lab. The action involved cattle at the animal science department of the University of Florida, Gainesville.


From around the web

How many human bird flu cases are there? No one knows, because there’s barely any testing. Scientists warn we’re flying blind as US dairy farms struggle with an outbreak. (NBC)  

Moderna, one of the companies behind the covid-19 shots, is seeing early success with a cancer vaccine. It uses the same basic technology: gene messages packed into nanoparticles. (Nature)

It’s the covid-19 theory that won’t go away. This week the New York Times published an op-ed arguing that the virus was the result of a lab accident. We previously profiled the author, Alina Chan, who is a scientist with the Broad Institute. (NYT $)

Sales of potent weight loss drugs, like Ozempic, are booming. But it’s not just humans who are overweight. Now the pet care industry is dreaming of treating chubby cats and dogs, too. (Bloomberg)

This London non-profit is now one of the biggest backers of geoengineering research

14 June 2024 at 05:00

A London-based nonprofit is poised to become one of the world’s largest financial backers of solar geoengineering research. And it’s just one of a growing number of foundations eager to support scientists exploring whether the world could ease climate change by reflecting away more sunlight.

Quadrature Climate Foundation, established in 2019 and funded through the proceeds of the investment fund Quadrature Capital, plans to provide $40 million for work in this field over the next three years, Greg De Temmerman, the organization’s chief science officer, told MIT Technology Review.

That’s a big number for this subject—double what all foundations and wealthy individuals provided from 2008 through 2018 and roughly on par with what the US government has offered to date. 

“We think we can have a very strong impact in accelerating research, making sure it’s happening, and trying to unlock some public money at some point,” De Temmerman says.

Other nonprofits are set to provide tens of millions of dollars’ worth of additional grants to solar geoengineering research or related government advocacy work in the coming months and years. The uptick in funding will offer scientists in the controversial field far more support than they’ve enjoyed in the past and allow them to pursue a wider array of lab work, modeling, and potentially even outdoor experiments that could improve our understanding of the benefits and risks of such interventions. 

“It just feels like a new world, really different from last year,” says David Keith, a prominent geoengineering researcher and founding faculty director of the Climate Systems Engineering Initiative at the University of Chicago.

Other nonprofits that have recently disclosed funding for solar geoengineering research or government advocacy, or announced plans to provide it, include the Simons Foundation, the Environmental Defense Fund, and the Bernard and Anne Spitzer Charitable Trust. 

In addition, Meta’s former chief technology officer, Mike Schroepfer, told MIT Technology Review he is spinning out a new nonprofit, Outlier Projects. He says it will provide funding to solar geoengineering research as well as to work on ocean-based carbon removal and efforts to stabilize rapidly melting glaciers.

Outlier has already issued grants for the first category to the Environmental Defense Fund, Keith’s program at the University of Chicago, and two groups working to support research and engagement on the subject in the poorer, hotter parts of the world: the Degrees Initiative and the Alliance for Just Deliberation on Solar Geoengineering.

Researchers say that the rising dangers of climate change, the lack of progress on cutting emissions, and the relatively small amount of government research funding to date are fueling the growing support for the field.

“A lot of people are recognizing the obvious,” says Douglas MacMartin, a senior research associate in mechanical and aerospace engineering at Cornell, who focuses on geoengineering. “We’re not in a good position with regard to mitigation—and we haven’t spent enough money on research to be able to support good, wise decisions on solar geoengineering.”

Scientists are exploring a variety of potential methods of reflecting away more sunlight, including injecting certain particles into the stratosphere to mimic the cooling effect of volcanic eruptions, spraying salt toward marine clouds to make them brighter, or sprinkling fine dust-like material into the sky to break up heat-trapping cirrus clouds.

Critics contend that neither nonprofits nor scientists should support studying any of these methods, arguing that raising the possibility of such interventions eases pressure to cut emissions and creates a “slippery slope” toward deploying the technology. Even some who support more research fear that funding it through private sources, particularly from wealthy individuals who made their fortunes in tech and finance, may allow studies to move forward without appropriate oversight and taint public perceptions of the field.

The sense that we’re “putting the climate system in the care of people who have disrupted the media and information ecosystems, or disrupted finance, in the past” could undermine public trust in a scientific realm that many already find unsettling, says Holly Buck, an assistant professor at the University at Buffalo and author of After Geoengineering.

‘Unlocking solutions’

One of Quadrature’s first solar geoengineering grants went to the University of Washington’s Marine Cloud Brightening Program. In early April, that research group made headlines for beginning, and then being forced to halt, small-scale outdoor experiments on a decommissioned aircraft carrier sitting off the coast of Alameda, California. The effort entailed spraying a mist of small sea salt particles into the air. 

Quadrature was also one of the donors to a $20.5 million fund for the Washington, DC, nonprofit SilverLining, which was announced in early May. The group pools and distributes grants to solar geoengineering researchers around the world and has pushed for greater government support and funding for the field. The new fund will support that policy advocacy work as well as efforts to “promote equitable participation by all countries,” Kelly Wanser, executive director of SilverLining, said in an email.

She added that it’s crucial to accelerate solar geoengineering research because of the rising dangers of climate change, including the risk of passing “catastrophic tipping points.”

“Current climate projections may even underestimate risks, particularly to vulnerable populations, highlighting the urgent need to improve risk prediction and expand response strategies,” she wrote.

Quadrature has also issued grants for related work to Colorado State University, the University of Exeter, and the Geoengineering Model Intercomparison Project, an effort to run the same set of modeling experiments across an array of climate models. 

The foundation intends to direct its solar geoengineering funding to advance efforts in two main areas: academic research that could improve understanding of various approaches, and work to develop global oversight structures “to enable decision-making on [solar radiation modification] that is transparent, equitable, and science based.”

“We want to empower people to actually make informed decisions at some point,” De Temmerman says, stressing the particular importance of ensuring that people in the Global South are actively involved in such determinations. 

He says that Quadrature is not advocating for specific outcomes, taking no position on whether or not to ultimately use such tools. It also won’t support for-profit startups. 

In an emailed response to questions, he stressed that the funding for solar geoengineering is a tiny part of the foundation’s overall mission, representing just 5% of its $930 million portfolio. The lion’s share has gone to accelerate efforts to cut greenhouse-gas pollution, remove it from the atmosphere, and help vulnerable communities “respond and adapt to climate change to minimize harm.”

Billionaires Greg Skinner and Suneil Setiya founded both the Quadrature investment fund and the foundation. The nonprofit’s stated mission is unlocking solutions to the climate crisis, which it describes as “the most urgent challenge of our time.” But the group, which has 26 employees, has faced recent criticism for its benefactors’ stakes in oil and gas companies. Last summer, the Guardian reported that Quadrature Capital held tens of millions of dollars in investments in dozens of fossil-fuel companies, including ConocoPhillips and Cheniere Energy.

In response to a question about the potential for privately funded foundations to steer research findings in self-interested ways, or to create the perception that the results might be so influenced, De Temmerman stated: “We are completely transparent in our funding, ensuring it is used solely for public benefit and not for private gain.”

More foundations, more funds 

To be sure, a number of wealthy individuals and foundations have been providing funds for years to solar geoengineering research or policy work, or groups that collect funds to do so.

A 2021 paper highlighted contributions from a number of wealthy individuals, with a high concentration from the tech sector, including Microsoft cofounder Bill Gates, Facebook cofounder Dustin Moskovitz, Facebook alum and venture capitalist Matt Cohler, former Google executive (and extreme skydiver) Alan Eustace, and tech and climate solutions investors Chris and Crystal Sacca. It noted a number of nonprofits providing grants to the field as well, including the Hewlett Foundation, the Alfred P. Sloan Foundation, and the Blue Marble Fund.

But despite the backing of those high-net-worth individuals, the dollar figures have been low. From 2008 through 2018, total private funding only reached about $20 million, while government funding just topped $30 million. 

The spending pace is now picking up, though, as new players move in.

The Simons Foundation previously announced it would provide $50 million to solar geoengineering research over a five-year period. The New York–based nonprofit invited researchers to apply for grants of up to $500,000, adding that it “strongly” encouraged scientists in the Global South to do so. 

The organization is mostly supporting modeling and lab studies. It said it would not fund social science work or field experiments that would release particles into the environment. Proposals for such experiments have sparked heavy public criticism in the past.

Simons recently announced a handful of initial awards to researchers at Harvard, Princeton, ETH Zurich, the Indian Institute of Tropical Meteorology, the US National Center for Atmospheric Research, and elsewhere.

“For global warming, we will need as many tools in the toolbox as possible,” says David Spergel, president of the Simons Foundation. 

“This was an area where there was a lot of basic science to do, and a lot of things we didn’t understand,” he adds. “So we wanted to fund the basic science.”

In January, the Environmental Defense Fund hosted a meeting at its San Francisco headquarters to discuss the guardrails that should guide research on solar geoengineering, as first reported by Politico. EDF had already provided some support to the Solar Radiation Management Governance Initiative, a partnership with the Royal Society and other groups set up to “ensure that any geoengineering research that goes ahead—inside or outside the laboratory—is conducted in a manner that is responsible, transparent, and environmentally sound.” (It later evolved into the Degrees Initiative.)

But EDF has now moved beyond that work and is “in the planning stages of starting a research and policy initiative on [solar radiation modification],” said Lisa Dilling, associate chief scientist at the environmental nonprofit, in an email. That program will include regranting, which means raising funds from other groups or individuals and distributing them to selected recipients, and advocating for more public funding, she says. 

Outlier also provided a grant to a new nonprofit, Reflective. This organization is developing a road map to prioritize research needs and pooling philanthropic funding to accelerate work in the most urgent areas, says its founder, Dakota Gruener. 

Gruener was previously the executive director of ID2020, a nonprofit alliance that develops digital identification systems. Cornell’s MacMartin is a scientific advisor to the new nonprofit and will serve as the chair of the scientific advisory board.

Government funding is also slowly increasing. 

The US government started a solar geoengineering research program in 2019, funded through the National Oceanic and Atmospheric Administration, that currently provides about $11 million a year.

In February, the UK’s Natural Environment Research Council announced a £10.5 million, five-year research program. In addition, the UK’s Advanced Research and Invention Agency has said it’s exploring and soliciting input for a research program in climate and weather engineering.

Funding has not yet been allocated, but the agency’s programs typically provide around £50 million.

‘When, not if’

More funding is generally welcome news for researchers who hope to learn more about the potential of solar geoengineering. Many argue that it’s crucial to study the subject because the technology may offer ways to reduce death and suffering, and prevent the loss of species and the collapse of ecosystems. Some also stress it’s crucial to learn what impact these interventions might have and how these tools could be appropriately regulated, because nations may be tempted to implement them unilaterally in the face of extreme climate crises.

It’s likely a question of “when, not if,” and we should “act and research accordingly,” says Gernot Wagner, a climate economist at Columbia Business School, who was previously the executive director of Harvard’s Solar Geoengineering Research Program. “In many ways the time has come to take solar geoengineering much more seriously.”

In 2021, a National Academies report recommended that the US government create a solar geoengineering research program, equipped with $100 million to $200 million in funding over five years.

But there are differences between coordinated government-funded research programs, which have established oversight bodies to consider the merit, ethics, and appropriate transparency of proposed research, and a number of nonprofits with different missions providing funding to the teams they choose. 

To the degree that they create oversight processes that don’t meet the same standards, it could affect the type of science that’s done, the level of public notice provided, and the pressures that researchers feel to deliver certain results, says Duncan McLaren, a climate intervention fellow at the University of California, Los Angeles.

“You’re not going to be too keen on producing something that seems contrary to what you thought the grant maker was looking for,” he says, adding later: “Poorly governed research could easily give overly optimistic answers about what [solar geoengineering] could do, and what its side effects may or may not be.”

Whatever the motivations of individual donors, Buck fears that the concentration of money coming from high tech and finance could also create optics issues, undermining faith in research and researchers and possibly slowing progress in the field.

“A lot of this is going to backfire because it’s going to appear to people as Silicon Valley tech charging in and breaking things,” she says. 

Cloud controversy

Some of the concerns about privately funded work in this area are already being tested.

By most accounts, the Alameda experiment in marine cloud brightening that Quadrature backed was an innocuous basic-science project, which would not have actually altered clouds. But the team stirred up controversy by moving ahead without wide public notice.

City officials quickly halted the experiments, and earlier this month the city council voted unanimously to shut the project down.

Alameda mayor Marilyn Ezzy Ashcraft has complained that city staffers received only vague notice about the project up front. They were then inundated with calls from residents who had heard about it in the media and were concerned about the health implications, she said, according to CBS News.

In response to a question about the criticism, SilverLining’s Wanser said in an email: “We worked with the lease-holder, the USS Hornet, on the process for notifying the city of Alameda. The city staff then engaged experts to independently evaluate the health and environmental safety of the … studies, who found that they did not pose any environmental or health risks to the community.”

Wanser, who is a principal of the Marine Cloud Brightening Program, stressed they’ve also received offers of support from local residents and businesses.

“We think that the availability of data and information on the nature of the studies, and its evaluation by local officials, was valuable in helping people consider it in an informed way for themselves,” she added.

Some observers were also concerned that the research team said it selected its own six-member board to review the proposed project. That differs from a common practice with publicly funded scientific experiments, which often include a double-blind review process, in which neither the researchers nor the reviewers know each other’s names. The concern with breaking from that approach is that scientists could select outside researchers who they believe are likely to greenlight their proposals, and the reviewers may feel pressure to provide more favorable feedback than they might offer anonymously.

Wanser stressed that the team picked “distinguished researchers in the specialized field.”

“There are different approaches for different programs, and in this case, the levels of expertise and transparency were important features,” she added. “They have not received any criticism of the design of the studies themselves, which speaks to their robustness and their value.”

‘Transparent and responsible’

Solar geoengineering researchers often say that they too would prefer public funding, all things being equal. But they stress that those sources are still limited and it’s important to move the field forward in the meantime, so long as there are appropriate standards in place.

“As long as there’s clear transparency about funding sources, [and] there’s no direct influence on the research by the donors, I don’t precisely see what the problem is,” MacMartin says. 

Several nonprofits emerging or moving into this space said that they are working to create responsible oversight structures and rules.

Gruener says that Reflective won’t accept anonymous donations or contributions from people whose wealth comes mostly from fossil fuels. She adds that all donors will be disclosed, that they won’t have any say over the scientific direction of the organization or its chosen research teams, and that they can’t sit on the organization’s board. 

“We think transparency is the only way to build trust, and we’re trying to ensure that our governance structure, our processes, and the outcomes of our research are all public, understandable, and readily available,” she says.

In a statement, Outlier said it’s also in favor of more publicly supported work: “It’s essential for governments to become the leading funders and coordinators of research in these areas.” It added that it’s supporting groups working to accelerate “government leadership” on the subject, including through its grant to EDF. 

Quadrature’s De Temmerman stresses the importance of public research programs as well, noting that the nonprofit hopes to catalyze much more such funding through its support for government advocacy work. 

“We are here to push at the beginning and then at some point just let some other forms of capital actually come,” he says.

How to opt out of Meta’s AI training

14 June 2024 at 04:57

MIT Technology Review’s How To series helps you get things done. 

If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta can use your data to train its generative AI models beginning June 26, according to its recently updated privacy policy. Even if you don’t use any of Meta’s platforms, it can still scrape data such as photos of you if someone else posts them.

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place. 

In the meantime, they are running out of training data to build even bigger, more powerful models, and to Meta, your posts are a gold mine. 

If you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models in perpetuity, consider opting out. Although Meta does not guarantee it will allow this, it does say it will “review objection requests in accordance with relevant data protection laws.” 

What that means for US users

Users in the US or other countries without national data privacy laws don’t have any foolproof ways to prevent Meta from using their data to train AI, which has likely already been used for such purposes. Meta does not have an opt-out feature for people living in these places. 

A spokesperson for Meta says it does not use the content of people’s private messages to each other to train AI. However, public social media posts are seen as fair game and can be hoovered up into AI training data sets by anyone. Users who don’t want that can set their account settings to private to minimize the risk. 

The company has built in-platform tools that allow people to delete their personal information from chats with Meta AI, the spokesperson says.

How users in Europe and the UK can opt out 

Users in the European Union and the UK, which are protected by strict data protection regimes, have the right to object to their data being scraped, so they can opt out more easily. 

If you have a Facebook account:

1. Log in to your account. You can access the new privacy policy by following this link. At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link.

Alternatively, you can click on your account icon at the top right-hand corner. Select “Settings and privacy” and then “Privacy center.” On the left-hand side you will see a drop-down menu labeled “How Meta uses information for generative AI models and features.” Click on that, and scroll down. Then click on “Right to object.” 

2. Fill in the form with your information. The form requires you to explain how Meta’s data processing affects you. I was successful in my request by simply stating that I wished to exercise my right under data protection law to object to my personal data being processed. You will likely have to confirm your email address. 

3. You should soon receive both an email and a notification on your Facebook account confirming if your request has been successful. I received mine a minute after submitting the request.

If you have an Instagram account: 

1. Log in to your account. Go to your profile page, and click on the three lines at the top-right corner. Click on “Settings and privacy.”

2. Scroll down to the “More info and support” section, and click “About.” Then click on “Privacy policy.” At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link.

3. Repeat steps 2 and 3 as above. 

The Download: the rise of gamification, and carbon dioxide storage

13 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How gamification took over the world

It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one?

Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.

For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential. But instead of liberating us, gamification turned out to be just another tool for coercion, distraction, and control. Read the full story.

—Bryan Gardiner

This piece is from the forthcoming print issue of MIT Technology Review, which explores the theme of Play. It’s set to go live on Wednesday June 26, so if you don’t already, subscribe now to get a copy when it lands.

Why we need to shoot carbon dioxide thousands of feet underground

Carbon capture and storage (CCS) tech has two main steps. First, carbon dioxide is filtered out of emissions at facilities like fossil-fuel power plants. Then it gets locked away, or stored.  

Wrangling pollution might seem like the important bit, and there’s often a lot of focus on what fraction of emissions a CCS system can filter out. But without storage, the whole project would be pretty useless. It’s really the combination of capture and long-term storage that helps to reduce climate impact. 

Storage is getting more attention lately, though, and there’s something of a carbon storage boom coming, as my colleague James Temple covered in his latest story. Read on to find out where we might store captured carbon pollution, and why it matters.

—Casey Crownhart

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How Microsoft is building an AI empire
Its early investment in OpenAI helped it to leapfrog its old rival Google. (WSJ $)
+ OpenAI has lobbying regulators on its mind. (FT $)
+ Microsoft’s bet is paying off: OpenAI’s revenue has doubled. (The Information $)
+ Behind Microsoft CEO Satya Nadella’s push to get AI tools in developers’ hands. (MIT Technology Review)

2 Rapid tests to target antimicrobial resistance are on the rise
Fast and easy analysis of common infections would stop doctors resorting to antibiotics. (FT $)
+ How bacteria-fighting viruses could go mainstream. (MIT Technology Review)

3 Stable Diffusion’s new release is generating horrifying bodies
Its mangled generations inspire revulsion and amusement in equal measure. (Ars Technica)
+ Text-to-image AI models can be tricked into generating disturbing images. (MIT Technology Review)

4 A hacker broke into Tile’s location tracking system
And they’re holding customer data to ransom. (404 Media)

5 Inside the lucrative black market for Silicon Valley’s stolen bicycles 🚲
One man made it his mission to unveil the theft pipeline. (Wired $) 

6 What’s going on with Apple’s Vision Pro?
Analyst estimates suggest it hasn’t sold as well as expected. (NYT $)
+ It’s changing disabled users’ lives for the better. (NY Mag $)

7 Drone mapping is protecting slums from climate disasters
Because informal settlements aren’t visible on standard internet maps. (Bloomberg $)

8 The Excel World Championship is here
Spreadsheet fans, unite! (The Verge)

9 This humanoid robot can drive a car 🚗
That’s one solution to the problems posed by driverless cars. (TechCrunch)
+ Is robotics about to have its own ChatGPT moment? (MIT Technology Review)

10 America’s new cricket superstars are also tech workers 🏏
Saurabh Netravalkar, a software engineer for Oracle, is turning his hobby into a global spectacle. (WP $)

Quote of the day

“We desire more of the world than what’s available on 20cm of glass.”

—David Sax, author of the book The Revenge of Analog, tells the Guardian why some people are starting to turn their backs on smartphones.

The big story

The search for extraterrestrial life is targeting Jupiter’s icy moon Europa

February 2024

Europa, Jupiter’s fourth-largest moon, is nothing like ours. Its surface is a vast saltwater ocean, encased in a blanket of cracked ice, one that seems to occasionally break open and spew watery plumes into the moon’s thin atmosphere. 

For these reasons, Europa captivates planetary scientists. All that water and energy—and hints of elements essential for building organic molecules—point to another extraordinary possibility. Jupiter’s big, bright moon could host life. 

And they may eventually get some answers. Later this year, NASA plans to launch Europa Clipper, the largest-ever craft designed to visit another planet. Scheduled to reach Jupiter in 2030, it will spend four years analyzing this moon to determine whether it could support life. Read the full story.

—Stephen Ornes

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Boston’s newest sport, cliff diving, is attracting a lot of attention.
+ Why brat green has taken over the internet.
+ The annual Gloucestershire cheese-rolling race is bigger, and more perilous, than ever. 🧀
+ Relaxing summer vibes? Say no more.

Why we need to shoot carbon dioxide thousands of feet underground

13 June 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

There’s often one overlooked member in a duo. Peanut butter outshines jelly in a PB&J every time (at least in my eyes). For carbon capture and storage technology, the storage part tends to be the underappreciated portion. 

Carbon capture and storage (CCS) tech has two main steps (as you might guess from the name). First, carbon dioxide is filtered out of emissions at facilities like fossil-fuel power plants. Then it gets locked away, or stored.  

Wrangling pollution might seem like the important bit, and there’s often a lot of focus on what fraction of emissions a CCS system can filter out. But without storage, the whole project would be pretty useless. It’s really the combination of capture and long-term storage that helps to reduce climate impact. 

Storage is getting more attention lately, though, and there’s something of a carbon storage boom coming, as my colleague James Temple covered in his latest story. He wrote about what a rush of federal subsidies will mean for the CCS business in the US, and how supporting new projects could help us hit climate goals or push them further out of reach, depending on how we do it. 

The story got me thinking about the oft-forgotten second bit of CCS. Here’s where we might store captured carbon pollution, and why it matters. 

When it comes to storage, the main requirement is making sure the carbon dioxide can’t accidentally leak out and start warming up the atmosphere.

One surprising place that might fit the bill is oil fields. Instead of building wells to extract fossil fuels, companies are looking to build a new type of well where carbon dioxide that’s been pressurized until it reaches a supercritical state—in which liquid and gas phases don’t really exist—is pumped deep underground. With the right conditions (including porous rock deep down and a leak-preventing solid rock layer on top), the carbon dioxide will mostly stay put. 

Shooting carbon dioxide into the earth isn’t actually a new idea, though in the past it’s largely been used by the oil and gas industry for a very different purpose: pulling more oil out of the ground. In a process called enhanced oil recovery, carbon dioxide is injected into wells, where it frees up oil that’s otherwise tricky to extract. In the process, most of the injected carbon dioxide stays underground. 

But there’s a growing interest in sending the gas down there as an end in itself, sparked in part in the US by new tax credits in the Inflation Reduction Act. Companies can rake in $85 per ton of carbon dioxide that’s captured and permanently stored in geological formations, depending on the source of the gas and how it’s locked away. 

In his story, James took a look at one proposed project in California, where one of the state’s largest oil and gas producers has secured draft permits from federal regulators. The project would inject carbon dioxide about 6,000 feet below the surface of the earth, and the company’s filings say the project could store tens of millions of tons of carbon dioxide over the next couple of decades. 
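Combining the two figures above gives a rough sense of why these credits are drawing companies in. The snippet below is a back-of-the-envelope sketch: the $85-per-ton rate comes from the Inflation Reduction Act credit cited earlier, but the tonnage is an illustrative assumption standing in for “tens of millions of tons,” not a number from any company’s filings.

```python
# Back-of-the-envelope estimate of 45Q carbon-storage credit revenue.
CREDIT_PER_TON_USD = 85           # IRA credit for captured CO2 stored in geological formations
ASSUMED_TONS_STORED = 30_000_000  # hypothetical: 30 million tons over a couple of decades

total_usd = CREDIT_PER_TON_USD * ASSUMED_TONS_STORED
print(f"Potential credit revenue: ${total_usd / 1e9:.2f} billion")
```

Even under this conservative assumption, the credits add up to billions of dollars over a project’s lifetime, which helps explain the rush of new storage proposals.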

It’s not just land-based projects that are sparking interest, though. State officials in Texas recently awarded a handful of leases for companies to potentially store carbon dioxide deep underwater in the Gulf of Mexico.

And some companies want to store carbon dioxide in products and materials that we use, like concrete. Concrete is made by mixing reactive cement with water and material like sand; if carbon dioxide is injected into a fresh concrete mix, some of it will get involved in the reactions, trapping it in place. I covered how two companies tested out this idea in a newsletter last year.

Products we use every day, from diamonds to sunglasses, can be made with captured carbon dioxide. If we assume that those products stick around for a long time and don’t decompose (how valid this assumption is depends a lot on the product), one might consider these a form of long-term storage, though these markets probably aren’t big enough to make a difference in the grand scheme of climate change. 

Ultimately, though of course we need to emit less, we’ll still need to lock carbon away if we’re going to meet our climate goals.  


Now read the rest of The Spark

Related reading

For all the details on what to expect in the coming carbon storage boom, including more on the potential benefits and hazards of CCS, read James’s full story here.

This facility in Iceland uses mineral storage deep underground to lock away carbon dioxide that’s been vacuumed out of the atmosphere. See all the photos in this story from 2022.

Image: A Gogoro battery-swap station stands on the side of a road, with an Enel X system box attached; each of the four network station units holds 30 batteries. (Gogoro)

Another thing

When an earthquake struck Taiwan in April, the electrical grid faced some hiccups—and an unlikely hero quickly emerged in the form of battery-swap stations for electric scooters. In response to the problem, a group of stations stopped pulling power from the grid until it could recover. 

For more on how Gogoro is using battery stations as a virtual power plant to support the grid, check out my colleague Zeyi Yang’s latest story. And if you need a catch-up, check out this explainer on what a virtual power plant is and how it works.

Keeping up with climate  

New York was set to implement congestion pricing, charging cars that drove into the busiest part of Manhattan. Then the governor put that plan on hold indefinitely. It’s a move that reveals just how tightly Americans are clinging to cars, even as the future of climate action may depend on our loosening that grip. (The Atlantic)

Speaking of cars, preparations in Paris for the Olympics reveal what a future with fewer of them could look like. The city has closed over 100 streets to vehicles, jacked up parking rates for SUVs, and removed tens of thousands of parking spots. (NBC News)

An electric lawnmower could be the gateway to a whole new world. People who have electric lawn equipment or solar panels are more likely to electrify other parts of their homes, like heating and cooking. (Canary Media)

Companies are starting to look outside the battery. From massive moving blocks to compressed air in caverns, energy storage systems are getting weirder as the push to reduce prices intensifies. (Heatmap)

Rivian announced updated versions of its R1T and R1S vehicles. The changes reveal the company’s potential path toward surviving in a difficult climate for EV makers. (TechCrunch)

First responders in the scorching southwestern US are resorting to giant ice cocoons to help people suffering from extreme heat. (New York Times)

→ Here’s how much heat your body can take. (MIT Technology Review)

One oil producer is getting closer to making what it calls “net-zero oil” by pumping captured carbon dioxide down into wells to get more oil out. The implications for the climate and the future of fossil fuels in our economy are … complicated. (Cipher)

How gamification took over the world

13 June 2024 at 05:00

It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one? 

Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.

For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential.

In her 2010 TED Talk, “Gaming Can Make a Better World,” the game designer Jane McGonigal called this engaged state “blissful productivity.” “There’s a reason why the average World of Warcraft gamer plays for 22 hours a week,” she said. “It’s because we know when we’re playing a game that we’re actually happier working hard than we are relaxing or hanging out. We know that we are optimized as human beings to do hard and meaningful work. And gamers are willing to work hard all the time.”

McGonigal’s basic pitch was this: By making the real world more like a video game, we could harness the blissful productivity of millions of people and direct it at some of humanity’s thorniest problems—things like poverty, obesity, and climate change. The exact details of how to accomplish this were a bit vague (play more games?), but her objective was clear: “My goal for the next decade is to try to make it as easy to save the world in real life as it is to save the world in online games.”

While the word “gamification” never came up during her talk, by that time anyone following the big-ideas circuit (TED, South by Southwest, DICE, etc.) or using the new Foursquare app would have been familiar with the basic idea. Broadly defined as the application of game design elements and principles to non-game activities—think points, levels, missions, badges, leaderboards, reinforcement loops, and so on—gamification was already being hawked as a revolutionary new tool for transforming education, work, health and fitness, and countless other parts of life. 

Instead of liberating us, gamification turned out to be just another tool for coercion, distraction, and control.

Adding “world-saving” to the list of potential benefits was perhaps inevitable, given the prevalence of that theme in video-game storylines. But it also spoke to gamification’s foundational premise: the idea that reality is somehow broken. According to McGonigal and other gamification boosters, the real world is insufficiently engaging and motivating, and too often it fails to make us happy. Gamification promises to remedy this design flaw by engineering a new reality, one that transforms the dull, difficult, and depressing parts of life into something fun and inspiring. Studying for exams, doing household chores, flossing, exercising, learning a new language—there was no limit to the tasks that could be turned into games, making everything IRL better.

Today, we live in an undeniably gamified world. We stand up and move around to close colorful rings and earn achievement badges on our smartwatches; we meditate and sleep to recharge our body batteries; we plant virtual trees to be more productive; we chase “likes” and “karma” on social media sites and try to swipe our way toward social connection. And yet for all the crude gamelike elements that have been grafted onto our lives, the more hopeful and collaborative world that gamification promised more than a decade ago seems as far away as ever. Instead of liberating us from drudgery and maximizing our potential, gamification turned out to be just another tool for coercion, distraction, and control. 

Con game

This was not an unforeseeable outcome. From the start, a small but vocal group of journalists and game designers warned against the fairy-tale thinking and facile view of video games that they saw in the concept of gamification. Adrian Hon, author of You’ve Been Played, a recent book that chronicles its dangers, was one of them. 

“As someone who was building so-called ‘serious games’ at the time the concept was taking off, I knew that a lot of the claims being made around the possibility of games to transform people’s behaviors and change the world were completely overblown,” he says. 

Hon isn’t some knee-jerk polemicist. A trained neuroscientist who switched to a career in game design and development, he’s the co-creator of Zombies, Run!—one of the most popular gamified fitness apps in the world. While he still believes games can benefit and enrich aspects of our nongaming lives, Hon says a one-size-fits-all approach is bound to fail. For this reason, he’s firmly against both the superficial layering of generic points, leaderboards, and missions atop everyday activities and the more coercive forms of gamification that have invaded the workplace.


Ironically, it’s these broad and varied uses that make criticizing the practice so difficult. As Hon notes in his book, gamification has always been a fast-moving target, varying dramatically in scale, scope, and technology over the years. As the concept has evolved, so too have its applications, whether you think of the gambling mechanics that now encourage users of dating apps to keep swiping, the “quests” that compel exhausted Uber drivers to complete just a few more trips, or the utopian ambition of using gamification to save the world.

In the same way that AI’s lack of a fixed definition today makes it easy to dismiss any one critique for not addressing some other potential definition of it, gamification’s varied interpretations let its defenders deflect criticism. “I remember giving talks critical of gamification at gamification conferences, and people would come up to me afterwards and be like, ‘Yeah, bad gamification is bad, right? But we’re doing good gamification,’” says Hon. (They weren’t.) 

For some critics, the very idea of “good gamification” was anathema. Their main gripe with the term and practice was, and remains, that it has little to nothing to do with actual games.

“A game is about play and disruption and creativity and ambiguity and surprise,” wrote the late Jeff Watson, a game designer, writer, and educator who taught at the University of Southern California’s School of Cinematic Arts. Gamification is about the opposite—the known, the badgeable, the quantifiable. “It’s about ‘checking in,’ being tracked … [and] becoming more regimented. It’s a surveillance and discipline system—a wolf in sheep’s clothing. Beware its lure.”

Another game designer, Margaret Robertson, has argued that gamification should really be called “pointsification,” writing: “What we’re currently terming gamification is in fact the process of taking the thing that is least essential to games and representing it as the core of the experience. Points and badges have no closer a relationship to games than they do to websites and fitness apps and loyalty cards.”

For the author and game designer Ian Bogost, the entire concept amounted to a marketing gimmick. In a now-famous essay published in the Atlantic in 2011, he likened gamification to the moral philosopher Harry Frankfurt’s definition of bullshit—that is, a strategy intended to persuade or coerce without regard for actual truth. 

“The idea of learning or borrowing lessons from game design and applying them to other areas was never the issue for me,” Bogost told me. “Rather, it was not doing that—acknowledging that there’s something mysterious, powerful, and compelling about games, but rather than doing the hard work, doing no work at all and absconding with the spirit of the form.” 

Gaming the system

So how did a misleading term for a misunderstood process that’s probably just bullshit come to infiltrate virtually every part of our lives? There’s no one simple answer. But gamification’s meteoric rise starts to make a lot more sense when you look at the period that gave birth to the idea. 

The late 2000s and early 2010s were, as many have noted, a kind of high-water mark for techno-optimism. For people both inside the tech industry and out, there was a sense that humanity had finally wrapped its arms around a difficult set of problems, and that technology was going to help us squeeze out some solutions. The Arab Spring bloomed in 2011 with the help of platforms like Facebook and Twitter, money was more or less free, and “____ can save the world” articles were legion (with ____ being everything from “eating bugs” to “design thinking”).

This was also the era that produced the 10,000-hours rule of success, the long tail, the four-hour workweek, the wisdom of crowds, nudge theory, and a number of other highly simplistic (or, often, flat-out wrong) theories about the way humans, the internet, and the world work. 

“All of a sudden you had VC money and all sorts of important, high-net-worth people showing up at game developer conferences.”

Ian Bogost, author and game designer

Adding video games to this heady stew of optimism gave the game industry something it had long sought but never achieved: legitimacy. Even with games ascendant in popular culture—and on track to eclipse both the film and music industries in terms of revenue—they were still largely seen as a frivolous, productivity-squandering, violence-encouraging form of entertainment. Seemingly overnight, gamification changed all that. 

“There was definitely this black-sheep mentality in the game development community—the sense that what we had been doing for decades was just a joke to people,” says Bogost. “All of a sudden you had VC money and all sorts of important, high-net-worth people showing up at game developer conferences, and it was like, ‘Finally someone’s noticing. They realize that we have something to offer.’”

This wasn’t just flattering; it was intoxicating. Gamification took a derided pursuit and recast it as a force for positive change, a way to make the real world better. While enthusiastic calls to “build a game layer on top of reality” may sound dystopian to many of us today, the sentiment didn’t necessarily have the same ominous undertones at the end of the aughts. 

Combine the cultural recasting of games with an array of cheaper and faster technologies—GPS, ubiquitous and reliable mobile internet, powerful smartphones, Web 2.0 tools and services—and you arguably had all the ingredients needed for gamification’s rise. In a very real sense, reality in 2010 was ready to be gamified. Or to put it a slightly different way: Gamification was an idea perfectly suited for its moment. 

Gaming behavior

Fine, you might be asking at this point, but does it work? Surely, companies like Apple, Uber, Strava, Microsoft, Garmin, and others wouldn’t bother gamifying their products and services if there were no evidence of the strategy’s efficacy. The answer to the question, unfortunately, is super annoying: Define work.

Because gamification is so pervasive and varied, it’s hard to address its effectiveness in any direct or comprehensive way. But one can confidently say this: Gamification did not save the world. Climate change still exists. As do obesity, poverty, and war. Much of generic gamification’s power supposedly resides in its ability to nudge or steer us toward, or away from, certain behaviors using competition (challenges and leaderboards), rewards (points and achievement badges), and other sources of positive and negative feedback. 

Gamification is, and has always been, a way to induce specific behaviors in people using virtual carrots and sticks.

On that front, the results are mixed. Nudge theory lost much of its shine with academics in 2022 after a meta-analysis of previous studies concluded that, after correcting for publication bias, there wasn’t much evidence it worked to change behavior at all. Still, there are a lot of ways to nudge and a lot of behaviors to modify. The fact remains that plenty of people claim to be highly motivated to close their rings, earn their sleep crowns, or hit or exceed some increasingly ridiculous number of steps on their Fitbits (see humorist David Sedaris). 

Sebastian Deterding, a leading researcher in the field, argues that gamification can work, but its successes tend to be really hard to replicate. Not only do academics not know what works, when, and how, according to Deterding, but “we mostly have just-so stories without data or empirical testing.” 


In truth, gamification acolytes were always pulling from an old playbook—one that dates back to the early 20th century. Then, behaviorists like John B. Watson and B.F. Skinner saw human behaviors (a category that for Skinner included thoughts, actions, feelings, and emotions) not as the products of internal mental states or cognitive processes but, rather, as the result of external forces—forces that could conveniently be manipulated. 

If Skinner’s theory of operant conditioning, which doled out rewards to positively reinforce certain behaviors, sounds a lot like Amazon’s “Fulfillment Center Games,” which dole out rewards to compel workers to work harder, faster, and longer—well, that’s not a coincidence. Gamification is, and has always been, a way to induce specific behaviors in people using virtual carrots and sticks. 

Sometimes this may work; other times not. But ultimately, as Hon points out, the question of efficacy may be beside the point. “There is no before or after to compare against if your life is always being gamified,” he writes. “There isn’t even a static form of gamification that can be measured, since the design of coercive gamification is always changing, a moving target that only goes toward greater and more granular intrusion.” 

The game of life

Like any other art form, video games offer a staggering array of possibilities. They can educate, entertain, foster social connection, inspire, and encourage us to see the world in different ways. Some of the best ones manage to do all of this at once.

Yet for many of us, there’s the sense today that we’re stuck playing an exhausting game that we didn’t opt into. This one assumes that our behaviors can be changed with shiny digital baubles, constant artificial competition, and meaningless prizes. Even more insulting, the game acts as if it exists for our benefit—promising to make us fitter, happier, and more productive—when in truth it’s really serving the commercial and business interests of its makers. 

Metaphors can be an imperfect but necessary way to make sense of the world. Today, it’s not uncommon to hear talk of leveling up, having a God Mode mindset, gaining XP, and turning life’s difficulty settings up (or down). But the metaphor that resonates most for me—the one that seems to neatly capture our current predicament—is that of the NPC, or non-player character.  

NPCs are the “Sisyphean machines” of video games, programmed to follow a defined script forever and never question or deviate. They’re background players in someone else’s story, typically tasked with furthering a specific plotline or performing some manual labor. To call someone an NPC in real life is to accuse them of just going through the motions, not thinking for themselves, not being able to make their own decisions. This, for me, is gamification’s real end result. It’s acquiescence pretending to be empowerment. It strips away the very thing that makes games unique—a sense of agency—and then tries to mask that with crude stand-ins for accomplishment.

So what can we do? Given the reach and pervasiveness of gamification, critiquing it at this point can feel a little pointless, like railing against capitalism. And yet its own failed promises may point the way to a possible respite. If gamifying the world has turned our lives into a bad version of a video game, perhaps this is the perfect moment to reacquaint ourselves with why actual video games are great in the first place. Maybe, to borrow an idea from McGonigal, we should all start playing better games. 

Bryan Gardiner is a writer based in Oakland, California. 

The Download: Apple’s AI plans, and a carbon storage boom

12 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Apple is promising personalized AI in a private cloud. Here’s how that will work.

At its Worldwide Developer Conference on Monday, Apple for the first time unveiled its vision for supercharging its product lineup with artificial intelligence. The key feature, which will run across virtually all of its product line, is Apple Intelligence, a suite of AI-based capabilities that promises to deliver personalized AI services while keeping sensitive data secure. It represents Apple’s largest leap forward in using our private data to help AI do tasks for us. 

To make the case it can do this without sacrificing privacy, the company says it has built a new way to handle sensitive data in the cloud. The pitch offers an implicit contrast with the likes of Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal data. So how will it work? Read our story to find out.

—James O’Donnell

The world’s on the verge of a carbon storage boom

A growing number of carbon storage projects are on the way across California, the US, and the world—a trend driven by growing government subsidies, looming national climate targets, and declining revenue and growth in traditional oil and gas activities.

Proponents hope it’s the start of a sort of oil boom in reverse, kick-starting a process through which the world will eventually bury more greenhouse gas than it adds to the atmosphere. 

However, opponents insist these efforts will prolong the life of fossil-fuel plants, allow air and water pollution to continue, and create new health and environmental risks that could disproportionately harm disadvantaged communities surrounding the projects. Read the full story.

—James Temple

How Gogoro’s swap-and-go scooter batteries can strengthen the grid

If you’ve ever been to Taiwan, you’ve likely run into Gogoro’s green-and-white battery-swap stations. With 12,500 stations around the island, it’s built a sweeping network that allows users of electric scooters to drop off an empty battery and get a fully charged one immediately. 

Back in April, Gogoro’s network reacted to emergency blackouts after a 7.4 magnitude earthquake. Zeyi Yang, our China reporter, spoke to Horace Luke, Gogoro’s cofounder and CEO, to understand how it helped to boost the grid’s resilience in the face of disaster. Read the full story.

This story is from China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk has dropped his lawsuit against OpenAI
Just hours ahead of a scheduled hearing in San Francisco. (CNBC)
+ Musk had argued that OpenAI had breached its commitment to investors. (WP $)
+ The billionaire is locked in an ongoing dispute with Sam Altman. (FT $)

2 A far-right TikTok star is set on governing France
He uses the platform to normalize his party’s toxic policies for younger voters. (FT $)

3 Adderall is still in short supply across the US
Americans are hiring workers in the Philippines to source scarce prescriptions. (404 Media)

4 This startup 3D-printed an entire rocket engine
Within just 72 hours. (IEEE Spectrum)

5 Ozempic seems to have numerous health benefits beyond weight loss
But we’re not really sure why. (The Atlantic $)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL? (MIT Technology Review)

6 Meet the Spanish women taking on Wikipedia’s gender gap
They’re dedicated to publishing pages focused on unsung female heroes. (The Guardian)

7 The secret to a safe space flight? Software engineers
They’re essential to keeping missions on an even keel. (WP $)

8 Temu is threatening to dethrone eBay
The Chinese retail site is now attracting more repeat shoppers. (Bloomberg $)
+ This obscure shopping app is now America’s most downloaded. (MIT Technology Review)

9 How media companies became hooked on games
Blame Wordle. (NYT $)

10 The internet isn’t actually more toxic than it used to be
It just feels that way. (Bloomberg $)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“It’s the nail in the coffin for future creators launching a blog.”

—Amber Venz Box, co-founder of the social shopping app LTK, tells The Information that would-be bloggers should reconsider now that Google has launched its AI Overviews summary feature.

The big story

The world is moving closer to a new cold war fought with authoritarian tech

September 2022

Despite President Biden’s assurances that the US is not seeking a new cold war, one is brewing between the world’s autocracies and democracies—and technology is fueling it.

Authoritarian states are following China’s lead and are trending toward more digital rights abuses by increasing the mass digital surveillance of citizens, censorship, and controls on individual expression.

And while democracies also use massive amounts of surveillance technology, it’s the tech trade relationships between authoritarian countries that are enabling the rise of digitally enabled social control. Read the full story.

—Tate Ryan-Mosley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This tiny driftfish is a master of disguise.
+ Why the USA’s national forests are every bit as amazing as its national parks.
+ Follow these tips and you’ll be producing barista-level coffee in no time at all.
+ Feeling burnt out? Try playing these fun, short video games.

How Gogoro’s swap-and-go scooter batteries can strengthen the grid

By: Zeyi Yang
12 June 2024 at 06:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

If you’ve ever been to Taiwan, you’ve likely run into Gogoro’s green-and-white battery-swap stations in one city or another. With 12,500 stations around the island, Gogoro has built a sweeping network that allows users of electric scooters to drop off an empty battery and get a fully charged one immediately. Gogoro is also found in China, India, and a few other countries.
 
This morning, I published a story on how Gogoro’s battery-swap network in Taiwan reacted to emergency blackouts after the 7.4 magnitude earthquake there this April. I talked to Horace Luke, Gogoro’s cofounder and CEO, to understand how, within three seconds, over 500 Gogoro battery-swap locations stopped drawing electricity from the grid, helping to stabilize the power frequency.
 
Gogoro’s battery stations acted like something called a virtual power plant (VPP), an idea that’s being adopted around the world as a way to stitch renewable energy into the grid. The system draws energy from distributed sources like battery storage or small rooftop solar panels and coordinates those sources to increase supply when electricity demand peaks. As a result, it reduces reliance on traditional coal or gas power plants.
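To make the mechanism concrete, here is a minimal sketch in Python of the kind of frequency-triggered load shedding described above. The trip threshold and power figures are illustrative assumptions of mine, not Gogoro’s actual setpoints.

```python
# Hypothetical sketch of a VPP station's frequency watchdog. Taiwan's
# grid runs at a nominal 60 Hz; when frequency sags, supply is falling
# behind demand, so charging stations drop their load.

TRIP_THRESHOLD_HZ = 59.5  # assumed trigger frequency, for illustration

def should_shed_load(measured_hz: float) -> bool:
    """Falling frequency means demand exceeds supply: stop charging."""
    return measured_hz < TRIP_THRESHOLD_HZ

def station_demand_kw(measured_hz: float, charging_kw: float) -> float:
    """Power a station draws from the grid after the watchdog runs."""
    return 0.0 if should_shed_load(measured_hz) else charging_kw

# Normal operation: the station keeps charging batteries.
print(station_demand_kw(60.0, charging_kw=50.0))  # → 50.0
# Frequency sags after generation is knocked out: the station sheds load.
print(station_demand_kw(59.3, charging_kw=50.0))  # → 0.0
```

Multiply that simple rule across hundreds of stations and the network can shed megawatts within seconds, which is the grid-stabilizing effect Gogoro reported.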
 
There’s a natural synergy between technologies like battery swapping and virtual power plants. Not only can battery-swap stations coordinate charging times with the needs of the grid, but the idle batteries sitting in Gogoro’s stations can also become an energy reserve in times of emergency, potentially feeding energy back to the grid. If you want to learn more about how this system works, you can read the full story here.

Statistics shared by Gogoro and Enel X show how its battery-swap stations automatically stopped charging batteries on April 3 and April 15, when power outages caused by the earthquake pushed the grid frequency below normal levels.
GOGORO

When I talked to Gogoro’s Luke for this story, I asked him: “At what point in the company’s history did you come up with the idea to use these batteries for VPP networks?”
 
To my surprise, Luke answered: “Day one.”
 
As he explains, Gogoro was actually not founded to be an electric-scooter company; it was founded to be a “smart energy” company. 

“We started with the thesis of how smart energy, through portability and connectivity, can enable many use case scenarios,” Luke says. “Transportation happens to be accounting for something like 27% or 28% of your energy use in your daily life.” And that’s why the company first designed the batteries for two-wheeled vehicles, a popular transportation option in Taiwan and across Asia.
 
Having succeeded in promoting its scooters and the battery-swap charging method in Taiwan, it is now able to explore other possible uses of these modular, portable batteries—more than 1.4 million of which are in circulation at this point. 
 
“Think of smart, portable, connected energy like a propane tank,” Luke says. Depending on their size, propane tanks can be used to cook in the wild or to heat a patio. If lithium batteries can be modular and portable in a similar way, they can also serve many different purposes.

Using them in VPP programs that protect the grid from blackouts is one; beyond that, in Taipei City, Gogoro has worked with the local government to build energy backup stations for traffic lights, using the same batteries to keep the lights running in future blackouts. The batteries can also be used as backup power storage for critical facilities like hospitals. When a blackout happens, battery storage can release electricity much faster than diesel generators, keeping the impact at a minimum.

None of this would be possible without the recent advances that have made batteries more powerful and efficient. And it was clear from our conversation that Luke is obsessed with batteries—how far the technology has come, and its potential to address many more energy use cases in the future.

“I still remember getting my first flashlight when I was a little kid. That button just turned the little lightbulb on and off. And that was what was amazing about batteries at the time,” says Luke. “Never did people think that AA batteries were going to power calculators or the Walkman. The guy that invented the alkaline battery never thought that. We’ll continue to take that creativity and apply it to portable energy, and that’s what inspires us every day.”

What other purposes do you think portable lithium batteries like the ones made by Gogoro could have? Let me know your ideas by writing to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Far-right parties won big in the latest European Parliament elections, which could push the EU further toward a trade war with China. (Nikkei Asia $)
 
2. Volvo has started moving some of its manufacturing capacity from China to Belgium in order to avoid the European Union tariffs on Chinese imports. (The Times $)
 
3. Some major crypto exchanges have withdrawn from applying for business licenses in Hong Kong after the city government clarified that it doesn’t welcome businesses that offer crypto services to mainland China. (South China Morning Post $)
 
4. NewsBreak, the most downloaded news app in the US, does most of its engineering work in China. The app has also been found to use AI tools to make up local news that never happened. (Reuters $)
 
5. The Australian government ordered a China-linked fund to reduce its investment in an Australian rare-earth-mining company. (A/symmetric)
 
6. China just installed the largest offshore wind turbine in the world. It’s designed to generate enough power in a year for around 36,000 households. (Electrek)
 
7. Four college instructors from Iowa were stabbed on a visit to northern China. While the motive and identity of the assailant are still unknown, the incident has been quickly censored on the Chinese internet. (BBC)

Lost in translation

Qian Zhimin, a Chinese businesswoman who fled the country in 2017 after raising billions of dollars from Chinese investors in the name of bitcoin investments, was arrested in London and is facing trial in October this year, according to the Chinese publication Caijing. In the early 2010s, when the cryptocurrency first became known in China, Qian’s company lured over 128,000 retail investors, predominantly elderly people, into buying fraudulent investment products: bets on the price of bitcoin, plus gadgets like smart bracelets that could allegedly mine bitcoin. 
 
After the scam was exposed, Qian escaped to the UK with a fake passport. She controls over 61,000 bitcoins, now worth nearly $4 billion, and has been trying to liquidate them by buying properties in London. But those attempts caught the attention of anti-money-laundering authorities in the UK. With her trial date approaching, the victims in China are hoping to work with the UK jurisdiction to recover their assets.

One more thing

I know one day we will see self-driving vehicles racing each other and cutting each other off, but I didn’t expect it to happen so soon with two package delivery robots in China. Maybe it’s just their look, but it seems cuter than when human drivers do the same thing?

TBH, I was expecting a world where unmanned delivery vehicles racing each other on busy streets to come maybe 5 yrs from now, but JD & its subsidiary Dada are making it happen w/o hitting anything

RIP to China's delivery ppl pic.twitter.com/Ae1Wy4mWAj

— tphuang (@tphuang) June 9, 2024

The world’s on the verge of a carbon storage boom

12 June 2024 at 05:00

Pump jacks and pipelines clutter the Elk Hills oil field of California, a scrubby stretch of land in the southern Central Valley that rests above one of the nation’s richest deposits of fossil fuels.

Oil production has been steadily declining in the state for decades, as tech jobs have boomed and legislators have enacted rigorous environmental and climate rules. Companies, towns, and residents across Kern County, where the poverty rate hovers around 18%, have grown increasingly desperate for new economic opportunities.

Late last year, California Resources Corporation (CRC), one of the state’s largest oil and gas producers, secured draft permits from the US Environmental Protection Agency to develop a new type of well in the oil field, which it asserts would provide just that. If the company gets final approval from regulators, it intends to drill a series of boreholes down to a sprawling sedimentary formation roughly 6,000 feet below the surface, where it will inject tens of millions of metric tons of carbon dioxide to store it away forever. 

They’re likely to become California’s first set of what are known as Class VI wells, designed specifically for sequestering the planet-warming greenhouse gas. But many, many similar carbon storage projects are on the way across the state, the US, and the world—a trend driven by growing government subsidies, looming national climate targets, and declining revenue and growth in traditional oil and gas activities.

Since the start of 2022, companies like CRC have submitted nearly 200 applications in the US alone to develop wells of this new type. That offers one of the clearest signs yet that capturing the carbon dioxide pollution from industrial and energy operations instead of releasing it into the atmosphere is about to become a much bigger business. 

Proponents hope it’s the start of a sort of oil boom in reverse, kick-starting a process through which the world will eventually bury more greenhouse gas than it adds to the atmosphere. They argue that embracing carbon capture and storage (CCS) is essential to any plan to rapidly slash emissions. This is, in part, because retrofitting the world’s massive existing infrastructure with carbon dioxide–scrubbing equipment could be faster and easier than rebuilding every power plant and factory. CCS can be a particularly helpful way to cut emissions in certain heavy industries, like cement, fertilizer, and paper and pulp production, where we don’t have scalable, affordable ways of producing crucial goods without releasing carbon dioxide. 

“In the right context, CCS saves time, it saves money, and it lowers risks,” says Julio Friedmann, chief scientist at Carbon Direct and previously the principal deputy assistant secretary for the Department of Energy’s Office of Fossil Energy.

But opponents insist these efforts will prolong the life of fossil-fuel plants, allow air and water pollution to continue, and create new health and environmental risks that could disproportionately harm disadvantaged communities surrounding the projects, including those near the Elk Hills oil field.

“It’s the oil majors that are proposing and funding a lot of these projects,” says Catherine Garoupa, executive director of the Central Valley Air Quality Coalition, which has tracked a surge of applications for carbon storage projects throughout the district. “They see it as a way of extending business as usual and allowing them to be carbon neutral on paper while still doing the same old dirty practices.”

A slow start

The US federal government began overseeing injection wells in the 1970s. A growing number of companies had begun injecting waste underground, sparking a torrent of water pollution lawsuits and the passage of several major laws designed to ensure clean drinking water. The EPA developed standards and rules for a variety of wells and waste types, including deep Class I wells for hazardous or even radioactive refuse and shallower Class V wells for non-hazardous fluids.

In 2010, amid federal efforts to create incentives for industries to capture more carbon dioxide, the agency added Class VI wells for CO2 sequestration. To qualify, a proposed well site must have the appropriate geology, with a deep reservoir of porous rock that can accommodate carbon dioxide molecules sitting below a layer of nonporous “cap rock” like shale. The reservoir also needs to sit well below any groundwater aquifers, so that it won’t contaminate drinking water supplies, and it must be far enough from fault lines to reduce the chances that earthquakes might crack open pathways for the greenhouse gas to escape. 

The carbon sequestration program got off to a slow start. As of late 2021, there were only two Class VI injection wells in operation and 22 applications pending before regulators.

But there’s been a flurry of proposals since—both to the EPA and to the three states that have secured permission to authorize such wells themselves, which include North Dakota, Wyoming, and Louisiana. The Clean Air Task Force, a Boston-based energy policy think tank keeping track of such projects, says there are now more than 200 pending applications.

What changed is the federal incentives. The Inflation Reduction Act of 2022 dramatically boosted the tax credits available for permanently storing carbon dioxide in geological formations, bumping it up from $50 a ton to $85 when it’s captured from industrial and power plants. The credit rose from $50 to $180 a ton when the greenhouse gas is sourced from direct-air-capture facilities, a different technology that sucks greenhouse gas out of the air. Tax credits allow companies to directly reduce their federal tax obligations, which can cover the added expense of CCS across a growing number of sectors.

The separate Bipartisan Infrastructure Law also provided billions of dollars for carbon capture demonstration and pilot projects.

A tax credit windfall 

CRC became an independent company in 2014, when Occidental Petroleum, one of the world’s largest oil and gas producers, spun it off along with many of its California assets. But the new company quickly ran into financial difficulties, filing for bankruptcy protection in 2020 amid plummeting energy demand during the early stages of the covid-19 pandemic. It emerged several months later, after restructuring its debt, converting loans into equity, and raising new lines of credit. 

The following year, CRC created a carbon management subsidiary, Carbon TerraVault, seizing an emerging opportunity to develop a new business around putting carbon dioxide back underground, whether for itself or for customers. The company says it was also motivated by the chance to “help advance the energy transition and curb rising global temperatures at 1.5 °C.”

CRC didn’t respond to inquiries from MIT Technology Review.

In its EPA application, the company, based in Long Beach, California, says that hundreds of thousands of tons of carbon dioxide would initially be captured each year from a gas treatment facility in the Elk Hills area as well as a planned plant designed to produce hydrogen from natural gas. The carbon dioxide is purified and compressed before it’s pumped underground.

The company says the four wells for which it has secured draft permits could store nearly 1.5 million tons of carbon dioxide per year from those and other facilities, with a total capacity of 38 million tons over 26 years. CRC says the projects will create local jobs and help the state meet its pressing climate targets.

“We are committed to supporting the state in reaching carbon neutrality and developing a more sustainable future for all Californians,” Francisco Leon, chief executive of CRC, said of the draft EPA decision in a statement. 

Those wells, however, are just the start of the company’s carbon management plans: Carbon TerraVault has applied to develop 27 additional wells for carbon storage across the state, including two more at Elk Hills, according to the EPA’s permit tracker. If those are all approved and developed, it would transform the subsidiary into a major player in the emerging business of carbon storage—and set it up for a windfall in federal tax credits. 

Carbon sequestration projects can qualify for 12 years of US subsidies. If Carbon TerraVault injects half a million tons of carbon dioxide into each of the 31 wells it has applied for over that time period, the projects could secure tax credits worth more than $15.8 billion.

That figure doesn’t take inflation into account and assumes the company meets the most stringent requirements of the law and sources all the carbon dioxide from industrial facilities and power plants. The number could rise significantly if the company injects more than that amount into wells, or if a significant share of the carbon dioxide is sourced through direct air capture. 
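The arithmetic behind that estimate is straightforward. A quick sketch using the figures above (half a million tons per well per year, 31 wells, 12 years of credits, and the IRA’s $85-per-ton rate for industrial and power-plant capture):

```python
# Back-of-the-envelope check on the $15.8 billion figure.
# All inputs come from the figures cited in the article.
TONS_PER_WELL_PER_YEAR = 500_000   # half a million tons injected per well
WELLS = 31                         # 4 draft-permitted + 27 applied-for wells
CREDIT_YEARS = 12                  # years a sequestration project can claim the subsidy
CREDIT_PER_TON = 85                # dollars, industrial/power-plant capture rate

total_tons = TONS_PER_WELL_PER_YEAR * WELLS * CREDIT_YEARS
total_credits = total_tons * CREDIT_PER_TON
print(f"{total_tons:,} tons stored -> ${total_credits / 1e9:.2f} billion in credits")
# -> 186,000,000 tons stored -> $15.81 billion in credits
```

The same arithmetic shows why the direct-air-capture rate matters so much: swapping in the $180-per-ton credit for even part of that volume would roughly double the payout on those tons.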

Chevron, BP, ExxonMobil, and Archer Daniels Midland, a major producer of ethanol, have also submitted Class VI well applications to the EPA and could be poised to secure significant IRA subsidies as well.

To be sure, it takes years to secure regulatory permits, and not every proposed project will move forward in the end. The companies involved will still need to raise financing, add carbon capture equipment to polluting facilities, and in many cases build out carbon dioxide pipelines that require separate approvals. But the increased IRA tax credits could drive as much as 250 million metric tons of additional annual storage or use of carbon dioxide in the US by 2035, according to the latest figures from the Princeton-led REPEAT Project.

“It’s a gold rush,” Garoupa says. “It’s being shoved down our throats as ‘Oh, it’s for climate goals.’” But if we’re “not doing it judiciously and really trying to achieve real emissions reductions first,” she adds, it’s merely a distraction from the other types of climate action needed to prevent dangerous levels of warming. 

Carbon accounting

Even if CCS can help drive down emissions in the aggregate, the net climate benefits from any given project will depend on a variety of factors, including how well it’s developed and run—and what other changes it brings about throughout complex, interconnected energy systems over time.

Notably, adding carbon capture equipment to a plant doesn’t trap all the climate pollution. Project developers are generally aiming to capture around 90% of emissions. So building a new facility with CCS still increases emissions, rather than cutting them, relative to not building it at all.

In addition, the carbon capture process requires a lot of power to run, which may significantly increase emissions of greenhouse gas and other pollutants elsewhere by, for example, drawing on additional generation from natural-gas plants on the grid. Plus, the added tax incentives may make it profitable for a company to continue operating a fossil-fuel plant that it would otherwise have shut down or to run the facilities more hours of the day to generate more carbon dioxide to bury. 

All the uncaptured emissions associated with those changes can reduce, if not wipe out, any carbon benefits from incorporating CCS, says Danny Cullenward, a senior fellow with the Kleinman Center for Energy Policy at the University of Pennsylvania.
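A minimal sketch of that accounting, using assumed round numbers (the 90% capture rate matches the design target mentioned above; the energy-penalty fraction is purely hypothetical):

```python
# Illustrative net-abatement accounting for a single plant. The capture
# rate reflects the ~90% design target cited above; the energy-penalty
# fraction is an assumed value for demonstration only.
base_emissions = 1_000_000   # tons CO2/yr the plant would emit uncaptured
capture_rate = 0.90          # share of emissions the CCS equipment traps
energy_penalty = 0.15        # assumed extra grid emissions from powering capture

captured = base_emissions * capture_rate       # 900,000 tons buried
residual = base_emissions - captured           # 100,000 tons still vented
penalty = base_emissions * energy_penalty      # 150,000 tons added elsewhere
net_reduction = captured - penalty             # 750,000 tons net

print(f"net reduction: {net_reduction:,} tons "
      f"({net_reduction / base_emissions:.0%} of base emissions)")
```

Note that the tax credit pays on the gross 900,000 tons captured, not the 750,000-ton net figure, which is exactly the gap Cullenward describes.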

But none of that matters as far as the carbon storage subsidies are concerned. Businesses could even use the savings to expand their traditional oil and gas operations, he says.

“It’s not about the net climate impact—it’s about the gross tons you stick under ground,” Cullenward says of the tax credits.

A study last year raised a warning about how that could play out in the years to come, noting that the IRA may require the US to provide hundreds of billions to trillions of dollars in tax credits for power plants that add CCS. Under the scenarios explored, those projects could collectively deliver emissions reductions of as much as 24% or increases as high as 82%. The difference depends largely on how much the incentives alter energy production and the degree to which they extend the life of coal and natural-gas plants.

Coauthor Emily Grubert, an associate professor at Notre Dame and a former deputy assistant secretary at the Department of Energy, stressed that regulators must carefully consider these complex, cascading emissions impacts when weighing whether to approve such proposals.

“Not taking this seriously risks potentially trillions of dollars and billions of tonnes of [greenhouse-gas] emissions, not to mention the trust and goodwill of the American public, which is reasonably skeptical of these potentially critically important technologies,” she wrote in an op-ed in the industry outlet Utility Dive.

Global goals

Other nations and regions are also accelerating efforts to capture and store carbon as part of their broader efforts to lower emissions and combat climate change. The EU, which has dedicated tens of billions of euros to accelerating the development of CCS, is working to develop the capacity to store 50 million tons of carbon dioxide per year by 2030, according to the Global CCS Institute’s 2023 industry report.

Likewise, Japan hopes to sequester 240 million tons annually by 2050, while Saudi Arabia is aiming for 44 million tons by 2035. The industry trade group said there were 41 CCS projects in operation around the world at the time, with another 351 under development.

A handful of US facilities have been capturing carbon dioxide for decades for a variety of uses, including processing or producing natural gas, ammonia, and soda ash, which is used in soaps, cosmetics, baking soda, and other goods.

But Ben Grove, carbon storage manager at the Clean Air Task Force, says the increased subsidies in the IRA made CCS economical for many industry segments in the US, including chemicals, petrochemicals, hydrogen, cement, steel, and oil, gas, and ethanol refineries, at least on the low end of the estimated cost ranges.

In many cases, the available subsidies still won’t fully cover the added cost of CCS in power plants and certain other industrial facilities. But the broader hope is that these federal programs will help companies scale up and optimize these processes over time, driving down the cost of CCS and making it feasible for more sectors, Grove says.

‘Against all evidence’

In addition to the gas treatment and hydrogen plants, CRC says, another source for the captured carbon dioxide could eventually include its own Elk Hills Power Plant, which runs on natural gas extracted from the oil field. The company has said it intends to retrofit the facility to capture 1.5 million tons of emissions a year.

Still other sources could include renewable fuels plants, which may mean biofuel facilities, steam generators, and a proposed direct-air-capture plant that would be developed by the carbon-removal startup Avnos, according to the EPA filing. Carbon TerraVault is part of a consortium, which includes Avnos, Climeworks, Southern California Gas Company, and others, that has proposed developing a direct-air-capture hub in Kern County, where the Elk Hills field is located. Last year, the Department of Energy awarded the so-called California DAC Hub nearly $12 million to conduct engineering design studies for direct-air-capture facilities.

CCS may be a helpful tool for heavy industries that are really hard to clean up, but that’s largely not what CRC has proposed, says Natalia Ospina, legal director at the Center on Race, Poverty & the Environment, an environmental-justice advocacy organization in Delano, California. 

“The initial source will be the Elk Hills oil field itself and the plant that refines gas in the first place,” she says. “That is just going to allow them to extend the life of the oil and gas industry in Kern County, which goes against all the evidence in front of us in terms of how we should be addressing the climate crisis.”

Natalia Ospina, legal director at the Center on Race, Poverty & the Environment.

Critics of the project also fear that some of these facilities will continue producing other types of pollution, like volatile organic compounds and fine particulate matter, in a region that’s already heavily polluted. Some analyses show that adding a carbon capture process reduces those other pollutants in certain cases. But Ospina argues that oil and gas companies can’t be trusted to operate such projects in ways that reduce pollution to the levels necessary to protect neighboring communities.

‘You need it’

Still, a variety of studies, from the state level to the global, conclude that CCS may play an essential role in cutting greenhouse-gas emissions fast enough to moderate the global dangers of climate change.

California is banking heavily on capturing carbon from plants or removing it from the air through various means to meet its 2045 climate neutrality goal, aiming for 20 million metric tons by 2030 and 100 million by midcentury. The Air Resources Board, the state’s main climate regulator, declared that “there is no path to carbon neutrality without carbon removal and sequestration.” 

Recent reports from the UN’s climate panel have also stressed that carbon capture could be a “critical mitigation option” for cutting emissions from cement and chemical production. The scenarios in the body’s modeling studies that limit global warming to 1.5 °C over preindustrial levels rely on significant levels of CCS, including tens to hundreds of billions of tons of carbon dioxide captured this century from plants that use biomatter to produce heat and electricity, a process known as BECCS.

Meeting global climate targets without carbon capture would require shutting down about a quarter of the world’s fossil-fuel plants before they’ve reached the typical 50-year life span, the International Energy Agency notes. That’s an expensive proposition, and one that owners, investors, industry trade groups, and even nations will fiercely resist.

“Everyone keeps coming to the same conclusion, which is that you need it,” Friedmann says.

Lorelei Oviatt, director of the Kern County Planning and Natural Resources Department, declined to express an opinion about CRC’s Elk Hills project while local regulators are reviewing it. But she strongly supports the development of CCS projects in general, describing it as a way to help her region restore lost tax revenue and jobs as “the state puts the area’s oil companies out of business” through tighter regulations.

County officials have proposed the development of a more than 4,000-acre carbon management park, which could include hydrogen, steel, and biomass facilities with carbon-capture components. An economic analysis last year found that the campus and related activities could create more than 22,000 jobs, and generate more than $88 million in sales and property taxes for the economically challenged county and cities, under a high-end scenario. 

Oviatt adds that embracing carbon capture may also allow the region to avoid the “stranded asset” problem, in which major employers are forced to shut down expensive power plants, refineries, and extraction wells that could otherwise continue operating for years to decades.

“We’re the largest producer of oil in California and seventh in the country; we have trillions and trillions of dollars in infrastructure,” she says. “The idea that all of that should just be abandoned does not seem like a thoughtful way to design an economy.”

Carbon dioxide leaks

But critics fear that preserving it simply means creating new dangers for the disproportionately poor, unhealthy, and marginalized communities surrounding these projects.

In a 2022 letter to the EPA, the Center for Biological Diversity raised the possibility that the sequestered carbon dioxide could leak out of wells or pipelines, contributing to climate change and harming local residents.

These concerns are not without foundation.

In February 2020, Denbury Enterprises’ Delta pipeline, which stretches more than 100 miles between Mississippi and Louisiana, ruptured and released more than 30,000 barrels’ worth of compressed liquid carbon dioxide near the town of Satartia, Mississippi.

The leak forced hundreds of people to evacuate their homes and sent dozens to local hospitals, some struggling to breathe and others unconscious and foaming at the mouth, as the Huffington Post detailed in an investigative piece. Some vehicles stopped running as well: the carbon dioxide in the air displaced the oxygen that internal-combustion engines need to run.

There have also been repeated carbon dioxide releases over the last two decades at an enhanced oil recovery project at the Salt Creek oil field in Wyoming. Starting in the late 1800s, a variety of operators have drilled, abandoned, sealed, and resealed thousands of wells at the site, with varying degrees of quality, reliability, and documentation, according to the Natural Resources Defense Council. A sustained leak in 2004 emitted 12,000 cubic feet of the gas per day, on average, while a 2016 release of carbon dioxide and methane forced a school near the field to relocate its classes for the remainder of the year.

Some fear that similar issues could arise at Elk Hills, which could become the nation’s first carbon sequestration project developed in a depleted oil field. Companies have drilled and operated thousands of wells over decades at the site, many of which have sat idle and unplugged for years, according to a 2020 investigation by the Los Angeles Times and the Center for Public Integrity.

Ospina argues that CRC and county officials are asking the residents of Kern County to act as test subjects for unproven and possibly dangerous CCS use cases, compounding the health risks facing a region that is already exposed to too many.

Whether the Elk Hills project moves forward or not, the looming carbon storage boom will soon force many other areas to wrestle with similar issues. What remains to be seen is whether companies and regulators can adequately address community fears and demonstrate that the climate benefits promised in modeling studies will be delivered in reality. 

Update: This story was updated to remove a photo that was not of the Elk Hills oil field and had been improperly captioned.

Apple is promising personalized AI in a private cloud. Here’s how that will work.

11 June 2024 at 16:34

At its Worldwide Developer Conference on Monday, Apple for the first time unveiled its vision for supercharging its product lineup with artificial intelligence. The key feature, which will run across virtually all of its product line, is Apple Intelligence, a suite of AI-based capabilities that promises to deliver personalized AI services while keeping sensitive data secure.

It represents Apple’s largest leap forward in using our private data to help AI do tasks for us. To make the case it can do this without sacrificing privacy, the company says it has built a new way to handle sensitive data in the cloud.

Apple says its privacy-focused system will first attempt to fulfill AI tasks locally on the device itself. If any data is exchanged with cloud services, it will be encrypted and then deleted afterward. The company also says the process, which it calls Private Cloud Compute, will be subject to verification by independent security researchers. 

The pitch offers an implicit contrast with the likes of Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal data. Apple says any personal data passed on to the cloud will be used only for the AI task at hand and will not be retained or accessible to the company, even for debugging or quality control, after the model completes the request. 

Simply put, Apple is saying people can trust it to analyze incredibly sensitive data—photos, messages, and emails that contain intimate details of our lives—and deliver automated services based on what it finds there, without actually storing the data online or making any of it vulnerable. 

It showed a few examples of how this will work in upcoming versions of iOS. Instead of scrolling through your messages for that podcast your friend sent you, for example, you could simply ask Siri to find and play it for you. Craig Federighi, Apple’s senior vice president of software engineering, walked through another scenario: an email comes in pushing back a work meeting, but his daughter is appearing in a play that night. His phone can now find the PDF with information about the performance, predict the local traffic, and let him know if he’ll make it on time. These capabilities will extend beyond apps made by Apple, allowing developers to tap into Apple’s AI too. 

Because the company profits more from hardware and services than from ads, Apple has less incentive than some other companies to collect personal online data, allowing it to position the iPhone as the most private device. Even so, Apple has previously found itself in the crosshairs of privacy advocates. Security flaws led to leaks of explicit photos from iCloud in 2014. In 2019, contractors were found to be listening to intimate Siri recordings for quality control. Disputes about how Apple handles data requests from law enforcement are ongoing. 

The first line of defense against privacy breaches, according to Apple, is to avoid cloud computing for AI tasks whenever possible. “The cornerstone of the personal intelligence system is on-device processing,” Federighi says, meaning that many of the AI models will run on iPhones and Macs rather than in the cloud. “It’s aware of your personal data without collecting your personal data.”

That presents some technical obstacles. Two years into the AI boom, pinging models for even simple tasks still requires enormous amounts of computing power. Accomplishing that with the chips used in phones and laptops is difficult, which is why only the smallest of Google’s AI models can be run on the company’s phones, and everything else is done via the cloud. Apple says its ability to handle AI computations on-device is due to years of research into chip design, leading to the M1 chips it began rolling out in 2020.

Yet even Apple’s most advanced chips can’t handle the full spectrum of tasks the company promises to carry out with AI. If you ask Siri to do something complicated, it may need to pass that request, along with your data, to models that are available only on Apple’s servers. This step, security experts say, introduces a host of vulnerabilities that may expose your information to outside bad actors, or at least to Apple itself.

“I always warn people that as soon as your data goes off your device, it becomes much more vulnerable,” says Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project and practitioner in residence at NYU Law School’s Information Law Institute. 

Apple claims to have mitigated this risk with its new Private Cloud Compute system. “For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud,” Apple security experts wrote in their announcement, stating that personal data “isn’t accessible to anyone other than the user—not even to Apple.” How does it work?

Historically, Apple has encouraged people to opt in to end-to-end encryption (the same type of technology used in messaging apps like Signal) to secure sensitive iCloud data. But that doesn’t work for AI. Unlike messaging apps, where a company like WhatsApp does not need to see the contents of your messages in order to deliver them to your friends, Apple’s AI models need unencrypted access to the underlying data to generate responses. This is where Apple’s privacy process kicks in. First, Apple says, data will be used only for the task at hand. Second, this process will be verified by independent researchers. 

Needless to say, the architecture of this system is complicated, but you can imagine it as an encryption protocol. If your phone determines it needs the help of a larger AI model, it will package a request containing the prompt it’s using and the specific model, and then put a lock on that request. Only the specific AI model to be used will have the proper key.
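To make the lock-and-key analogy concrete, here is a deliberately toy sketch of sealing a request so that only the holder of the matching key can read it. This is not Apple’s actual protocol, which has not been published in code form, and a real deployment would use vetted asymmetric cryptography rather than this hash-based keystream; the function names and key handling are entirely hypothetical.

```python
import hashlib
import secrets

# Toy illustration only: a request "sealed" so that only the model holding
# the matching key can open it. Real systems use vetted asymmetric crypto;
# this SHA-256 keystream construction is purely conceptual.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream of the given length from a key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(request: bytes, model_key: bytes) -> bytes:
    """XOR the request with the model's keystream to 'lock' it."""
    return bytes(a ^ b for a, b in zip(request, keystream(model_key, len(request))))

def open_sealed(blob: bytes, model_key: bytes) -> bytes:
    return seal(blob, model_key)  # XOR is its own inverse

model_key = secrets.token_bytes(32)   # held only by the target AI model
other_key = secrets.token_bytes(32)   # any other party, including eavesdroppers

blob = seal(b"summarize my inbox", model_key)
assert open_sealed(blob, model_key) == b"summarize my inbox"   # right key opens it
assert open_sealed(blob, other_key) != b"summarize my inbox"   # wrong key gets noise
```

The point the sketch illustrates is the routing guarantee Apple describes: a sealed request is useless to every server except the one running the specific model it was addressed to.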

When asked by MIT Technology Review whether users will be notified when a certain request is sent to cloud-based AI models instead of being handled on-device, an Apple spokesperson said there will be transparency to users but that further details aren’t available.

Dawn Song, codirector of the UC Berkeley Center on Responsible Decentralized Intelligence and an expert in private computing, says Apple’s new developments are encouraging. “The list of goals that they announced is well thought out,” she says. “Of course there will be some challenges in meeting those goals.”

Cahn says that to judge from what Apple has disclosed so far, the system seems much more privacy-protective than other AI products out there today. That said, the common refrain in his space is “Trust but verify.” In other words, we won’t know how secure these systems keep our data until independent researchers can verify its claims, as Apple promises they will, and the company responds to their findings.

“Opening yourself up to independent review by researchers is a great step,” he says. “But that doesn’t determine how you’re going to respond when researchers tell you things you don’t want to hear.” Apple did not respond to questions from MIT Technology Review about how the company will evaluate feedback from researchers.

The privacy-AI bargain

Apple is not the only company betting that many of us will grant AI models mostly unfettered access to our private data if it means they could automate tedious tasks. OpenAI’s Sam Altman described his dream AI tool to MIT Technology Review as one “that knows absolutely everything about my whole life, every email, every conversation I’ve ever had.” At its own developer conference in May, Google announced Project Astra, an ambitious project to build a “universal AI agent that is helpful in everyday life.”

It’s a bargain that will force many of us to consider for the first time what role, if any, we want AI models to play in how we interact with our data and devices. When ChatGPT first came on the scene, that wasn’t a question we needed to ask. It was simply a text generator that could write us a birthday card or a poem, and the questions it raised—like where its training data came from or what biases it perpetuated—didn’t feel quite as personal. 

Now, less than two years later, Big Tech is making billion-dollar bets that we trust the safety of these systems enough to fork over our private information. It’s not yet clear if we know enough to make that call, or how able we are to opt out even if we’d like to. “I do worry that we’re going to see this AI arms race pushing ever more of our data into other people’s hands,” Cahn says.

Apple will soon release beta versions of its Apple Intelligence features, starting this fall with the iPhone 15 and the new macOS Sequoia, which can be run on Macs and iPads with M1 chips or newer. Says Apple CEO Tim Cook, “We think Apple intelligence is going to be indispensable.”

The Download: fighting blackouts with battery-swap networks, and AI surgery monitoring

11 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How battery-swap networks are preventing emergency blackouts

On the morning of April 3, Taiwan was hit by a 7.4 magnitude earthquake. Seconds later, hundreds of battery-swap stations in Taiwan sensed something else: the power frequency of the electric grid took a sudden drop, a signal that some power plants had been disconnected in the disaster. The grid was now struggling to meet energy demand.

These stations, built by the Taiwanese company Gogoro for electric-powered two-wheeled vehicles like scooters, mopeds, and bikes, reacted immediately. According to numbers provided by the company, 590 Gogoro battery-swap locations (some of which have more than one swap station) stopped drawing electricity from the grid, lowering local demand by a total six megawatts—enough to power thousands of homes. It took 12 minutes for the grid to recover, and the battery-swap stations then resumed normal operation.
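The behavior those stations exhibited is a form of frequency-triggered demand response. A minimal sketch of the control logic, with the trip threshold assumed for illustration (Gogoro’s actual control system is not public) and the per-station load derived from the article’s figures of roughly 6 MW across 590 locations:

```python
# Frequency-triggered demand response, sketched from the article's numbers.
# The trip threshold is an assumed value; Taiwan's grid runs at 60 Hz.
NOMINAL_HZ = 60.0
TRIP_THRESHOLD_HZ = 59.5    # hypothetical trip point for illustration
STATION_DRAW_KW = 10.0      # ~6 MW / 590 locations, roughly 10 kW each

def station_demand_kw(grid_hz: float) -> float:
    """Pause battery charging when grid frequency sags below the threshold."""
    if grid_hz < TRIP_THRESHOLD_HZ:
        return 0.0          # shed load: stop drawing from the grid
    return STATION_DRAW_KW  # normal operation: keep charging swap batteries

print(station_demand_kw(60.0))   # normal grid
print(station_demand_kw(59.2))   # post-earthquake sag: station sheds load
```

Because swap stations are, at heart, racks of batteries, pausing their draw (or in more advanced setups, discharging back to the grid) is what lets a fleet of them act as the virtual power plant described below.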

Gogoro is not the only company working on battery-swapping for electric scooters—New York City recently launched a pilot program to give delivery drivers the option to charge this way—but it’s certainly one of the most successful.

Now the company is putting the battery network to another use: Gogoro is working to incorporate the stations into a virtual power plant (VPP) system that helps the Taiwanese grid stay more resilient in emergencies like April’s earthquake. Read the full story.

—Zeyi Yang

What using artificial intelligence to help monitor surgery can teach us

Every year, some 22,000 Americans are killed as a result of serious medical errors in hospitals, many of them on operating tables. There have been cases where surgeons have left surgical sponges inside patients’ bodies or performed the wrong procedure altogether.

Teodor Grantcharov, a professor of surgery at Stanford, thinks he has found a tool to make surgery safer and minimize human error: AI-powered “black boxes” in operating theaters that work in a similar way to an airplane’s black box.

These devices, built by Grantcharov’s company Surgical Safety Technologies, record everything in the operating room via panoramic cameras, microphones in the ceiling, and anesthesia monitors before using artificial intelligence to help surgeons make sense of the data. 

These black boxes are in use in almost 40 institutions in the US, Canada, and Western Europe, from Mount Sinai to Duke to the Mayo Clinic. Organizations in all sectors are thinking about how to adopt AI to make things safer or more efficient. What this example from hospitals shows is that the situation is not always clear cut, and there are many pitfalls you need to avoid. Read the full story.

—Melissa Heikkilä

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple is weaving AI into its apps and devices
It promises that its Apple Intelligence system will preserve user privacy. (NYT $)
+ Crucially, users won’t be strong-armed into using ChatGPT. (FT $)
+ If you missed the WWDC keynote, here’s a summary of the key announcements. (WP $)

2 Adobe says it won’t train AI on its customers’ work
Following a major backlash from users who feared just that. (The Verge)
+ Artists are increasingly worried that their work will be reduced to training data. (Slate $)
+ How Adobe’s bet on non-exploitative AI is paying off. (MIT Technology Review)

3 Thermoelectricity between liquid metals has been observed for the first time
It could lead to better-designed liquid batteries. (IEEE Spectrum)
+ Zinc batteries that offer an alternative to lithium just got a big boost. (MIT Technology Review)

4 The Titan submersible disaster could have been avoided
Former Oceangate workers claim its CEO lied about the vessel’s safety. (Wired $)

5 Solar-powered planes are becoming a reality
They’re super light, and super-sustainable. (WSJ $)
+ Everything you need to know about the wild world of alternative jet fuels. (MIT Technology Review)

6 A crowd-measuring AI tool helps cut through protest misinformation
It suggests that the size of a crowd gathered in support of the former Brazilian president Bolsonaro was less than a third of what was claimed. (Rest of World)

7 New tools could lower methane emissions from livestock 🐄
Breeding animals that emit less methane is one approach. (Knowable Magazine)

8 At least advertisers are enjoying the metaverse
Everyone else, not so much. (FT $)
+ Welcome to the oldest part of the metaverse. (MIT Technology Review)

9 AI is helping us to decipher how elephants communicate
They call each other by their names! (The Guardian) 🐘
+ They speak to each other using individualized rumble sounds. (NYT $)

10 TikTok is bringing talk shows to city streets
No studio, no problem. (Insider $)

Quote of the day

“Visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage.”

—Elon Musk threatens to ban Apple products from his companies if the iPhone maker integrates OpenAI at the operating system level, Reuters reports. 

The big story

Why we can no longer afford to ignore the case for climate adaptation

August 2022

Back in the 1990s, anyone suggesting that we’d need to adapt to climate change while also cutting emissions was met with suspicion. Most climate change researchers felt adaptation studies would distract from the vital work of keeping pollution out of the atmosphere to begin with.

Despite this hostile environment, a handful of experts were already sowing the seeds for a new field of research called “climate change adaptation”: study and policy on how the world could prepare for and adapt to the new disasters and dangers brought forth on a warming planet. Today, their research is more important than ever. Read the full story

—Madeline Ostrander

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ With the return of House of the Dragon, what’s next for the Game of Thrones franchise?
+ Here’s what top chefs like to put in their sandwiches: barbecue sauce and pickled okra.
+ How not to take the good stuff in life for granted.
+ Save the Long Island cheese pumpkin, and other endangered foods!

What using artificial intelligence to help monitor surgery can teach us

11 June 2024 at 05:30

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Every year, some 22,000 Americans are killed as a result of serious medical errors in hospitals, many of them on operating tables. There have been cases where surgeons have left surgical sponges inside patients’ bodies or performed the wrong procedure altogether.

Teodor Grantcharov, a professor of surgery at Stanford, thinks he has found a tool to make surgery safer and minimize human error: AI-powered “black boxes” in operating theaters that work in a similar way to an airplane’s black box. These devices, built by Grantcharov’s company Surgical Safety Technologies, record everything in the operating room via panoramic cameras, microphones in the ceiling, and anesthesia monitors before using artificial intelligence to help surgeons make sense of the data. They capture the entire operating room as a whole, from the number of times the door is opened to how many non-case-related conversations occur during an operation.

These black boxes are in use in almost 40 institutions in the US, Canada, and Western Europe, from Mount Sinai to Duke to the Mayo Clinic. But are hospitals on the cusp of a new era of safety—or creating an environment of confusion and paranoia? Read the full story by Simar Bajaj here

This resonated with me as a story with broader implications. Organizations in all sectors are thinking about how to adopt AI to make things safer or more efficient. What this example from hospitals shows is that the situation is not always clear cut, and there are many pitfalls you need to avoid. 

Here are three lessons about AI adoption that I learned from this story: 

1. Privacy is important, but not always guaranteed. Grantcharov realized very quickly that the only way to get surgeons to use the black box was to make them feel protected from possible repercussions. He has designed the system to record actions but hide the identities of both patients and staff, even deleting all recordings within 30 days. His idea is that no individual should be punished for making a mistake. 

The black boxes render each person in the recording anonymous; an algorithm distorts people’s voices and blurs out their faces, transforming them into shadowy, noir-like figures. So even if you know what happened, you can’t use it against an individual. 

But this process is not perfect. Before 30-day-old recordings are automatically deleted, hospital administrators can still see the operating room number, the time of the operation, and the patient’s medical record number, so even if personnel are technically de-identified, they aren’t truly anonymous. The result is a sense that “Big Brother is watching,” says Christopher Mantyh, vice chair of clinical operations at Duke University Hospital, which has black boxes in seven operating rooms.

2. You can’t adopt new technologies without winning people over first. People are often justifiably suspicious of the new tools, and the system’s flaws when it comes to privacy are part of why staff have been hesitant to embrace it. Many doctors and nurses actively boycotted the new surveillance tools. In one hospital, the cameras were sabotaged by being turned around or deliberately unplugged. Some surgeons and staff refused to work in rooms where they were in place.

At the hospital where some of the cameras were initially sabotaged, it took up to six months for surgeons to get used to them. But things went much more smoothly once staff understood the guardrails around the technology. They started trusting it more after one-on-one conversations in which bosses explained how the data was automatically de-identified and deleted.

3. More data doesn’t always lead to solutions. You shouldn’t adopt new technologies for the sake of adopting new technologies, if they are not actually useful. But to determine whether AI technologies work for you, you need to ask some hard questions. Some hospitals have reported small improvements based on black-box data. Doctors at Duke University Hospital use the data to check how often antibiotics are given on time, and they report turning to this data to help decrease the amount of time operating rooms sit empty between cases. 

But getting buy-in from some hospitals has been difficult, because there haven’t yet been any large, peer-reviewed studies showing how black boxes actually help to reduce patient complications and save lives. Mount Sinai’s chief of general surgery, Celia Divino, says that too much data can be paralyzing. “How do you interpret it? What do you do with it?” she asks. “This is always a disease.”

Read the full story by Simar Bajaj here.


Now read the rest of The Algorithm

Deeper Learning

How a simple circuit could offer an alternative to energy-intensive GPUs

On a table in his lab at the University of Pennsylvania, physicist Sam Dillavou has connected an array of breadboards via a web of brightly colored wires. The setup looks like a DIY home electronics project—and not a particularly elegant one. But this unassuming assembly, which contains 32 variable resistors, can learn to sort data like a machine-learning model. The hope is that the prototype will offer a low-power alternative to the energy-guzzling graphics processing unit (GPU) chips widely used in machine learning. 

Why this matters: AI chips are expensive, and there aren’t enough of them to meet the current demand fueled by the AI boom. Training a large language model takes the same amount of energy as the annual consumption of more than a hundred US homes, and generating an image with generative AI uses as much energy as charging your phone. Dillavou and his colleagues built this circuit as an exploratory effort to find better computing designs. Read more from Sophia Chen here.

Bits and Bytes

Propagandists are using AI too—and companies need to be open about it
OpenAI has reported on influence operations that use its AI tools. Such reporting, alongside data sharing, should become the industry norm, argue Josh A. Goldstein and Renée DiResta. (MIT Technology Review)

Digital twins are helping scientists run the world’s most complex experiments
Engineers use the high-fidelity models to monitor operations, plan fixes, and troubleshoot problems. Digital twins can also use artificial intelligence and machine learning to help make sense of vast amounts of data. (MIT Technology Review)

Silicon Valley is in an uproar over California’s proposed AI safety bill
The bill would force companies to create a “kill switch” to turn off powerful AI models, guarantee they will not build systems with “hazardous capabilities such as creating bioweapons,” and report their safety testing. Tech companies argue that this would “hinder innovation” and kill open-source development in California. The tech sector loathes regulation, so expect this bill to face a lobbying storm. (FT)

OpenAI offers a peek inside the guts of ChatGPT
The company released a new research paper identifying how the AI model that powers ChatGPT works and how it stores certain concepts. The paper was written by the company’s now-defunct superalignment team, which was disbanded after its leaders, including OpenAI cofounder Ilya Sutskever, left the company. OpenAI has faced criticism from former employees who argue that the company is rushing to build AI and ignoring the risks. (Wired)

The AI search engine Perplexity is directly ripping off content from news outlets
The buzzy startup, which has been touted as a challenger to Google Search, has republished parts of exclusive stories from multiple publications, including Forbes and Bloomberg, with inadequate attribution. It’s an ominous sign of what could be coming for news media. (Forbes)

It looked like a reliable news site. It was an AI chop shop.
A wild story about how a site called BNN Breaking, which had amassed millions of readers, an international team of journalists, and a publishing deal with Microsoft, was actually just regurgitating AI-generated content riddled with errors. (NYT)

How battery-swap networks are preventing emergency blackouts

By: Zeyi Yang
11 June 2024 at 05:00

On the morning of April 3, Taiwan was hit by a 7.4 magnitude earthquake. Seconds later, hundreds of battery-swap stations in Taiwan sensed something else: the power frequency of the electric grid took a sudden drop, a signal that some power plants had been disconnected in the disaster. The grid was now struggling to meet energy demand. 

These stations, built by the Taiwanese company Gogoro for electric-powered two-wheeled vehicles like scooters, mopeds, and bikes, reacted immediately. According to numbers provided by the company, 590 Gogoro battery-swap locations (some of which have more than one swap station) stopped drawing electricity from the grid, lowering local demand by a total of six megawatts—enough to power thousands of homes. It took 12 minutes for the grid to recover, and the battery-swap stations then resumed normal operation.
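What the stations did is a classic piece of under-frequency demand response: a drop in grid frequency signals lost generation, and loads shed themselves in response. Here is a minimal sketch of that logic; the threshold, station names, and power figures are invented for illustration and are not Gogoro's actual parameters.

```python
TRIP_HZ = 59.5  # hypothetical under-frequency threshold (Taiwan's grid runs at 60 Hz)

class SwapStation:
    """One battery-swap location drawing power to charge its racks."""
    def __init__(self, name, draw_kw):
        self.name = name
        self.draw_kw = draw_kw   # power drawn from the grid while charging
        self.charging = True

    def curtail(self):
        # Stop drawing grid power; batteries on the racks simply wait.
        self.charging = False

def on_frequency_sample(stations, grid_hz):
    """React to one frequency reading; a sudden drop means lost generation."""
    if grid_hz < TRIP_HZ:
        for s in stations:
            s.curtail()
    # Return the demand still on the grid after any curtailment.
    return sum(s.draw_kw for s in stations if s.charging)

stations = [SwapStation(f"station-{i}", draw_kw=10.0) for i in range(590)]
before = sum(s.draw_kw for s in stations)
after = on_frequency_sample(stations, grid_hz=59.2)  # earthquake-style event
print(f"{(before - after) / 1000:.1f} MW shed")  # prints "5.9 MW shed"
```

A production system would also add hysteresis and a recovery rule, since the real stations resumed charging once the grid stabilized about 12 minutes later.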

Gogoro is not the only company working on battery-swapping for electric scooters (New York City recently launched a pilot program to give delivery drivers the option to charge this way), but it’s certainly one of the most successful. Founded in 2011, the firm has a network of over 12,500 stations across Taiwan and boasts over 600,000 monthly subscribers who pay to swap batteries in and out when required. Each station is roughly the size of two vending machines and can hold around 30 scooter batteries.

Now the company is putting the battery network to another use: Gogoro has been working with Enel X, an Italian company, to incorporate the stations into a virtual power plant (VPP) system that helps the Taiwanese grid stay more resilient in emergencies like April’s earthquake. 

Battery-swap stations work well for VPP programs because they offer so much more flexibility than charging at home, where an electric-bike owner usually has just one or two batteries and thus must charge immediately after one runs out. With dozens of batteries in a single station as a demand buffer, Gogoro can choose when it charges them—for instance, doing so at night when there’s less power demand and it’s cheaper. In the meantime, the batteries can give power back to the grid when it is stressed—hence the comparison to power plants.

“What is beautiful is that the stations’ economic interest is aligned with the grid—the [battery-swap companies] have the incentive to time their charges during the low utilization period, paying the low electricity price, while feeding electricity back to the grid during peak period, enjoying a higher price,” says S. Alex Yang, a professor of management science at London Business School. 

Gogoro is uniquely positioned to become a vital part of the VPP network because “there’s a constant load in energy, and then at the same time, we’re on standby that we can either stop taking or giving back [power] to the grid to provide stability,” Horace Luke, cofounder and CEO of Gogoro, tells MIT Technology Review.

Luke estimates that about 90% of Gogoro batteries are on the road powering scooters at any given time, so the rest, sitting on the racks waiting for customers to pick them up, become a valuable resource that can be utilized by the grid. 

Today, more than 1,000 of Gogoro’s 2,500 locations are part of the VPP program. Gogoro promises that the system will automatically detect emergencies and, in response, immediately lower its consumption by a certain total amount.

Which stations get included in the VPP depends on where they are and how much capacity they have. A smaller station right outside a metro stop—meaning high demand and low supply—probably can’t afford to stop charging during an emergency because riders could come looking for a battery soon. But a megastation with 120 batteries in a residential area is probably safe to stop charging batteries for a while.

Plus, the entire station doesn’t go dark—Gogoro has a built-in system that decides which or how many batteries in a station stop charging. “We know exactly which batteries to spin down, which station to spin down, how much to spin down,” says Luke. “That was all calculated in real time in the back side of the server.” It can even consolidate the power left in several batteries into one, so a customer who comes in can still leave with a fully charged battery even if the whole system is operating below capacity.
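Luke's description amounts to a ranking problem: meet a curtailment target while keeping the batteries closest to full on track for pickup. Here is a toy greedy version of that decision. The data layout and numbers are invented; this is not Gogoro's real-time algorithm.

```python
def choose_batteries_to_pause(batteries, target_kw):
    """Pick which charging batteries to pause to shed at least target_kw.

    batteries: list of (state_of_charge_pct, charge_rate_kw) tuples.
    Pauses the emptiest batteries first, so the ones closest to full
    keep charging and a rider can still leave with a full battery.
    Returns (indices_paused, kw_shed).
    """
    paused, shed = [], 0.0
    # Sort ascending by state of charge: emptiest (lowest priority) first.
    order = sorted(enumerate(batteries), key=lambda p: p[1][0])
    for idx, (soc, rate) in order:
        if shed >= target_kw:
            break
        paused.append(idx)
        shed += rate
    return paused, shed

# A five-slot rack: (state of charge %, charging rate in kW)
racks = [(95, 1.0), (20, 1.5), (60, 1.2), (10, 1.5), (80, 1.0)]
paused, shed = choose_batteries_to_pause(racks, target_kw=3.0)
print(paused, shed)  # prints "[3, 1] 3.0": the 10% and 20% batteries pause
```

The “consolidation” trick Luke mentions would go a step further, moving charge between batteries so at least one reaches 100% even while the station as a whole draws less power.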

The earthquake and its aftermath in Taiwan this year put the VPP stations to the test—but also showed the system’s strength. On April 15, 12 days after the initial earthquake, the grid in Taiwan was still recovering from the damage when another power drop happened. This time, 818 Gogoro locations reacted in five seconds, reducing power consumption by 11 megawatts for 30 minutes.

Numbers like 6 MW and 11 MW are “not a trivial amount of power but still substantially smaller than a centralized power plant,” says Joshua Pearce, an engineering professor at Western University in Ontario, Canada. For comparison, Taiwan lost 3,200 MW of power supply right after the April earthquake, and the gap was mostly filled by solar power, centralized battery storage, and hydropower. But the entire Taiwanese VPP network combined, which has reached a capacity of 1,350 MW, can make a significant difference. “It helps the grid maintain stability during disasters. The more smart loads there are on the grid, the more resilient it is,” he says. 

However, the potential of these battery-swap stations has not been fully realized yet; the majority of the stations have not started giving energy back to the grid. 

“The tech system is ready, but the business and economics are not ready,” Luke says. There are 10 Gogoro battery-swapping stations that can return electricity to the grid in a pilot program, but other stations haven’t received the technological update. 

Upgrading stations to bi-directional charging makes economic sense only if Gogoro can profit from selling the electricity back. While the Taiwanese state-owned utility company currently allows private energy generators like solar farms to sell electricity to the grid at a premium, it hasn’t allowed battery-storage companies like Gogoro to do so. 

This challenge is not unique to Taiwan. Incorporating technologies like VPP requires making fundamental changes to the grid, which won’t happen without policy support. “The technology is there, but the practices are being held back by antiquated utility business models where they provide all electric services,” says Pearce. “Fair policies are needed to allow solar energy and battery owners to participate in the electric market for the best interest of all electricity consumers.”

Correction: The story has been updated to clarify that 90%, not 10%, of Gogoro’s batteries are on the road.

The data practitioner for the AI era

The rise of generative AI, coupled with the rapid adoption and democratization of AI across industries this decade, has emphasized the singular importance of data. Managing data effectively has become critical to this era of business—making data practitioners, including data engineers, analytics engineers, and ML engineers, key figures in the data and AI revolution.

Organizations that fail to use their own data will fall behind competitors that do and miss out on opportunities to uncover new value for themselves and their customers. As the quantity and complexity of data grows, so do its challenges, forcing organizations to adopt new data tools and infrastructure which, in turn, change the roles and mandate of the technology workforce.

Data practitioners are among those whose roles are experiencing the most significant change, as organizations expand their responsibilities. Rather than working in a siloed data team, data engineers are now developing platforms and tools whose design improves data visibility and transparency for employees across the organization, including analytics engineers, data scientists, data analysts, machine learning engineers, and business stakeholders.

This report explores, through a series of interviews with expert data practitioners, key shifts in data engineering, the evolving skill set required of data practitioners, options for data infrastructure and tooling to support AI, and data challenges and opportunities emerging in parallel with generative AI. The report’s key findings include the following:

  • The foundational importance of data is creating new demands on data practitioners. As the rise of AI demonstrates the business importance of data more clearly than ever, data practitioners are encountering new data challenges, increasing data complexity, evolving team structures, and emerging tools and technologies—as well as establishing newfound organizational importance.
  • Data practitioners are getting closer to the business, and the business closer to the data. The pressure to create value from data has led executives to invest more substantially in data-related functions. Data practitioners are being asked to expand their knowledge of the business, engage more deeply with business units, and support the use of data in the organization, while functional teams are finding they require their own internal data expertise to leverage their data.
  • The data and AI strategy has become a key part of the business strategy. Business leaders need to invest in their data and AI strategy—including making important decisions about the data team’s organizational structure, data platform and architecture, and data governance—because every business’s key differentiator will increasingly be its data.
  • Data practitioners will shape how generative AI is deployed in the enterprise. The key considerations for generative AI deployment—producing high-quality results, preventing bias and hallucinations, establishing governance, designing data workflows, ensuring regulatory compliance—are the province of data practitioners, giving them outsize influence on how this powerful technology will be put to work.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: AI propaganda, and digital twins

10 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Propagandists are using AI too—and companies need to be open about it

—Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality.

At the end of May, OpenAI marked a new “first” in its corporate history. It wasn’t an even more powerful language model or a new data partnership, but a report disclosing that bad actors had misused its products to run influence operations.

The company had caught five networks of covert propagandists—including players from Russia, China, Iran, and Israel—using its generative AI tools for deceptive tactics that ranged from creating large volumes of social media comments in multiple languages to turning news articles into Facebook posts.

The use of these tools, OpenAI noted, seemed intended to improve the quality and quantity of output. AI gives propagandists a productivity boost too.

As researchers who have studied online influence operations for years, we have seen influence operations continue to proliferate, on every social platform and focused on every region of the world. And if there’s one thing we’ve learned, it’s that transparency from Big Tech is paramount. Read the full story.

+ If you’re interested in how crooks are using AI, check out Melissa Heikkilä’s story on how generative tools are boosting the criminal underworld.

Digital twins are helping scientists run the world’s most complex experiments

In January 2022, NASA’s $10 billion James Webb Space Telescope was approaching the end of its one-million-mile trip from Earth. But reaching its orbital spot would be just one part of its treacherous journey. To ready itself for observations, the spacecraft had to unfold itself in a complicated choreography that, according to its engineers’ calculations, had 344 different ways to fail.

Over multiple days of choreography, the telescope fed data back to Earth in real time, and software near-simultaneously used that data to render a 3D video of how the process was going, as it was going. The 3D video represented a “digital twin” of the complex telescope: a computer-based model of the actual instrument, based on information that the instrument provided. 

The team watched tensely, during JWST’s early days, as the 344 potential problems failed to make their appearance. At last, JWST was in its final shape and looked as it should—in space and onscreen. The digital twin has been updating itself ever since.

As the technology becomes more common, researchers are increasingly finding these twins to be productive members of scientific society—helping humans run the world’s most complicated instruments, while also revealing more about the world itself and the universe beyond. Read the full story.

—Sarah Scoles

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 What to expect from Apple’s AI-focused WWDC event 
A deal with OpenAI is likely to be on the cards, amid an avalanche of AI features. (Bloomberg $)
+ Siri is due to get a buzzy LLM makeover. (The Verge)
+ What we need is AI features that are actually useful, not just showboating. (TechCrunch)

2 India’s internet space race is hotting up
The country’s telecoms giants want to beat Starlink at its own game. (FT $)

3 Silicon Valley’s medical tests industry is booming
They’re enabling patients to bypass doctors—for better and worse. (WP $)

4 AI tools are being trained on the faces of Brazilian children
Without their knowledge or consent. (Wired $)
+ We need to bring consent to AI. (MIT Technology Review)

5 Online scammers are ripping off small businesses too
It’s not just big designer names at risk of being impersonated any more. (WSJ $)

6 Perplexity is repackaging news articles with minimal attribution
A Forbes journalist has hit back at how the AI search engine repurposed the publication’s reporting. (Bloomberg $)
+ Here’s how AI summaries for search engines get things wrong. (MIT Technology Review)

7 AI image detectors are doing an okay job
But the results of generative AI are becoming ever subtler. (IEEE Spectrum)
+ This tool could protect your pictures from AI manipulation. (MIT Technology Review)

8 How viral videos shifted Californians’ perspective on crime
The galvanizing effect of these clips appears to fuel public appetite for harsher penalties. (The Atlantic $)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

9 Refrigerators have altered how our food tastes
Colder foods and drinks need to be extra sweet to register as sweet at all. (New Yorker $)
+ Why food allergen labels are so misleading. (Undark Magazine)

10 Nokia claims to have made the world’s first ‘immersive phone call’
Complete with 3D sound, apparently. (Reuters)

Quote of the day

“The blue wall has been breached.”

—Ryan Selkis, chief executive of cryptocurrency intelligence group Messari, tells the Financial Times how Donald Trump is winning over traditionally liberal Silicon Valley entrepreneurs. 

The big story

Quantum computing is taking on its biggest challenge: noise

January 2024

In the past 20 years, hundreds of companies have staked a claim in the rush to establish quantum computing. Investors have put in well over $5 billion so far. All this effort has just one purpose: creating the world’s next big thing.

But ultimately, assessing our progress in building useful quantum computers comes down to one central factor: whether we can handle the noise. The delicate nature of their systems makes them extremely vulnerable to the slightest disturbance, which can generate errors or even stop a quantum computation in its tracks.

In the last couple of years, a series of breakthroughs have led researchers to declare that the problem of noise might finally be on the ropes. Read the full story.

—Michael Brooks

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ No one has ever had a better camping trip than this hedgehog.
+ Hoping to see the Northern Lights this summer? Here’s how to maximize your chances of spotting the phenomenon in the US.
+ These super-strong women can roll up 10 frying pans within a minute!
+ If you’re ever stuck on how to strike up conversation, these foolproof starters should help you out.

Digital twins are helping scientists run the world’s most complex experiments

10 June 2024 at 05:00

In January 2022, NASA’s $10 billion James Webb Space Telescope was approaching the end of its one-million-mile trip from Earth. But reaching its orbital spot would be just one part of its treacherous journey. To ready itself for observations, the spacecraft had to unfold itself in a complicated choreography that, according to its engineers’ calculations, had 344 different ways to fail. A sunshield the size of a tennis court had to deploy exactly right, ending up like a giant shiny kite beneath the telescope. A secondary mirror had to swing down into the perfect position, relying on three legs to hold it nearly 25 feet from the main mirror. 

Finally, that main mirror—its 18 hexagonal pieces nestled together as in a honeycomb—had to assemble itself. Three golden mirror segments had to unfold from each side of the telescope, notching their edges against the 12 already fitted together. The sequence had to go perfectly for the telescope to work as intended.

“That was a scary time,” says Karen Casey, a technical director for Raytheon’s Air and Space Defense Systems business, which built the software that controls JWST’s movements and is now in charge of its flight operations. 

Over the multiple days of choreography, engineers at Raytheon watched the events unfold as the telescope did. The telescope, beyond the moon’s orbit, was way too distant to be visible, even with powerful instruments. But the telescope was feeding data back to Earth in real time, and software near-simultaneously used that data to render a 3D video of how the process was going, as it was going. It was like watching a very nerve-racking movie.

The 3D video represented a “digital twin” of the complex telescope: a computer-based model of the actual instrument, based on information that the instrument provided. “This was just transformative—to be able to see it,” Casey says.

The team watched tensely, during JWST’s early days, as the 344 potential problems failed to make their appearance. At last, JWST was in its final shape and looked as it should—in space and onscreen. The digital twin has been updating itself ever since.

The concept of building a full-scale replica of such a complicated bit of kit wasn’t new to Raytheon, in part because of the company’s work in defense and intelligence, where digital twins are more popular than they are in astronomy.

JWST, though, was actually more complicated than many of those systems, so the advances its twin made possible will now feed back into that military side of the business. It’s the reverse of a more typical story, where national security pursuits push science forward. Space is where non-defense and defense technologies converge, says Dan Isaacs, chief technology officer for the Digital Twin Consortium, a professional working group, and digital twins are “at the very heart of these collaborative efforts.”

As the technology becomes more common, researchers are increasingly finding these twins to be productive members of scientific society—helping humans run the world’s most complicated instruments, while also revealing more about the world itself and the universe beyond.  

800 million data points

The concept of digital twins was introduced in 2002 by Michael Grieves, a researcher whose work focused on business and manufacturing. He suggested that a digital model of a product, constantly updated with information from the real world, should accompany the physical item through its development. 

But the term “digital twin” actually came from a NASA employee named John Vickers, who first used it in 2010 as part of a technology road map report for the space agency. Today, perhaps unsurprisingly, Grieves is head of the Digital Twins Institute, and Vickers is still with NASA, as its principal technologist. 

Since those early days, technology has advanced, as it is wont to do. The Internet of Things has proliferated, hooking real-world sensors stuck to physical objects into the ethereal internet. Today, those devices number more than 15 billion, compared with mere millions in 2010. Computing power has continued to increase, and the cloud—more popular and powerful than it was in the previous decade—allows the makers of digital twins to scale their models up or down, or create more clones for experimentation, without investing in obscene amounts of hardware. Now, too, digital twins can incorporate artificial intelligence and machine learning to help make sense of the deluge of data points pouring in every second. 

Out of those ingredients, Raytheon decided to build its JWST twin for the same reason it also works on defense twins: there was little room for error. “This was a no-fail mission,” says Casey. The twin tracks 800 million data points about its real-world sibling every day, using all those 0s and 1s to create a real-time video that’s easier for humans to monitor than many columns of numbers. 

The JWST team uses the twin to monitor the observatory and also to predict the effects of changes like software updates. When testing these, engineers use an offline copy of the twin, upload hypothetical changes, and then watch what happens next. The group also uses an offline version to train operators and to troubleshoot IRL issues—the nature of which Casey declines to identify. “We call them anomalies,” she says. 
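The workflow in this paragraph, mirroring live telemetry while testing hypothetical changes on an offline copy, can be sketched in a few lines. Everything here (the class, the field names, the values) is illustrative only; JWST's actual twin tracks 800 million data points a day.

```python
import copy

class DigitalTwin:
    """A toy twin: a model of an instrument, updated from its telemetry."""
    def __init__(self):
        self.state = {}  # latest value for each sensor channel

    def ingest(self, telemetry):
        # Update the model from a batch of real-world data points.
        self.state.update(telemetry)

    def predict_after(self, change):
        # Apply a hypothetical change and report the resulting state.
        self.state.update(change)
        return self.state

live = DigitalTwin()
live.ingest({"mirror_temp_c": -223.1, "sunshield_layer_3": "deployed"})

# Test a software-update scenario on an offline copy, as JWST's team does:
offline = copy.deepcopy(live)
offline.predict_after({"mirror_temp_c": -222.5})

print(live.state["mirror_temp_c"])  # prints -223.1: the live twin is untouched
```

Deep-copying the live model is the simplest way to get an isolated sandbox: the hypothetical change never touches the state that operators are watching.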

Science, defense, and beyond

JWST’s digital twin is not the first space-science instrument to have a simulated sibling. A digital twin of the Curiosity rover helped NASA solve the robot’s heat issues. At CERN, the European particle accelerator, digital twins help with detector development and more mundane tasks like monitoring cranes and ventilation systems. The European Space Agency wants to use Earth observation data to create a digital twin of the planet itself. 

At the Gran Telescopio Canarias, the world’s largest single-mirror telescope, the scientific team started building a twin about two years ago—before they’d even heard the term. Back then, Luis Rodríguez, head of engineering, came to Romano Corradi, the observatory’s director. “He said that we should start to interconnect things,” says Corradi. They could snag principles from industry, suggested Rodríguez, where machines regularly communicate with each other and with computers, monitor their own states, and automate responses to those states.

The team started adding sensors that relayed information about the telescope and its environment. Understanding the environmental conditions around an observatory is “fundamental in order to operate a telescope,” says Corradi. Is it going to rain, for instance, and how is temperature affecting the scope’s focus? 

After they had the sensors feeding data online, they created a 3D model of the telescope that rendered those facts visually. “The advantage is very clear for the workers,” says Rodríguez, referring to those operating the telescope. “It’s more easy to manage the telescope. The telescope in the past was really, really hard because it’s very complex.”

Right now, the Gran Telescopio twin just ingests the data, but the team is working toward a more interpretive approach, using AI to predict the instrument’s behavior. “With information you get in the digital twin, you do something in the real entity,” Corradi says. Eventually, they hope to have a “smart telescope” that responds automatically to its situation. 

Corradi says the team didn’t find out that what they were building had a name until they went to an Internet of Things conference last year. “We saw that there was a growing community in industry—and not in science, in industry—where everybody now is doing these digital twins,” he says.

The concept is, of course, creeping into science—as the particle accelerators and space agencies show. But it’s still got a firmer foothold at corporations. “Always the interest in industry precedes what happens in science,” says Corradi. But he thinks projects like theirs will continue to proliferate in the broader astronomy community. For instance, the group planning the proposed Thirty Meter Telescope, which would have a primary mirror made up of hundreds of segments, called to request a presentation on the technology. “We just anticipated a bit of what was already happening in the industry,” says Corradi.

The defense industry really loves digital twins. The Space Force, for instance, used one to plan Tetra 5, an experiment to refuel satellites. In 2022, the Space Force also gave Slingshot Aerospace a contract to create a digital twin of space itself, showing what’s going on in orbit to prepare for incidents like collisions. 

Isaacs cites an example in which the Air Force sent a retired plane to a university so researchers could develop a “fatigue profile”—a kind of map of how the aircraft’s stresses, strains, and loads add up over time. A twin, made from that map, can help identify parts that could be replaced to extend the plane’s life, or to design a better plane in the future. Companies that work in both defense and science—common in the space industry in particular—thus have an advantage, in that they can port innovations from one department to another.

JWST’s twin, for instance, will have some relevance for projects on Raytheon’s defense side, where the company already works on digital twins of missile defense radars, air-launched cruise missiles, and aircraft. “We can reuse parts of it in other places,” Casey says. Any satellite the company tracks or sends commands to “could benefit from piece-parts of what we’ve done here.”  

Some of the tools and processes Raytheon developed for the telescope, she continues, “can copy-paste to other programs.” And in that way, the JWST digital twin will probably have twins of its own.

Sarah Scoles is a Colorado-based science journalist and the author, most recently, of the book Countdown: The Blinding Future of Nuclear Weapons.

Propagandists are using AI too—and companies need to be open about it

At the end of May, OpenAI marked a new “first” in its corporate history. It wasn’t an even more powerful language model or a new data partnership, but a report disclosing that bad actors had misused its products to run influence operations. The company had caught five networks of covert propagandists—including players from Russia, China, Iran, and Israel—using its generative AI tools for deceptive tactics that ranged from creating large volumes of social media comments in multiple languages to turning news articles into Facebook posts. The use of these tools, OpenAI noted, seemed intended to improve the quality and quantity of output. AI gives propagandists a productivity boost too.

First and foremost, OpenAI should be commended for this report and the precedent it hopefully sets. Researchers have long expected adversarial actors to adopt generative AI technology, particularly large language models, to cheaply increase the scale and caliber of their efforts. The transparent disclosure that this has begun to happen—and that OpenAI has prioritized detecting it and shutting down accounts to mitigate its impact—shows that at least one large AI company has learned something from the struggles of social media platforms in the years following Russia’s interference in the 2016 US election. When that misuse was discovered, Facebook, YouTube, and Twitter (now X) created integrity teams and began making regular disclosures about influence operations on their platforms. (X halted this activity after Elon Musk’s purchase of the company.) 

OpenAI’s disclosure, in fact, was evocative of precisely such a report from Meta, released a mere day earlier. The Meta transparency report for the first quarter of 2024 disclosed the takedown of six covert operations on its platform. It, too, found networks tied to China, Iran, and Israel and noted the use of AI-generated content. Propagandists from China shared what seem to be AI-generated poster-type images for a “fictitious pro-Sikh activist movement.” An Israel-based political marketing firm posted what were likely AI-generated comments. Meta’s report also noted that one very persistent Russian threat actor was still quite active, and that its strategies were evolving. Perhaps most important, Meta included a direct set of “recommendations for stronger industry response” that called for governments, researchers, and other technology companies to collaboratively share threat intelligence to help disrupt the ongoing Russian campaign.

We are two such researchers, and we have studied online influence operations for years. We have published investigations of coordinated activity—sometimes in collaboration with platforms—and analyzed how AI tools could affect the way propaganda campaigns are waged. Our teams’ peer-reviewed research has found that language models can produce text that is nearly as persuasive as propaganda from human-written campaigns. We have seen influence operations continue to proliferate, on every social platform and focused on every region of the world; they are table stakes in the propaganda game at this point. State adversaries and mercenary public relations firms are drawn to social media platforms and the reach they offer. For authoritarian regimes in particular, there is little downside to running such a campaign, particularly in a critical global election year. And now, adversaries are demonstrably using AI technologies that may make this activity harder to detect. News outlets are writing about the “AI election,” and many regulators are panicked.

It’s important to put this in perspective, though. Most of the influence campaigns that OpenAI and Meta announced did not have much impact, something the companies took pains to highlight. It’s critical to reiterate that effort isn’t the same thing as engagement: the mere existence of fake accounts or pages doesn’t mean that real people are paying attention to them. Similarly, just because a campaign uses AI does not mean it will sway public opinion. Generative AI reduces the cost of running propaganda campaigns, making it significantly cheaper to produce content and run interactive automated accounts. But it is not a magic bullet, and in the case of the operations that OpenAI disclosed, what was generated sometimes seemed to be rather spammy. Audiences didn’t bite.

Producing content, after all, is only the first step in a propaganda campaign; even the most convincing AI-generated posts, images, or audio still need to be distributed. Campaigns without algorithmic amplification or influencer pickup are often just tweeting into the void. Indeed, it is consistently authentic influencers—people who have the attention of large audiences enthusiastically resharing their posts—who receive engagement and drive the public conversation, helping content and narratives to go viral. This is why some of the more well-resourced adversaries, like China, simply surreptitiously hire those voices. At this point, influential real accounts have far more potential for impact than AI-powered fakes.

Nonetheless, there is a lot of concern that AI could disrupt American politics and become a national security threat. It’s important to “rightsize” that threat, particularly in an election year. Hyping the impact of disinformation campaigns can undermine trust in elections and faith in democracy by making the electorate believe that there are trolls behind every post, or that the mere targeting of a candidate by a malign actor, even with a very poorly executed campaign, “caused” their loss. 

By putting an assessment of impact front and center in its first report, OpenAI is clearly taking the risk of exaggerating the threat seriously. And yet, diminishing the threat or not fielding integrity teams—letting trolls simply continue to grow their followings and improve their distribution capability—would also be a bad approach. Indeed, the Meta report noted that one network it disrupted, seemingly connected to a political party in Bangladesh and targeting the Bangladeshi public, had amassed 3.4 million followers across 98 pages. Since that network was not run by an adversary of interest to Americans, it will likely get little attention. Still, this example highlights the fact that the threat is global, and vigilance is key. Platforms must continue to prioritize threat detection.

So what should we do about this? The Meta report’s call for threat sharing and collaboration, although specific to a Russian adversary, highlights a broader path forward for social media platforms, AI companies, and academic researchers alike. 

Transparency is paramount. As outside researchers, we can learn only so much from a social media company’s description of an operation it has taken down. This is true for the public and policymakers as well, and incredibly powerful platforms shouldn’t just be taken at their word. Ensuring researcher access to data about coordinated inauthentic networks offers an opportunity for outside validation (or refutation!) of a tech company’s claims. Before Musk’s takeover of Twitter, the company regularly released data sets of posts from inauthentic state-linked accounts to researchers, and even to the public. Meta shared data with external partners before it removed a network and, more recently, moved to a model of sharing content from already-removed networks through Meta’s Influence Operations Research Archive. While researchers should continue to push for more data, these efforts have allowed for a richer understanding of adversarial narratives and behaviors beyond what the platform’s own transparency report summaries provided.

OpenAI’s adversarial threat report should be a prelude to more robust data sharing moving forward. Where AI is concerned, independent researchers have begun to assemble databases of misuse—like the AI Incident Database and the Political Deepfakes Incident Database—to allow researchers to compare different types of misuse and track how misuse changes over time. But it is often hard to detect misuse from the outside. As AI tools become more capable and pervasive, it’s important that policymakers considering regulation understand how they are being used and abused. While OpenAI’s first report offered high-level summaries and select examples, expanding data-sharing relationships with researchers that provide more visibility into adversarial content or behaviors is an important next step. 

When it comes to combating influence operations and misuse of AI, online users also have a role to play. After all, this content has an impact only if people see it, believe it, and participate in sharing it further. In one of the cases OpenAI disclosed, online users called out fake accounts that used AI-generated text. 

In our own research, we’ve seen communities of Facebook users proactively call out AI-generated image content created by spammers and scammers, helping those who are less aware of the technology avoid falling prey to deception. A healthy dose of skepticism is increasingly useful: pausing to check whether content is real and people are who they claim to be, and helping friends and family members become more aware of the growing prevalence of generated content, can help social media users resist deception from propagandists and scammers alike.

OpenAI’s blog post announcing the takedown report put it succinctly: “Threat actors work across the internet.” So must we. As we move into a new era of AI-driven influence operations, we must address shared challenges via transparency, data sharing, and collaborative vigilance if we hope to develop a more resilient digital ecosystem.

Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality. 

The Download: making surgery safer, and MDMA therapy has been dealt a blow

7 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This AI-powered “black box” could make surgery safer

The operating room has long been defined by its hush-hush nature because surgeons are notoriously bad at acknowledging their own mistakes.

These mistakes kill some 22,000 Americans each year. Many of the errors happen on the operating table, from leaving surgical sponges inside patients’ bodies to performing the wrong procedure altogether.

Now, Teodor Grantcharov, a surgeon and professor of surgery at Stanford, believes he’s developed the technology to capture and analyze recordings of operations to help improve safety and surgical efficiency. It’s the operating room equivalent of an airplane’s black box: recording everything in the operating room via panoramic cameras, microphones, and anesthesia monitors before using artificial intelligence to help surgeons make sense of the data.

But the idea of recording everything could raise the threat of disciplinary action and legal exposure. Some surgeons have refused to operate when the black boxes are in place, and some of the systems have even been sabotaged. 

So are hospitals on the cusp of a new era of safety—or creating an environment of confusion and paranoia? Read the full story.

—Simar Bajaj

FDA advisors just said no to the use of MDMA as a therapy

On Tuesday, the FDA asked a panel of experts to weigh in on whether the evidence shows that MDMA, also known as ecstasy, is a safe and efficacious treatment for PTSD.

The answer was a resounding no. Just two out of 11 panel members agreed that MDMA-assisted therapy is effective. And only one panel member thought the benefits of the therapy outweighed the risks.

The outcome came as a surprise to many, given that trial results have been positive. And it is also a blow for advocates who have been working to bring psychedelic therapy into mainstream medicine for more than two decades.

This isn’t the final decision on MDMA. The FDA has until August 11 to make that ruling. But while the agency is under no obligation to follow the recommendations of its advisory committees, it rarely breaks with their decisions. 

So let’s unpack the advisory committee’s vote and talk about what it means for the approval of other recreational drugs as therapies. Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Silicon Valley is pushing back against an AI safety bill  
The legislation would force tech firms to create a ‘kill switch’ to shut down AI models. (FT $)
+ It’s not just Big Tech either—startups are resisting it too. (Bloomberg $)
+ Europe’s AI Act is done. Here’s what will (and won’t) change. (MIT Technology Review)

2 Boeing’s Starliner has docked with the International Space Station
It completed the first stage of its flight after several of its thrusters went offline. (The Guardian)
+ Boeing’s engineers are downplaying the issues it’s experienced. (WP $)

3 OpenAI has pulled back the curtain on ChatGPT 
It’s released a paper explaining how AI models’ workings can be reverse engineered. (Wired $)
+ Large language models can do jaw-dropping things. But nobody knows exactly why. (MIT Technology Review)

4 Why China is losing the chip war with the US
Despite its best efforts, its native firms can’t hold a candle to Nvidia. (Economist $)
+ What’s next in chips. (MIT Technology Review)

5 Deplatforming accounts that spread misinformation works
When X suspended 70,000 QAnon-linked accounts, the number of links to ‘low-credibility’ sites plummeted. (WP $)
+ Lies on the internet are still rife, though. (Vox)

6 An Indian startup once valued at $22 billion is now worthless
Its investors claim the company regularly ignored their advice. (TechCrunch)

7 Climate scientists are desperate to slow melting polar ice
And some of them are prepared to dabble with unusual methods to achieve it. (Economist $)
+ The radical intervention that might save the “doomsday” glacier. (MIT Technology Review)

8 Super cheap delivery meals are all the rage in China
Unfortunately, gig workers are bearing the brunt of the cost. (Rest of World)

9 A virtual gun has sold for more than $1 million
The digital Counter-Strike 2 accessory is one of the biggest video game purchases ever. (Bloomberg $)
+ A team of gaming enthusiasts have rebuilt the world’s first gaming computer. (The Guardian)

10 A decades-old Tamagotchi mystery has finally been solved
The online virtual pet fan community is going wild. (404 Media)

Quote of the day

“Nice to be attached to the big city in the sky.”

 —Barry “Butch” Wilmore, one of the veteran astronauts onboard Boeing’s Starliner, jokes with mission control after the spacecraft successfully docked with the International Space Station, Reuters reports.

The big story

Whatever happened to DNA computing?

October 2021

For more than five decades, engineers have shrunk silicon-based transistors over and over again, creating progressively smaller, faster, and more energy-efficient computers in the process. But the long technological winning streak—and the miniaturization that has enabled it—can’t last forever.

What could this successor technology be? There has been no shortage of alternative computing approaches proposed over the last 50 years. Read about five of the most memorable ones.

—Lakshmi Chandrasekaran

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ What inspired Gary Numan to write electronic smash-hit Cars? Well, you won’t believe this.
+ Animals love magic too! 🪄
+ Nothing to see here—just Keanu Reeves having the time of his life playing a Cure song.
+ Friends just make everything better.

This AI-powered “black box” could make surgery safer

7 June 2024 at 05:00

The first time Teodor Grantcharov sat down to watch himself perform surgery, he wanted to throw the VHS tape out the window.  

“My perception was that my performance was spectacular,” Grantcharov says, and then pauses—“until the moment I saw the video.” Reflecting on this operation from 25 years ago, he remembers the roughness of his dissection, the wrong instruments used, the inefficiencies that transformed a 30-minute operation into a 90-minute one. “I didn’t want anyone to see it.”

This reaction wasn’t exactly unique. The operating room has long been defined by its hush-hush nature—what happens in the OR stays in the OR—because surgeons are notoriously bad at acknowledging their own mistakes. Grantcharov jokes that when you ask “Who are the top three surgeons in the world?” a typical surgeon “always has a challenge identifying who the other two are.”

But after the initial humiliation over watching himself work, Grantcharov started to see the value in recording his operations. “There are so many small details that normally take years and years of practice to realize—that some surgeons never get to that point,” he says. “Suddenly, I could see all these insights and opportunities overnight.”

There was a big problem, though: it was the ’90s, and spending hours playing back grainy VHS recordings wasn’t a realistic quality improvement strategy. It would have been nearly impossible to determine how often his relatively mundane slipups happened at scale—not to mention more serious medical errors like those that kill some 22,000 Americans each year. Many of these errors happen on the operating table, from leaving surgical sponges inside patients’ bodies to performing the wrong procedure altogether.

While the patient safety movement has pushed for uniform checklists and other manual fail-safes to prevent such mistakes, Grantcharov believes that “as long as the only barrier between success and failure is a human, there will be errors.” Improving safety and surgical efficiency became something of a personal obsession. He wanted to make it challenging to make mistakes, and he thought developing the right system to create and analyze recordings could be the key.

It’s taken many years, but Grantcharov, now a professor of surgery at Stanford, believes he’s finally developed the technology to make this dream possible: the operating room equivalent of an airplane’s black box. It records everything in the OR via panoramic cameras, microphones, and anesthesia monitors before using artificial intelligence to help surgeons make sense of the data.

Grantcharov’s company, Surgical Safety Technologies, is not the only one deploying AI to analyze surgeries. Many medical device companies are already in the space—including Medtronic with its Touch Surgery platform, Johnson & Johnson with C-SATS, and Intuitive Surgical with Case Insights.

But most of these are focused solely on what’s happening inside patients’ bodies, capturing intraoperative video alone. Grantcharov wants to capture the OR as a whole, from the number of times the door is opened to how many non-case-related conversations occur during an operation. “People have simplified surgery to technical skills only,” he says. “You need to study the OR environment holistically.”

Teodor Grantcharov in a procedure that is being recorded by Surgical Safety Technologies’ AI-powered black-box system.
COURTESY OF SURGICAL SAFETY TECHNOLOGIES

Success, however, isn’t as simple as just having the right technology. The idea of recording everything presents a slew of tricky questions around privacy and could raise the threat of disciplinary action and legal exposure. Because of these concerns, some surgeons have refused to operate when the black boxes are in place, and some of the systems have even been sabotaged. Aside from those problems, some hospitals don’t know what to do with all this new data or how to avoid drowning in a deluge of statistics.

Grantcharov nevertheless predicts that his system can do for the OR what black boxes did for aviation. In 1970, the industry was plagued by 6.5 fatal accidents for every million flights; today, that’s down to less than 0.5. “The aviation industry made the transition from reactive to proactive thanks to data,” he says—“from safe to ultra-safe.”

Grantcharov’s black boxes are now deployed at almost 40 institutions in the US, Canada, and Western Europe, from Mount Sinai to Duke to the Mayo Clinic. But are hospitals on the cusp of a new era of safety—or creating an environment of confusion and paranoia?

Shaking off the secrecy

The operating room is probably the most measured place in the hospital but also one of the most poorly captured. From team performance to instrument handling, there is “crazy big data that we’re not even recording,” says Alexander Langerman, an ethicist and head and neck surgeon at Vanderbilt University Medical Center. “Instead, we have post hoc recollection by a surgeon.”

Indeed, when things go wrong, surgeons are supposed to review the case at the hospital’s weekly morbidity and mortality conferences, but these errors are notoriously underreported. And even when surgeons enter the required notes into patients’ electronic medical records, “it’s undoubtedly—and I mean this in the least malicious way possible—dictated toward their best interests,” says Langerman. “It makes them look good.”

The operating room wasn’t always so secretive.

In the 19th century, operations often took place in large amphitheaters—they were public spectacles with a general price of admission. “Every seat even of the top gallery was occupied,” recounted the abdominal surgeon Lawson Tait about an operation in the 1860s. “There were probably seven or eight hundred spectators.”

However, around the 1900s, operating rooms became increasingly smaller and less accessible to the public—and its germs. “Immediately, there was a feeling that something was missing, that the public surveillance was missing. You couldn’t know what happened in the smaller rooms,” says Thomas Schlich, a historian of medicine at McGill University.

And it was nearly impossible to go back. In the 1910s a Boston surgeon, Ernest Codman, suggested a form of surveillance known as the end-result system, documenting every operation (including failures, problems, and errors) and tracking patient outcomes. Massachusetts General Hospital didn’t accept it, says Schlich, and Codman resigned in frustration.  

Students watch a surgery performed at the former Philadelphia General Hospital around the turn of the century.
PUBLIC DOMAIN VIA WIKIPEDIA

Such opacity was part of a larger shift toward medicine’s professionalization in the 20th century, characterized by technological advancements, the decline of generalists, and the bureaucratization of health-care institutions. All of this put distance between patients and their physicians. Around the same time, and particularly from the 1960s onward, the medical field began to see a rise in malpractice lawsuits—at least partially driven by patients trying to find answers when things went wrong.

This battle over transparency could theoretically be addressed by surgical recordings. But Grantcharov realized very quickly that the only way to get surgeons to use the black box was to make them feel protected. To that end, he has designed the system to record the action but hide the identity of both patients and staff, even deleting all recordings within 30 days. His idea is that no individual should be punished for making a mistake. “We want to know what happened, and how we can build a system that makes it difficult for this to happen,” Grantcharov says. Errors don’t occur because “the surgeon wakes up in the morning and thinks, ‘I’m gonna make some catastrophic event happen,’” he adds. “This is a system issue.”

AI that sees everything

Grantcharov’s OR black box is not actually a box at all, but a tablet, one or two ceiling microphones, and up to four wall-mounted dome cameras that can reportedly analyze more than half a million data points per day per OR. “In three days, we go through the entire Netflix catalogue in terms of video processing,” he says.

The black-box platform utilizes a handful of computer vision models and ultimately spits out a series of short video clips and a dashboard of statistics—like how much blood was lost, which instruments were used, and how many auditory disruptions occurred. The system also identifies and breaks out key segments of the procedure (dissection, resection, and closure) so that instead of having to watch a whole three- or four-hour recording, surgeons can jump to the part of the operation where, for instance, there was major bleeding or a surgical stapler misfired.

Critically, each person in the recording is rendered anonymous; an algorithm distorts people’s voices and blurs out their faces, transforming them into shadowy, noir-like figures. “For something like this, privacy and confidentiality are critical,” says Grantcharov, who claims the anonymization process is irreversible. “Even though you know what happened, you can’t really use it against an individual.”

Another AI model works to evaluate performance. For now, this is done primarily by measuring compliance with the surgical safety checklist—a questionnaire that is supposed to be verbally ticked off during every type of surgical operation. (This checklist has long been associated with reductions in both surgical infections and overall mortality.) Grantcharov’s team is currently working to train more complex algorithms to detect errors during laparoscopic surgery, such as using excessive instrument force, holding the instruments in the wrong way, or failing to maintain a clear view of the surgical area. However, assessing these performance metrics has proved more difficult than measuring checklist compliance. “There are some things that are quantifiable, and some things require judgment,” Grantcharov says.

Each model has taken up to six months to train, through a labor-intensive process relying on a team of 12 analysts in Toronto, where the company was started. While many general AI models can be trained by a gig worker who labels everyday items (like, say, chairs), the surgical models need data annotated by people who know what they’re seeing—either surgeons, in specialized cases, or other labelers who have been properly trained. They have reviewed hundreds, sometimes thousands, of hours of OR videos and manually noted which liquid is blood, for instance, or which tool is a scalpel. Over time, the model can “learn” to identify bleeding or particular instruments on its own, says Peter Grantcharov, Surgical Safety Technologies’ vice president of engineering, who is Teodor Grantcharov’s son.

For the upcoming laparoscopic surgery model, surgeon annotators have also started to label whether certain maneuvers were correct or mistaken, as defined by the Generic Error Rating Tool—a standardized way to measure technical errors.

While most algorithms operate near perfectly on their own, Peter Grantcharov explains that the OR black box is still not fully autonomous. For example, it’s difficult to capture audio through ceiling mikes and thus get a reliable transcript to document whether every element of the surgical safety checklist was completed; he estimates that this algorithm has a 15% error rate. So before the output from each procedure is finalized, one of the Toronto analysts manually verifies adherence to the questionnaire. “It will require a human in the loop,” Peter Grantcharov says, but he gauges that the AI model has made the process of confirming checklist compliance 80% to 90% more efficient. He also emphasizes that the models are constantly being improved.

In all, the OR black box can cost about $100,000 to install, and analytics expenses run $25,000 annually, according to Janet Donovan, an OR nurse who shared with MIT Technology Review an estimate given to staff at Brigham and Women’s Faulkner Hospital in Massachusetts. (Peter Grantcharov declined to comment on these numbers, writing in an email: “We don’t share specific pricing; however, we can say that it’s based on the product mix and the total number of rooms, with inherent volume-based discounting built into our pricing models.”)

“Big Brother is watching”

Long Island Jewish Medical Center in New York, part of the Northwell Health system, was the first hospital to pilot OR black boxes, back in February 2019. The rollout was far from seamless, though not necessarily because of the tech.

“In the colorectal room, the cameras were sabotaged,” recalls Northwell’s chair of urology, Louis Kavoussi—they were turned around and deliberately unplugged. In his own OR, the staff fell silent while working, worried they’d say the wrong thing. “Unless you’re taking a golf or tennis lesson, you don’t want someone staring there watching everything you do,” says Kavoussi, who has since joined the scientific advisory board for Surgical Safety Technologies.

Grantcharov’s promises about not using the system to punish individuals have offered little comfort to some OR staff. When two black boxes were installed at Faulkner Hospital in November 2023, they threw the department of surgery into crisis. “Everybody was pretty freaked out about it,” says one surgical tech who asked not to be identified by name since she wasn’t authorized to speak publicly. “We were being watched, and we felt like if we did something wrong, our jobs were going to be on the line.”

It wasn’t that she was doing anything illegal or spewing hate speech; she just wanted to joke with her friends, complain about the boss, and be herself without the fear of administrators peeking over her shoulder. “You’re very aware that you’re being watched; it’s not subtle at all,” she says. The early days were particularly challenging, with surgeons refusing to work in the black-box-equipped rooms and OR staff boycotting those operations: “It was definitely a fight every morning.”

“In the colorectal room, the cameras were sabotaged,” recalls Louis Kavoussi. “Unless you’re taking a golf or tennis lesson, you don’t want someone staring there watching everything you do.”

At some level, the identity protections are only half measures. Before 30-day-old recordings are automatically deleted, Grantcharov acknowledges, hospital administrators can still see the OR number, the time of operation, and the patient’s medical record number, so even if OR personnel are technically de-identified, they aren’t truly anonymous. The result is a sense that “Big Brother is watching,” says Christopher Mantyh, vice chair of clinical operations at Duke University Hospital, which has black boxes in seven ORs. He will draw on aggregate data to talk generally about quality improvement at departmental meetings, but when specific issues arise, like breaks in sterility or a cluster of infections, he will look to the recordings and “go to the surgeons directly.”

In many ways, that’s what worries Donovan, the Faulkner Hospital nurse. She’s not convinced the hospital will protect staff members’ identities and is worried that these recordings will be used against them—whether through internal disciplinary actions or in a patient’s malpractice suit. In February 2024, she and almost 60 others sent a letter to the hospital’s chief of surgery objecting to the black box. She’s since filed a grievance with the state, with arbitration proceedings scheduled for October.

The legal concerns in particular loom large because, already, over 75% of surgeons report having been sued at least once, according to a 2021 survey by Medscape, an online resource hub for health-care professionals. To the layperson, any surgical video “looks like a horror show,” says Vanderbilt’s Langerman. “Some plaintiff’s attorney is going to get ahold of this, and then some jury is going to see a whole bunch of blood, and then they’re not going to know what they’re seeing.” That prospect turns every recording into a potential legal battle.

From a purely logistical perspective, however, the 30-day deletion policy will likely insulate these recordings from malpractice lawsuits, according to Teneille Brown, a law professor at the University of Utah. She notes that within that time frame, it would be nearly impossible for a patient to find legal representation, go through the requisite conflict-of-interest checks, and then file a discovery request for the black-box data. While deleting data to bypass the judicial system could provoke criticism, Brown sees the wisdom of Surgical Safety Technologies’ approach. “If I were their lawyer, I would tell them to just have a policy of deleting it because then they’re deleting the good and the bad,” she says. “What it does is orient the focus to say, ‘This is not about a public-facing audience. The audience for these videos is completely internal.’”

A data deluge

When it comes to improving quality, there are “the problem-first people, and then there are the data-first people,” says Justin Dimick, chair of the department of surgery at the University of Michigan. The latter, he says, push “massive data collection” without first identifying “a question of ‘What am I trying to fix?’” He says that’s why he currently has no plans to use the OR black boxes in his hospital.

Mount Sinai’s chief of general surgery, Celia Divino, echoes this sentiment, emphasizing that too much data can be paralyzing. “How do you interpret it? What do you do with it?” she asks. “This is always a disease.”

At Northwell, even Kavoussi admits that five years of data from OR black boxes hasn’t been used to change much, if anything. He says that hospital leadership is finally beginning to think about how to use the recordings, but a hard question remains: OR black boxes can collect boatloads of data, but what does it matter if nobody knows what to do with it?

Grantcharov acknowledges that the information can be overwhelming. “In the early days, we let the hospitals figure out how to use the data,” he says. “That led to a big variation in how the data was operationalized. Some hospitals did amazing things; others underutilized it.” Now the company has a dedicated “customer success” team to help hospitals make sense of the data, and it offers a consulting-type service to work through surgical errors. But ultimately, even the most practical insights are meaningless without buy-in from hospital leadership, Grantcharov suggests.

Getting that buy-in has proved difficult in some centers, at least partly because there haven’t yet been any large, peer-reviewed studies showing how OR black boxes actually help to reduce patient complications and save lives. “If there’s some evidence that a comprehensive data collection system—like a black box—is useful, then we’ll do it,” says Dimick. “But I haven’t seen that evidence yet.”

A screenshot of the analytics produced by the black box.
COURTESY OF SURGICAL SAFETY TECHNOLOGIES

The best hard data thus far is from a 2022 study published in the Annals of Surgery, in which Grantcharov and his team used OR black boxes to show that the surgical checklist had not been followed in a fifth of operations, likely contributing to excess infections. He also says that an upcoming study, scheduled to be published this fall, will show that the OR black box led to an improvement in checklist compliance and reduced ICU stays, reoperations, hospital readmissions, and mortality.

On a smaller scale, Grantcharov insists that he has built a steady stream of evidence showing the power of his platform. For example, he says, it’s revealed that auditory disruptions—doors opening, machine alarms and personal pagers going off—happen every minute in gynecology ORs, that a median 20 intraoperative errors are made in each laparoscopic surgery case, and that surgeons are great at situational awareness and leadership while nurses excel at task management.

Meanwhile, some hospitals have reported small improvements based on black-box data. Duke’s Mantyh says he’s used the data to check how often antibiotics are given on time. Duke and other hospitals also report turning to this data to help decrease the amount of time ORs sit empty between cases. By flagging when “idle” times are unexpectedly long and having the Toronto analysts review recordings to explain why, they’ve turned up issues ranging from inefficient communication to excessive time spent bringing in new equipment.

That can make a bigger difference than one might think, explains Ra’gan Laventon, clinical director of perioperative services at Texas’s Memorial Hermann Sugar Land Hospital: “We have multiple patients who are depending on us to get to their care today. And so the more time that’s added in some of these operational efficiencies, the more impactful it is to the patient.”

The real world

At Northwell, where some of the cameras were initially sabotaged, it took a couple of weeks for Kavoussi’s urology team to get used to the black boxes, and about six months for his colorectal colleagues. Much of the solution came down to one-on-one conversations in which Kavoussi explained how the data was automatically de-identified and deleted.

During his operations, Kavoussi would also try to defuse the tension, telling the OR black box “Good morning, Toronto,” or jokingly asking, “How’s the weather up there?” In the end, “since nothing bad has happened, it has become part of the normal flow,” he says.

The reality is that no surgeon wants to be an average operator, “but statistically, we’re mostly average surgeons, and that’s okay,” says Vanderbilt’s Langerman. “I’d hate to be a below-average surgeon, but if I was, I’d really want to know about it.” Like athletes watching game film to prepare for their next match, surgeons might one day review their recordings, assessing their mistakes and thinking about the best ways to avoid them—but only if they feel safe enough to do so.

“Until we know where the guardrails are around this, there’s such a risk—an uncertain risk—that no one’s gonna let anyone turn on the camera,” Langerman says. “We live in a real world, not a perfect world.”

Simar Bajaj is an award-winning science journalist and 2024 Marshall Scholar. He has previously written for the Washington Post, Time magazine, the Guardian, NPR, and the Atlantic, as well as the New England Journal of Medicine, Nature Medicine, and The Lancet. He won Science Story of the Year from the Foreign Press Association in 2022 and the top prize for excellence in science communications from the National Academies of Science, Engineering, and Medicine in 2023. Follow him on X at @SimarSBajaj.

FDA advisors just said no to the use of MDMA as a therapy

6 June 2024 at 12:14

On Tuesday, the FDA asked a panel of experts to weigh in on whether the evidence shows that MDMA, also known as ecstasy, is a safe and efficacious treatment for PTSD. The answer was a resounding no. Just two out of 11 panel members agreed that MDMA-assisted therapy is effective. And only one panel member thought the benefits of the therapy outweighed the risks.

The outcome came as a surprise to many, given that trial results have been positive. And it is also a blow for advocates who have been working to bring psychedelic therapy into mainstream medicine for more than two decades. This isn’t the final decision on MDMA. The FDA has until August 11 to make that ruling. But while the agency is under no obligation to follow the recommendations of its advisory committees, it rarely breaks with their decisions.  

Today on The Checkup, let’s unpack the advisory committee’s vote and talk about what it means for the approval of other recreational drugs as therapies.

One of the main stumbling blocks for the committee was the design of the two efficacy studies that have been completed. Trial participants weren’t supposed to know whether they were in the treatment group, but the effects of MDMA make it pretty easy to tell whether you’ve been given a hefty dose, and most correctly guessed which group they had landed in. 

In 2021, MIT Technology Review’s Charlotte Jee interviewed an MDMA trial participant named Nathan McGee. “Almost as soon as I said I didn’t think I’d taken it, it kicked in. I mean, I knew,” he told her. “I remember going to the bathroom and looking in the mirror, and seeing my pupils looking like saucers. I was like, ‘Wow, okay.’”

The Multidisciplinary Association for Psychedelic Studies, better known as MAPS, has been working with the FDA to develop MDMA as a treatment since 2001. When the organization met with the FDA in 2016 to hash out the details of its phase III trials (the studies that test whether a treatment works), agency officials suggested that MAPS use an active compound for the control group to help mask whether participants had received the drug. But MAPS pushed back, and the trial forged ahead with a placebo.

No surprise, then, that about 90% of those assigned to the MDMA group and 75% of those assigned to the placebo group accurately identified which arm of the study they had landed in. And it wasn’t just participants. Therapists treating the participants also likely knew whether those under their supervision had been given the drug. It’s called “functional unblinding,” and the issue came up at the committee meeting again and again. Here’s why it’s a problem: If a participant strongly believes that MDMA will help their PTSD and they know they’ve received MDMA, this expectation bias could amplify the treatment effect. This is especially a problem when the outcome is based on subjective measures like how a person feels rather than, say, laboratory data.
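The arithmetic behind that worry is easy to make concrete. Here is an illustrative toy simulation (not from the study itself; the improvement scale and the size of the expectation boost are made-up stand-ins, though the guess rates echo the 90% and 75% figures above) showing how unblinding plus expectation bias can manufacture an apparent treatment effect even when the drug does nothing:

```python
import random

def simulate_trial(n_per_arm=1000, true_effect=0.0, expectation_boost=3.0,
                   guess_rate_drug=0.9, guess_rate_placebo=0.75, seed=42):
    """Toy model of 'functional unblinding': participants who believe they got
    the drug report subjective scores shifted by expectation, not pharmacology.
    Returns the apparent drug-vs-placebo difference in mean improvement."""
    rng = random.Random(seed)

    def arm_scores(is_drug):
        scores = []
        for _ in range(n_per_arm):
            baseline = rng.gauss(10.0, 2.0)  # improvement everyone reports anyway
            score = baseline + (true_effect if is_drug else 0.0)
            # Most drug-arm participants guess "drug"; most placebo-arm
            # participants guess "placebo" (so few of them get the boost).
            if is_drug:
                guessed_drug = rng.random() < guess_rate_drug
            else:
                guessed_drug = rng.random() < (1 - guess_rate_placebo)
            if guessed_drug:
                score += expectation_boost  # expectation bias, not the drug
            scores.append(score)
        return scores

    drug, placebo = arm_scores(True), arm_scores(False)
    return sum(drug) / len(drug) - sum(placebo) / len(placebo)

# Even with true_effect=0.0, the measured gap between arms is clearly positive.
apparent = simulate_trial(true_effect=0.0)
```

With a zero true effect, the asymmetry in correct guesses alone produces a sizable apparent benefit; set `expectation_boost` to zero and the gap vanishes. That is the confound the committee kept circling back to.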

Another sticking point was the therapy component of the treatment. Lykos Therapeutics (the for-profit spinoff of MAPS) asked the FDA to approve MDMA-assisted therapy: that’s MDMA administered in concert with psychotherapy. Therapists oversaw participants during the three MDMA sessions. But participants also received three therapy sessions before getting the drug, and three therapy sessions afterwards to help them process their experience. 

Because the two treatments were administered together, there was no good way to tell how much of the effect was due to MDMA and how much was due to the therapy. What’s more, “the content or approach of these integrated sessions was not standardized in the treatment manuals and was mainly left up to the individual therapist,” said David Millis, a clinical reviewer for the FDA, at the committee meeting. 

Several committee members also raised safety concerns. They worried that MDMA’s effects might make people more suggestible and vulnerable to abuse, and they brought up allegations of ethics violations outlined in a recent report from the Institute for Clinical and Economic Review.

Because of these issues and others, most committee members felt compelled to vote against MDMA-assisted therapy. “I felt that the large positive effect was denuded by the significant confounders,” said committee member Maryann Amirshahi, a professor of emergency medicine at Georgetown University School of Medicine, after the vote. “Although I do believe that there was a signal, it just needs to be better studied.”

Whether this decision will be a setback for the entire field remains to be seen. “To make it crystal clear: It isn’t MDMA itself that was rejected per se, but the specific, poor data set provided by Lykos Therapeutics; in my opinion, there is still a strong chance that MDMA, with a properly conducted clinical Phase 3 trial program that addresses those concerns of the FDA advisory committee, will get approved,” wrote Christian Angermayer, founder of ATAI Therapeutics, a company that is also working to develop MDMA as a therapy.

If the FDA denies approval of MDMA therapy, Lykos or another company could conduct additional studies and reapply. Many of the committee members said they believed MDMA does hold promise, but that the studies conducted thus far were inadequate to demonstrate the drug’s safety and efficacy. 

Psilocybin is likely to be the next psychedelic therapy considered by the FDA, and in some ways, it might have an easier path to approval. The idea behind MDMA is that it alleviates PTSD by helping facilitate psychotherapy. The therapy is a crucial component of the treatment, which is problematic because the FDA regulates drugs, not psychotherapy. With psilocybin, a therapist is present, but the drug appears to do the heavy lifting. “We are not offering therapy; we are offering psychological support that’s designed for the patient’s safety and well-being,” says Kabir Nath, CEO of Compass Pathways, the company working to bring psilocybin to market. “What we actually find during a six- to eight-hour session is most of it is silent. There’s actually no interaction.”

That could make the approval process more straightforward. “The difficult thing … is that we don’t regulate psychotherapy, and also we don’t really have any say in the design or the implementation of the particular therapy that is going to be used,” said Tiffany Farchione, director of the FDA’s division of psychiatry, at the committee meeting. “This is something unprecedented, so we certainly want to get as many opinions and as much input as we can.” 

Another thing

Earlier this week, I explored what might happen if MDMA gets FDA approval and how the decision could affect other psychedelic therapies. 

Sally Adee dives deep into the messy history of electric medicine and what the future might hold for research into electric therapies. “Instead of focusing only on the nervous system—the highway that carries electrical messages between the brain and the body—a growing number of researchers are finding clever ways to electrically manipulate cells elsewhere in the body, such as skin and kidney cells, more directly than ever before,” she writes. 


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Psychedelics are undeniably having a moment, and the therapy might prove particularly beneficial to women, wrote Taylor Majewski in this feature from 2022.

In a previous issue of The Checkup, Jessica Hamzelou argued that the psychedelic hype bubble might be about to burst.

MDMA does seem to have helped some individuals. Nathan McGee, who took the drug as part of a clinical trial, told Charlotte Jee that he “understands what joy is now.” 

Researchers are working to design virtual-reality programs that recreate the trippy experience of taking psychedelics. Hana Kiros has the story.

From around the web

In April I wrote about Lisa Pisano, the second person to receive a pig kidney. This week doctors removed the kidney after it failed owing to lack of blood flow.

Bird flu is still very much in the news.

– Finland is poised to become the first country to start administering bird flu vaccine—albeit to a very limited subset of people, including poultry and mink farmers, vets, and scientists who study the virus. (Stat)

– What are the most pressing questions about bird flu? They revolve around what’s happening in cows, what’s happening in farm workers, and what’s happening to the virus. (Stat)

– A man in Mexico has died of H5N2, a strain of bird flu that has never before been reported in humans. (CNN)

Biodegradable, squishy sensors injected into the brain hold promise for detecting changes following a head injury or cancer treatment. (Nature)

A synthetic version of a hallucinogenic toad toxin could be a promising treatment for mental-health disorders. (Undark)

The Download: gaming climate change, and Boeing’s space mission leaks

6 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This classic game is taking on climate change

—Casey Crownhart

There are two things I love to do at social gatherings: play board games and talk about climate change. Don’t I sound like someone you should invite to your next dinner party?

Given my two great loves, I was delighted to learn about a board game called Catan: New Energies, coming out this summer. It’s a new edition of the classic game Catan, which has players building power plants, fueled by either fossil fuels or renewables.

So how does an energy-focused edition of Catan stack up against the board game competition? And what does it say about how we view climate technology? Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Boeing’s first crewed space mission has three helium leaks
But the spacecraft is stable enough to continue on its mission. (CNN)
+ Delays have added $1.4 billion in costs to the program. (WP $)
+ But its success demonstrates NASA has an alternative to SpaceX. (The Atlantic $)

2 How an AI-generated news outlet gained millions of readers
The now-defunct BNN Breaking looked like a standard news service. But its articles bore all the hallmarks of AI. (NYT $)
+ These six questions will dictate the future of generative AI. (MIT Technology Review)

3 Crypto miners are renting out their data centers to AI clients
AI needs chips and power, and miners are happy to oblige—for a price. (Bloomberg $)
+ Bitcoin mining was booming in Kazakhstan. Then it was gone. (MIT Technology Review)

4 The age of the AI PC is coming
Chipmakers are likening its arrival to the advent of Wi-Fi. (FT $)
+ Nvidia was the unofficial star of this week’s Computex conference. (Bloomberg $)
+ Elon Musk has admitted diverting Nvidia chips destined for Tesla to X. (WSJ $)

5 The majority of life on Earth is dormant
And a common protein might explain why. (Quanta Magazine)

6 Tsunamis are a looming danger in Alaska
Cliffs collapsing into the state’s fjords pose a major threat to nearby boats. (Hakai Magazine)

7 Filipino Catholics are building churches in Roblox
It’s a safe online space for younger users to explore their faith. (Rest of World)
+ Or if you fancy trying to earn a buck, Ikea will pay you to work in Roblox. (Wired $)

8 Palmer Luckey’s latest project is a handheld games console
From virtual reality to lethal drones to a gaming device. (Fast Company $)
+ Luckey’s admitted the venture doesn’t make much business sense. (The Verge)

9 Feeling stuck? AI can help you ask your future self for advice
You’re under no obligation to follow its suggestions, though. (The Guardian)

10 The doge meme is a relic of a bygone internet
The death of its star, Kobosu, is a reminder of how much has changed. (New Yorker $)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“We are seeing the werewolves beginning to circle.”

—Whistleblower Edward Snowden is concerned that government and corporate control will curtail the potential of the artificial intelligence boom, Bloomberg reports.

The big story

My new Turing test would see if AI can make $1 million

July 2023

—Mustafa Suleyman is the co-founder and CEO of Inflection AI and a venture partner at Greylock, a venture capital firm. Before that, he co-founded DeepMind, one of the world’s leading artificial intelligence companies.

AI systems are increasingly everywhere and are becoming more powerful almost by the day. But how can we know if a machine is truly “intelligent”? For decades this has been defined by the Turing test, which argues that an AI that’s able to replicate language convincingly enough to trick a human into thinking it was also human should be considered intelligent.

But there’s now a problem: the Turing test has almost been passed—it arguably already has been. The latest generation of large language models are on the cusp of acing it.

We need something better. I propose the Modern Turing Test. It would give AIs a simple instruction: “Go make $1 million on a retail web platform in a few months with just a $100,000 investment.” Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Sauron is alive in Argentina! According to these old-school Lord of the Rings badges, at least.
+ A celebration of the women of shoegaze.
+ If you’ve ever wondered how they used to shoot cinematic battle scenes before the advent of CGI, wonder no more.
+ Congratulations to Max the cat, a much-loved member of Vermont State University, and honorary doctor of ‘litter-ature’ (thanks Paul!)

This classic game is taking on climate change

6 June 2024 at 04:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

There are two things I love to do at social gatherings: play board games and talk about climate change. Don’t I sound like someone you should invite to your next dinner party?

Given my two great loves, I was delighted to learn about a board game called Catan: New Energies, coming out this summer. It’s a new edition of the classic game Catan, formerly known as Settlers of Catan. This version has players building power plants, fueled by either fossil fuels or renewables. 

So how does an energy-focused edition of Catan stack up against the board game competition, and what does it say about how we view climate technology?

Catan debuted in 1995, and today it’s one of the world’s most popular board games. The original and related products have sold over 45 million copies worldwide. 

Given Catan’s superstar status, I was intrigued to learn late last year that the studio that makes it had plans in the works to release this new version. I quickly got in touch with the game’s co-creator, Benjamin Teuber, to hear more. 

“The whole idea is that energy comes to Catan,” Teuber told me. “Now the question is, which energy comes to Catan?” Power plants help players develop their society more quickly, amassing more of the points needed to win the game. Players can build fossil-fuel plants, represented by little brown tokens. These are less resource-intensive to build, but they produce pollution. Alternatively, players can elect to build renewable-power plants, signified by green tokens, which are costlier but don’t have the same negative effects in the game. 

As a climate reporter, I feel that some elements of the game setup ring true—for example, as players reach higher levels of pollution, disasters become more likely, but there’s still a strong element of chance involved. 

One aspect of the game that didn’t quite match reality was the cost difference between fossil fuels and renewables. Technologies like solar and wind have plummeted in price over the last decade—today, building new renewable projects is generally cheaper than operating existing coal plants in the US.

I asked if the creators had considered having renewables get cheaper over time in the game, and Teuber said the team had actually built an early version with this idea in place, but the whole thing got too complicated. Keeping things simple enough to be playable is a crucial component of game design, Teuber says. 

Teuber also seemed laser focused on not preaching, and it feels as if New Energies goes out of its way not to make players feel bad about climate change. In fact, as a story by NPR about the game pointed out, the phrase “climate change” hardly appears in any of the promotional materials, on the packaging, or in the rules. The catch-all issue in the game’s universe is simply “pollution.” 

Unlike some other climate games, like the 2023 release Daybreak, New Energies isn’t aimed at getting the group to work together to fight against climate change. The setup is the same as in other versions of Catan: the first player to reach 10 victory points wins. In theory, that could be a player who leaned heavily on fossil fuels. 

“It doesn’t feel like the game says, ‘Screw you—we told you, the only way to win is by building green energy,’” Teuber told me. 

However, while players can choose their own pathway to acquiring points, there’s a second possible outcome. If too many players produce too much pollution by building towns, cities, and fossil-fuel power plants, the game ends early in catastrophe. Whoever has done the most to clean up the environment does walk away with the win—something of a consolation prize. 

I got an early copy of the game to test out, and the first time I played, my group polluted too quickly and the game ended early. I ended up taking the win, since I had elected to build only renewable plants. I’ll admit to feeling a bit smug. 

But as I played more, I saw the balance between competition and collaboration. During one game, my group came within a few turns of pollution-driven catastrophe. We turned things around, building more renewable plants and stretching out play long enough for a friend who had been quicker to build her society to cobble together the points she needed to win. 

Our game board after a round of New Energies, with my cat, who acted as our unofficial referee. 
Photo: Casey Crownhart

Board games, or any other media that deals with climate change, will have to walk a fine line between dealing seriously with the crisis at hand and being entertaining enough to engage with. New Energies does that, though I think it makes some concessions toward being playable over being obsessively accurate. 

I wouldn’t recommend using this game as teaching material about climate change, but I suppose that’s not the point. If you’re a fan of Catan, this edition is definitely worth playing, and it’ll be part of my rotation. You can pre-order Catan: New Energies here; the release date is June 14. And if you haven’t heard enough of my media musings, stay tuned for an upcoming story about New Energies and other climate-related board games. 


Now read the rest of The Spark

Related reading

Google DeepMind can take a short description or sketch and turn it into a playable video game.

Researchers love testing AI by having models play video games. A new model that can play Goat Simulator could be a step toward more useful AI.

Dark Forest shows how advanced cryptography can be used in video games.

Keeping up with climate  

Direct air capture may be getting cheaper and better. Climeworks says that the third generation of its technology can suck up more carbon dioxide from the atmosphere with less energy. (Heatmap)

A Massachusetts town will be home to a new pilot project that basically amounts to a communal heating and cooling system. District energy projects could help energy go farther in cities and densely populated communities. (Associated Press)

Sublime Systems uses an electrochemical process to make cement without the massive emissions footprint. The company just installed its first commercial project in a Boston office park. (Canary Media)

→ According to the Canary story, one of the company’s developers heard about Sublime from a story in our publication! Read my deep dive into the startup from earlier this year. (MIT Technology Review)

A rush of renewable energy to the grid has led to some special periods with ultra-cheap or even free electricity. Experts warn that this could slow further deployment of renewables. (Bloomberg)

Natural disasters, some fueled by climate change, are throwing off medical procedures like fertility treatments, which require specific timing and careful control. (The 19th)

Take an inside look at Apple’s recycling robot, Daisy. The equipment can take apart over a million iPhones per year, but that’s a drop in the bucket given the hundreds of millions discarded annually. (TechCrunch)

Canada’s hydroelectric dams have been running a bit dry, and the country has had to import electricity from the US to make up the difference. It’s just one more example of how changing weather patterns can throw a wrench into climate solutions. (New York Times)

Check out five demos from a high-tech energy conference, from batteries that can handle freezing temperatures to turbines that can harness power from irrigation channels. (IEEE Spectrum)

The Download: more energy-efficient AI, and the problem with QWERTY keyboards

5 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How a simple circuit could offer an alternative to energy-intensive GPUs

On a table in his lab at the University of Pennsylvania, physicist Sam Dillavou has connected an array of breadboards via a web of brightly colored wires. The setup looks like a DIY home electronics project, but this unassuming assembly can learn to sort data like a machine-learning model.

While its current capability is rudimentary, the hope is that, if it works, it could help spark a far more energy-efficient approach to building faster AI. Read the full story.

—Sophia Chen

How QWERTY keyboards show the English dominance of tech

Have you ever thought about the fact that, despite the myriad differences between languages, virtually everyone uses the same QWERTY keyboards? Many languages have more or fewer than 26 letters in their alphabet—or no “alphabet” at all, like Chinese, which has tens of thousands of characters. Yet somehow everyone uses the same keyboard to communicate.

Last week, MIT Technology Review published an excerpt from a new book, The Chinese Computer, which talks about how this problem was solved in China. 

Zeyi Yang, our China reporter, sat down with the book’s author, Tom Mullaney, a professor of history at Stanford University, to discuss how speakers of non-Latin languages have adapted modern technologies to their needs, and what their efforts have contributed to computing. Read the rest of their conversation here.

This story is from China Report, our weekly newsletter covering tech and power in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US advisors have rejected MDMA as a treatment for PTSD
Which means it’s increasingly unlikely that it’ll end up being approved in August after all. (Vox)
+ The trials had positive results—but appeared flawed and biased. (Ars Technica)
+ What’s next for MDMA. (MIT Technology Review)

2 China is dead set on EV world domination
Everywhere besides the US and Europe, at least. (FT $)
+ How did China come to dominate the world of electric cars? (MIT Technology Review)

3 Israel is secretly targeting US lawmakers with an influence campaign
It’s using fake social media accounts urging US lawmakers to fund Israel’s military. (NYT $)

4 Police drones aren’t all they’re cracked up to be
They’re being deployed to investigate minor crimes in the city of Chula Vista—and residents are increasingly unnerved. (Wired $)
+ Flying taxi firm Joby Aviation is hoping to move into defense contracts. (Fast Company $)
+ Welcome to Chula Vista, where police drones respond to 911 calls. (MIT Technology Review)

5 SpaceX has been given permission to launch a fourth test flight
If everything runs smoothly, it should take off at 7am CDT on Thursday. (Ars Technica)

6 How Uganda built a vast biometric surveillance network
Identity verification systems are also used to monitor its citizens. (Bloomberg $)
+ How Worldcoin recruited its first half a million test users. (MIT Technology Review)

7 It’s a good time to be an AI video startup
In some cases, they’re ahead of the established giants. (WP $)
+ What’s next for generative video. (MIT Technology Review)

8 The lonely search for connection online
Modern loneliness is rife. The internet could help—and hinder. (The Guardian)

9 Stretchy screens are on the horizon
And could usher in a whole new era of wearables. (IEEE Spectrum)

10 These glasses could help us to see in the dark 👓
By converting infrared into visible light. (New Scientist $)

Quote of the day

“The world isn’t ready, and we aren’t ready.”

—Daniel Kokotajlo, a former OpenAI researcher, explains to the New York Times why he lost confidence in the company’s ability to behave responsibly as it creates ever more capable AI systems.

The big story

California’s coming offshore wind boom faces big engineering hurdles

December 2022

The state of California has an ambitious goal: building 25 gigawatts of offshore wind by 2045. That’s equivalent to nearly a third of the state’s total generating capacity today, or enough to power 25 million homes.

But the plans are facing a daunting geological challenge: the continental shelf drops steeply just a few miles off the California coast. They also face enormous engineering and regulatory obstacles. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I could watch this clay genius make models of Pokemon all day long.
+ Cellphones and concert halls don’t tend to go together—but a new symphony is looking to forge a new cellular musical connection.
+ Yaupon tea sounds delicious to me.
+ For six years, Katy Perry had the charts in a chokehold. What happened?

How QWERTY keyboards show the English dominance of tech

By: Zeyi Yang
5 June 2024 at 06:00

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Have you ever thought about the miraculous fact that despite the myriad differences between languages, virtually everyone uses the same QWERTY keyboards? Many languages have more or fewer than 26 letters in their alphabet—or no “alphabet” at all, like Chinese, which has tens of thousands of characters. Yet somehow everyone uses the same keyboard to communicate.

Last week, MIT Technology Review published an excerpt from a new book, The Chinese Computer, which talks about how this problem was solved in China. After generations of work to sort Chinese characters, modify computer parts, and create keyboard apps that automatically predict the next character, it is finally possible for any Chinese speaker to use a QWERTY keyboard. 

But the book doesn’t stop there. It ends with a bigger question about what this all means: Why is it necessary for speakers of non-Latin languages to adapt modern technologies for their uses, and what do their efforts contribute to computing technologies?

I talked to the book’s author, Tom Mullaney, a professor of history at Stanford University. We ended up geeking out over keyboards, computers, the English-centric design that underlies everything about computing, and even how keyboards affect emerging technologies like virtual reality. Here are some of his most fascinating answers, lightly edited for clarity and brevity. 

Mullaney’s book covers many experiments across multiple decades that ultimately made typing Chinese possible and efficient on a QWERTY keyboard, but a similar process has played out all around the world. Many countries with non-Latin languages had to work out how they could use a Western computer to input and process their own languages.

Mullaney: In the Chinese case—but also in Japanese, Korean, and many other non-Western writing systems—this wasn’t done for fun. It was done out of brute necessity because the dominant model of keyboard-based computing, born and raised in the English-speaking world, is not compatible with Chinese. It doesn’t work because the keyboard doesn’t have the necessary real estate. And the question became: I have a few dozen keys but 100,000 characters. How do I map one onto the other? 

Simply put, half of the population on Earth uses the QWERTY keyboard in ways the QWERTY keyboard was never intended to be used, creating a radically different way of interacting with computers.

The root of all of these problems is that computers were designed with English as the default language. So the way English works is just the way computers work today.

M: Every writing system on the planet throughout history is modular, meaning it’s built out of smaller pieces. But computing carefully, brilliantly, and understandably worked on one very specific kind of modularity: modularity as it functions in English. 

And then everybody else had to fit themselves into that modularity. Arabic letters connect, so you have to fix [the computer for it]. In South Asian scripts, the combination of a consonant and a vowel changes the shape of the letter overall—that’s not how modularity works in English. 

The English modularity is so fundamental in computing that non-Latin speakers are still grappling with the impacts today despite decades of hard work to change things.

Mullaney shared a complaint that Arabic speakers made in 2022 about Adobe InDesign, the most popular publishing design software. As recently as two years ago, pasting a string of Arabic text into the software could cause the text to become messed up, misplacing its diacritic marks, which are crucial for indicating phonetic features of the text. It turns out you need to install a Middle East version of the software and apply some deliberate workarounds to avoid the problem.

M: Latin alphabetic dominance is still alive and well; it has not been overthrown. And there’s a troubling question as to whether it can ever be overthrown. Some turn was made, some path taken that advantaged certain writing systems at a deep structural level and disadvantaged others. 

That deeply rooted English-centric design is why mainstream input methods never deviate too far from the keyboards that we all know and love/hate. In the English-speaking world, there have been numerous attempts to reimagine the way text input works. Technologies such as the T9 phone keyboard or the Palm Pilot handwriting alphabet briefly achieved some adoption. But they never stick for long because most developers snap back to QWERTY keyboards at the first opportunity.

M: T9 was born in the context of disability technology and was incorporated into the first mobile phones because button real estate was a major problem (prior to the BlackBerry reintroducing the QWERTY keyboard). It was a necessity; [developers] actually needed to think in a different way. But give me enough space, give me 12 inches by 14 inches, and I’ll default to a QWERTY keyboard.

Every 10 years or so, some Western tech company or inventor announces: “Everybody! I have finally figured out a more advanced way of inputting English at much higher speeds than the QWERTY keyboard.” And time and time again there is zero market appetite. 

Will the QWERTY keyboard stick around forever? After this conversation, I’m secretly hoping it won’t. Maybe it’s time for a change. With new technologies like VR headsets, and other gadgets on the horizon, there may come a time when QWERTY keyboards are not the first preference, and non-Latin languages may finally get a chance to shape the new norms of human-computer interaction. 

M: It’s funny, because now as you go into augmented and virtual reality, Silicon Valley companies are like, “How do we overcome the interface problem?” Because you can shrink everything except the QWERTY keyboard. And what Western engineers fail to understand is that it’s not a tech problem—it’s a technological cultural problem. And they just don’t get it. They think that if they just invent the tech, it is going to take off. And thus far, it never has.

If I were a software or hardware developer, I would be hanging out in online role-playing games, just in the chat feature; I would be watching people use their TV remote controls to find the title of the film they’re looking for; I would look at how Roblox players chat with each other. It’s going to come from some arena outside the mainstream, because the mainstream is dominated by QWERTY.

What are other signs of the dominance of English in modern computing? I’d love to hear about the geeky details you’ve noticed. Send them to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. Today marks the 35th anniversary of the student protests and subsequent massacre in Tiananmen Square in Beijing. 

  • For decades, Hong Kong was the hub for Tiananmen memorial events. That’s no longer the case, due to Beijing’s growing control over the city’s politics after the 2019 protests. (New Yorker $)
  • To preserve the legacy of the student protesters at Tiananmen, it’s also important to address ethical questions about how American universities and law enforcement have been treating college protesters this year. (The Nation)

2. A Chinese company that makes laser sensors was labeled by the US government as a security concern. A few months later, it discreetly rebranded as a Michigan-registered company called “American Lidar.” (Wall Street Journal $)

3. It’s a tough time to be a celebrity in China. An influencer dubbed “China’s Kim Kardashian” for his extravagant displays of wealth has just been banned by multiple social media platforms after the internet regulator announced an effort to clear out “ostentatious personas.” (Financial Times $)

  • Meanwhile, Taiwanese celebrities who also have large followings in China are increasingly finding themselves caught in political crossfires. (CNN)

4. Cases of Chinese students being denied entry into the US reveal divisions within the Biden administration. Customs agents, who work for the Department of Homeland Security, have canceled an increasing number of student visas that had already been approved by the State Department. (Bloomberg $)

5. Palau, a small Pacific island nation that’s one of the few countries in the world that recognizes Taiwan as a sovereign country, says it is under cyberattack by China. (New York Times $)

6. After being the first space mission to collect samples from the moon’s far side, China’s Chang’e-6 lunar probe has begun its journey back to Earth. (BBC)

7. The Chinese government just set up the third and largest phase of its semiconductor investment fund to prop up its domestic chip industry. This one’s worth $47.5 billion. (Bloomberg $)

Lost in translation

The Chinese generative AI community has been stirred up by the first known case of a Western large language model plagiarizing a Chinese one, according to the Chinese publication PingWest. 

Last week, two undergraduate computer science students at Stanford University released an open-source model called Llama 3-V that they claimed is more powerful than LLMs made by OpenAI and Google, while costing less. But Chinese AI researchers soon found out that Llama 3-V had copied the structure, configuration files, and code from MiniCPM-Llama3-V 2.5, another open-source LLM developed by China’s Tsinghua University and ModelBest Inc, a Chinese startup. 

What proved the plagiarism was the fact that the Chinese team had secretly trained the model on a collection of Chinese writings on bamboo slips from 2,000 years ago, and no other LLM can accurately recognize the Chinese characters in this ancient writing style. But Llama 3-V could recognize these characters as well as MiniCPM did, while making the exact same mistakes as the Chinese model. The students who released Llama 3-V have removed the model and apologized to the Chinese team, but the incident is seen by the Chinese AI community as proof of the rapidly improving capabilities of homegrown LLMs. 

One more thing

Hand-crafted squishy toys (or pressure balls) in the shape of cute animals or desserts have become the latest viral products on Chinese social media. Made in small quantities and sold in limited batches, some of them go for up to $200 per toy on secondhand marketplaces. I mean, they are cute for sure, but I’m afraid the idea of spending $200 on a pressure ball only increases my anxiety.

How a simple circuit could offer an alternative to energy-intensive GPUs

5 June 2024 at 04:00

On a table in his lab at the University of Pennsylvania, physicist Sam Dillavou has connected an array of breadboards via a web of brightly colored wires. The setup looks like a DIY home electronics project—and not a particularly elegant one. But this unassuming assembly, which contains 32 variable resistors, can learn to sort data like a machine-learning model.

While its current capability is rudimentary, the hope is that the prototype will offer a low-power alternative to the energy-guzzling graphics processing unit (GPU) chips widely used in machine learning. 

“Each resistor is simple and kind of meaningless on its own,” says Dillavou. “But when you put them in a network, you can train them to do a variety of things.”

[Image: breadboards connected in a grid]
Sam Dillavou’s laboratory at the University of Pennsylvania is using circuits composed of resistors to perform simple machine-learning classification tasks. 
FELICE MACERA

A task the circuit has performed: classifying flowers by properties such as petal length and width. When given these flower measurements, the circuit could sort them into three species of iris. This kind of activity is known as a “linear” classification problem, because when the iris information is plotted on a graph, the data can be cleanly divided into the correct categories using straight lines. In practice, the researchers represented the flower measurements as voltages, which they fed as input into the circuit. The circuit then produced an output voltage, which corresponded to one of the three species. 

This is a fundamentally different way of encoding data from the approach used in GPUs, which represent information as binary 1s and 0s. In this circuit, information can take on a maximum or minimum voltage or anything in between. The circuit classified 120 irises with 95% accuracy. 
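The iris task maps naturally onto a few lines of code. The sketch below is only a software analogy of the idea, with invented cluster centers and noise levels; like the circuit, it treats inputs as continuous values rather than binary digits, and it sorts 120 points into three classes with a purely linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the iris task: three well-separated clusters of
# continuous measurements (analogous to petal length/width voltages).
centers = np.array([[1.0, 1.0], [4.0, 1.0], [2.5, 4.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((40, 2)) for c in centers])
y = np.repeat(np.arange(3), 40)

# One-hot targets: each class gets its own "output voltage" channel.
T = np.eye(3)[y]

# Least-squares linear map from inputs to outputs, with a bias term
# appended as a constant input column.
Xb = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Xb, T, rcond=None)

# Classify by whichever output channel carries the largest value.
pred = (Xb @ W).argmax(axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

Because these toy clusters can be split by straight lines, a linear map is all it takes, which is exactly what makes the task "linear."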

Now the team has managed to make the circuit perform a more complex problem. In a preprint currently under review, the researchers show that it can perform the logic operation known as XOR, in which the circuit takes in two binary inputs and outputs 1 only when they differ. This is a “nonlinear” classification task, says Dillavou, and “nonlinearities are the secret sauce behind all machine learning.” 
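To see why the XOR step matters, here is a small numerical sketch (mine, not from the preprint): the best purely linear fit gets stuck predicting 0.5 for every XOR input, while a single layer of rectified units with hand-picked weights reproduces XOR exactly.

```python
import numpy as np

# The four XOR cases: output is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best purely linear fit (with a bias term): it predicts 0.5 for every
# input, so its mean squared error cannot drop below 0.25.
Xb = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
linear_mse = np.mean((Xb @ w - y) ** 2)

# One hidden layer of rectified units (weights hand-picked for clarity):
# h1 fires when at least one input is on, h2 fires when both are on.
h = np.maximum(0.0, X @ np.array([[1.0, 1.0], [1.0, 1.0]]) + np.array([0.0, -1.0]))
nonlinear_pred = h @ np.array([1.0, -2.0])
print(round(linear_mse, 2), nonlinear_pred)  # → 0.25 [0. 1. 1. 0.]
```

No straight line through the four corner points separates the 1s from the 0s; folding the space with a nonlinearity is what makes the separation possible.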

Their demonstrations are a walk in the park for the devices you use every day. But that’s not the point: Dillavou and his colleagues built this circuit as an exploratory effort to find better computing designs. The computing industry faces an existential challenge as it strives to deliver ever more powerful machines. Between 2012 and 2018, the computing power required for cutting-edge AI models increased 300,000-fold. Now, training a large language model takes the same amount of energy as the annual consumption of more than a hundred US homes. Dillavou hopes that his design offers an alternative, more energy-efficient approach to building faster AI.

Training in pairs

To perform its various tasks correctly, the circuitry requires training, just like contemporary machine-learning models that run on conventional computing chips. ChatGPT, for example, learned to generate human-sounding text after being shown many instances of real human text; the circuit learned to predict which measurements corresponded to which type of iris after being shown flower measurements labeled with their species. 

Training the device involves using a second, identical circuit to “instruct” the first device. Both circuits start with the same resistance values for each of their 32 variable resistors. Dillavou feeds both circuits the same inputs—a voltage corresponding to, say, petal width—and adjusts the output voltage of the second circuit to correspond to the correct species. The first circuit receives feedback from that second circuit, and both circuits adjust their resistances so they converge on the same values. The cycle starts again with a new input, until the circuits have settled on a set of resistance levels that produce the correct output for the training examples. In essence, the team trains the device via a method known as supervised learning, where an AI model learns from labeled data to predict the labels for new examples.
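The paired free/clamped training loop described above can be mimicked in software. The sketch below is a loose analogy only, not the physical, local update rule the resistors actually follow: each "resistor" becomes a tunable weight, the second copy's output is clamped to the correct label, and the free copy is nudged toward agreement after every example. The dimensions, learning rate, and update rule are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: treat each of the 32 variable resistors as a
# tunable weight, and let the circuit's output be a weighted sum of
# its input voltages.
n_params = 32
free = 0.1 * rng.standard_normal(n_params)  # the circuit being trained
clamped_w = rng.standard_normal(n_params)   # weights that yield the correct labels

lr = 0.05
for _ in range(5000):
    x = rng.standard_normal(n_params)       # one set of input "voltages"
    y_free = free @ x                       # free circuit's output
    y_clamped = clamped_w @ x               # second circuit, output clamped to the label
    # Nudge the free circuit toward the clamped one, as in the paired training.
    free += lr * (y_clamped - y_free) * x / n_params

# After training, the two copies have converged on the same values.
err = np.linalg.norm(free - clamped_w) / np.linalg.norm(clamped_w)
print(round(err, 4))
```

As in the article's description, the loop repeats with fresh inputs until the trainable parameters settle on values that produce the correct outputs.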

It can help, Dillavou says, to think of the electric current in the circuit as water flowing through a network of pipes. The equations governing fluid flow are analogous to those governing electron flow and voltage. Voltage corresponds to fluid pressure, while electrical resistance corresponds to the pipe diameter. During training, the different “pipes” in the network adjust their diameter in various parts of the network in order to achieve the desired output pressure. In fact, early on, the team considered building the circuit out of water pipes rather than electronics. 

For Dillavou, one fascinating aspect of the circuit is what he calls its “emergent learning.” In a human, “every neuron is doing its own thing,” he says. “And then as an emergent phenomenon, you learn. You have behaviors. You ride a bike.” It’s similar in the circuit. Each resistor adjusts itself according to a simple rule, but collectively they “find” the answer to a more complicated question without any explicit instructions. 

A potential energy advantage

Dillavou’s prototype qualifies as a type of analog computer—one that encodes information along a continuum of values instead of the discrete 1s and 0s used in digital circuitry. The first computers were analog, but their digital counterparts superseded them after engineers developed fabrication techniques to squeeze more transistors onto digital chips to boost their speed. Still, experts have long known that as they increase in computational power, analog computers offer better energy efficiency than digital computers, says Aatmesh Shrivastava, an electrical engineer at Northeastern University. “The power efficiency benefits are not up for debate,” he says. However, he adds, analog signals are much noisier than digital ones, which makes them ill suited for computing tasks that require high precision.

In practice, Dillavou’s circuit hasn’t yet surpassed digital chips in energy efficiency. His team estimates that their design uses about 5 to 20 picojoules per resistor to generate a single output, where each resistor represents a single parameter in a neural network. Dillavou says this is about a tenth as efficient as state-of-the-art AI chips. But he says that the promise of the analog approach lies in scaling the circuit up, to increase its number of resistors and thus its computing power.

He explains the potential energy savings this way: Digital chips like GPUs expend energy per operation, so making a chip that can perform more operations per second just means a chip that uses more energy per second. In contrast, the energy usage of his analog computer is based on how long it is on. Should they make their computer twice as fast, it would also become twice as energy efficient. 
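That scaling argument reduces to one line of arithmetic (the wattage and timing figures below are invented for illustration): at constant power, energy per task is power times time, so halving the time halves the energy.

```python
P_WATTS = 2.0     # assumed constant draw while the analog device is on
T_SECONDS = 10.0  # assumed time to finish one task at current speed

energy = P_WATTS * T_SECONDS                 # joules per task
energy_2x_fast = P_WATTS * (T_SECONDS / 2)   # same power, half the time
print(energy, energy_2x_fast)  # → 20.0 10.0
```

A digital chip, by contrast, spends a roughly fixed energy per operation, so doubling its operations per second leaves the energy per task unchanged.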

Dillavou’s circuit is also a type of neuromorphic computer, meaning one inspired by the brain. Like other neuromorphic schemes, the researchers’ circuitry doesn’t operate according to top-down instruction the way a conventional computer does. Instead, the resistors adjust their values in response to external feedback in a bottom-up approach, similar to how neurons respond to stimuli. In addition, the device does not have a dedicated component for memory. This could offer another energy efficiency advantage, since a conventional computer expends a significant amount of energy shuttling data between processor and memory. 

While researchers have already built a variety of neuromorphic machines based on different materials and designs, the most technologically mature designs are built on semiconducting chips. One example is Intel’s neuromorphic computer Loihi 2, to which the company began providing access for government, academic, and industry researchers in 2021. DeepSouth, a chip-based neuromorphic machine at Western Sydney University that is designed to be able to simulate the synapses of the human brain at scale, is scheduled to come online this year.

The machine-learning industry has shown interest in chip-based neuromorphic computing as well, with a San Francisco–based startup called Rain Neuromorphics raising $25 million in February. However, researchers still haven’t found a commercial application where neuromorphic computing definitively demonstrates an advantage over conventional computers. In the meantime, researchers like Dillavou’s team are putting forth new schemes to push the field forward. A few people in industry have expressed interest in his circuit. “People are most interested in the energy efficiency angle,” says Dillavou. 

But their design is still a prototype, with its energy savings unconfirmed. For their demonstrations, the team kept the circuit on breadboards because it’s “the easiest to work with and the quickest to change things,” says Dillavou, but the format suffers from all sorts of inefficiencies. They are testing their device on printed circuit boards to improve its energy efficiency, and they plan to scale up the design so it can perform more complicated tasks. It remains to be seen whether their clever idea can take hold out of the lab.

The Download: AI for good, and China’s shrinking internet

4 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What I learned from the UN’s “AI for Good” summit

—Melissa Heikkilä

Last week, Geneva played host to the UN’s AI for Good Summit. The summit’s big focus was how AI can be used to meet the UN’s Sustainable Development Goals, such as eradicating poverty and hunger, achieving gender equality, promoting clean energy and climate action and so on. 

The conference managed to convene people working in AI from around the globe, featuring speakers from China, the Middle East, and Africa too. AI can be very US-centric and male dominated, and any effort to make the conversation more global and diverse is laudable.

But honestly, I didn’t leave the conference feeling confident AI was going to play a meaningful role in advancing any of the UN goals. In fact, the most interesting speeches were about how AI is doing the opposite. Read the full story.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

Read more of Melissa’s stories about the issues within the AI sector:

+ How generative AI has made phishing, scamming, and doxxing easier than ever.

+ We are all AI’s free data workers. Fancy AI models rely on human labor, which can often be brutal and upsetting. Read the full story.

+ The viral AI avatar app Lensa undressed me—without my consent.

+ Making an image with generative AI uses as much energy as charging your phone. Each time you use AI to generate an image, write an email, or ask a chatbot a question, it comes at a cost to the planet. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Chinese internet is collapsing
Websites are being yanked offline, and history is being lost in the process. (NYT $)
+ The end of anonymity online in China. (MIT Technology Review)

2 The United Arab Emirates wants to cozy up to the US over AI
And it’s more than willing to spend billions of dollars in the process. (FT $)
 
3 Google inadvertently collected voice data from children
Alongside users’ home addresses and YouTube recommendations. (404 Media)
+ Why child safety bills are popping up all over the US. (MIT Technology Review)

4 Our appetite for data centers is at odds with zero-carbon goals
Demand for electricity is rising, and decarbonizing the grid is becoming an even bigger challenge. (Undark Magazine)
+ A massive part of why we need more power? Surprise, surprise—it’s AI. (Wired $)
+ Energy-hungry data centers are quietly moving into cities. (MIT Technology Review)

5 X is formally allowing X-rated content
NSFW images and videos have been rife on it for years anyway. (TechCrunch)
+ It’s supposed to block under-18s from seeing NSFW material. (The Guardian)

6 AI is getting much better at predicting the weather 🌩
Which is handy, given that the 2024 Atlantic hurricane season is coming. (Ars Technica)
+ Google DeepMind’s weather AI can forecast extreme weather faster and more accurately. (MIT Technology Review)

7 This new startup wants to bring cryonics to the masses
By focusing on the reviving, rather than the freezing part. (Bloomberg $)
+ Why the sci-fi dream of cryonics never died. (MIT Technology Review)

8 Retailers love it when you buy things on your mobile
If you’re making an impulse purchase, chances are it’s on a phone, not a laptop. (WSJ $)

9 Dying stars produce glitching radio waves
Scientists are getting better at reproducing these pulsar glitches. (New Scientist $)

10 Amazon sold fake copies of a major UFO book 🛸
Scammers produced false versions of the hotly anticipated title, some of which contain AI-generated text. (404 Media)

Quote of the day

“We are still behind them, but we are breathing down their back.”

—Vladimir Milov, a YouTube creator, tells Wired how he helped to create a direct competitor to Putin’s TV propaganda on the platform.

The big story

Broadband funding for Native communities could finally connect some of America’s most isolated places

September 2022

Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.

The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story.

—Robert Chaney

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Bats have incredibly sweet little feet.
+ Too many gardens are inhospitable to nature. Here’s how to turn your green spaces into wild wonderlands.
+ Summer is here, and honey garlic parmesan biscuits seem like a great way to celebrate.
+ This sunset ocean painting is really quite something.

What I learned from the UN’s “AI for Good” summit

4 June 2024 at 05:05

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Greetings from Switzerland! I’ve just come back from Geneva, which last week hosted the UN’s AI for Good Summit, organized by the International Telecommunication Union. The summit’s big focus was how AI can be used to meet the UN’s Sustainable Development Goals, such as eradicating poverty and hunger, achieving gender equality, promoting clean energy and climate action and so on. 

The conference featured lots of robots (including one that dispenses wine), but what I liked most of all was how it managed to convene people working in AI from around the globe, featuring speakers from China, the Middle East, and Africa too, such as Pelonomi Moiloa, the CEO of Lelapa AI, a startup building AI for African languages. AI can be very US-centric and male dominated, and any effort to make the conversation more global and diverse is laudable. 

But honestly, I didn’t leave the conference feeling confident AI was going to play a meaningful role in advancing any of the UN goals. In fact, the most interesting speeches were about how AI is doing the opposite. Sage Lenier, a climate activist, talked about how we must not let AI accelerate environmental destruction. Tristan Harris, the cofounder of the Center for Humane Technology, gave a compelling talk connecting the dots between our addiction to social media, the tech sector’s financial incentives, and our failure to learn from previous tech booms. And there are still deeply ingrained gender biases in tech, Mia Shah-Dand, the founder of Women in AI Ethics, reminded us. 

So while the conference itself was about using AI for “good,” I would have liked to see more talk about how increased transparency, accountability, and inclusion could make AI itself good from development to deployment.

We now know that generating one image with generative AI uses as much energy as charging a smartphone. I would have liked more honest conversations about how to make the technology more sustainable itself in order to meet climate goals. And it felt jarring to hear discussions about how AI can be used to help reduce inequalities when we know that so many of the AI systems we use are built on the backs of human content moderators in the Global South who sift through traumatizing content while being paid peanuts. 

Making the case for the “tremendous benefit” of AI was OpenAI’s CEO Sam Altman, the star speaker of the summit. Altman was interviewed remotely by Nicholas Thompson, the CEO of the Atlantic, which has incidentally just announced a deal to let OpenAI train new AI models on its content. OpenAI is the company that instigated the current AI boom, and it would have been a great opportunity to ask him about all these issues. Instead, the two had a relatively vague, high-level discussion about safety, leaving the audience none the wiser about what exactly OpenAI is doing to make its systems safer. It seemed they were simply supposed to take Altman’s word for it. 

Altman’s talk came a week or so after Helen Toner, a researcher at the Georgetown Center for Security and Emerging Technology and a former OpenAI board member, said in an interview that the board found out about the launch of ChatGPT through Twitter, and that Altman had on multiple occasions given the board inaccurate information about the company’s formal safety processes. She has also argued that it is a bad idea to let AI firms govern themselves, because the immense profit incentives will always win. (Altman said he “disagree[s] with her recollection of events.”) 

When Thompson asked Altman what the first good thing to come out of generative AI will be, Altman mentioned productivity, citing examples such as software developers who can use AI tools to do their work much faster. “We’ll see different industries become much more productive than they used to be because they can use these tools. And that will have a positive impact on everything,” he said. I think the jury is still out on that one. 


Now read the rest of The Algorithm

Deeper Learning

Why Google’s AI Overviews gets things wrong

Google’s new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results. Unfortunately, within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. It suggested that users add glue to pizza or eat at least one small rock a day.

MIT Technology Review explains: In order to understand why AI-powered search engines get things wrong, we need to look at how they work. The models that power them simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. Worst of all? There’s probably no way to fix things. That’s why you shouldn’t trust AI search engines. Read more from Rhiannon Williams here.

Bits and Bytes

OpenAI’s latest blunder shows the challenges facing Chinese AI models
OpenAI’s GPT-4o data set is polluted by Chinese spam websites. But this problem is indicative of a much wider issue for those building Chinese AI services: finding the high-quality data sets they need to be trained on is tricky, because of the way China’s internet functions. (MIT Technology Review)

Five ways criminals are using AI
Artificial intelligence has brought a big boost in productivity—to the criminal underworld. Generative AI has made phishing, scamming, and doxxing easier than ever. (MIT Technology Review)

OpenAI is rebooting its robotics team
After disbanding its robotics team in 2020, the company is trying again. The resurrection is in part thanks to rapid advancements in robotics brought by generative AI. (Forbes)

OpenAI found Russian and Chinese groups using its tech for propaganda campaigns
OpenAI said that it caught, and removed, groups from Russia, China, Iran, and Israel that were using its technology to try to influence political discourse around the world. But this is likely just the tip of the iceberg when it comes to how AI is being used to affect this year’s record-breaking number of elections. (The Washington Post)

Inside Anthropic, the AI company betting that safety can be a winning strategy
The AI lab Anthropic, creator of the Claude model, was started by former OpenAI employees who resigned over “trust issues.” This profile is an interesting peek inside one of OpenAI’s competitors, showing how the ideology behind AI safety and effective altruism is guiding business decisions. (Time)

AI-directed drones could help find lost hikers faster
Drones are already used for search and rescue, but planning their search paths is more art than science. AI could change that. (MIT Technology Review)

The Download: MDMA for PTSD, and Boeing’s rearranged space flight

3 June 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for MDMA

MDMA has been banned in the United States for more than three decades. But now, this potent mind-altering drug is poised to become a badly needed therapy for PTSD.

On June 4, the Food and Drug Administration’s advisory committee will meet to discuss the risks and benefits of MDMA therapy. If the committee votes in favor of the drug, it could be approved to treat PTSD this summer.

The approval would represent a momentous achievement for proponents of mind-altering drugs, who have been working toward this goal for decades. And it could help pave the way for FDA approval of other illicit drugs like psilocybin. But the details surrounding how these compounds will make the transition from illicit substances to legitimate therapies are still foggy. Here’s what you need to know ahead of the upcoming hearing.

—Cassandra Willyard

If you’re interested in how mind-altering drugs are being used in medicine, why not check out:

+ What do psychedelic drugs do to our brains? AI could help us find out. Why the words people used to describe their trip experiences could lead to better drugs to treat mental illness. Read the full story.
+ Psychedelics are being scientifically researched now more than ever. This time, women might finally benefit.
+ VR is as good as psychedelics at helping people reach transcendence. On key metrics, a VR experience elicited a response indistinguishable from subjects who took medium doses of LSD or magic mushrooms. Read the full story.

+ One patient in a trial describes his “life-changing” experience with MDMA-assisted therapy.
+ But there is a danger that mind-altering substances are being overhyped as wonder drugs.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Boeing has rescheduled a historic space flight for Wednesday
The company’s first crewed flight was canceled at the last minute on Saturday. (Reuters)
+ The flight was grounded after a faulty ground power unit was uncovered. (CNN)
+ Boeing has been trying to fly astronauts into space for years. (The Atlantic $)

2 Adobe has ceased selling Ansel Adams-style images generated by AI
The late photographer’s estate has been trying to get them taken down for months. (The Verge)
+ This artist is dominating AI-generated art. And he’s not happy about it. (MIT Technology Review)

3 How successful has America’s Chips Act been?
The government effort has awarded billions to chipmakers, but it’s a long game. (WSJ $)
+ What’s next in chips. (MIT Technology Review)

4 Social media videos encourage Chinese migrants to move to the US
But the cheery clips fail to capture the reality of moving to a foreign country. (The Markup)

5 This is what AI thinks a beautiful woman looks like
Light-skinned, thin, and impossibly glamorous. (WP $)
+ How it feels to be sexually objectified by an AI. (MIT Technology Review)

6 Inside the messy ethics of brain implants
The invasive surgery is restricted to disabled patients—for now. (FT $)
+ Beyond Neuralink: Meet the other companies developing brain-computer interfaces. (MIT Technology Review)

7 Learning more about the placenta could help prevent stillbirths
Many stillbirths have unidentified causes. Observing the placenta could help. (The Atlantic $)

8 The internet isn’t fun any more
And it hasn’t been for almost a decade. (Vox)
+ How to fix the internet. (MIT Technology Review)

9 Driverless car racing sounds seriously weird 🏎
It’s incredibly technically challenging, and entirely absent of thrills. (Ars Technica)

10 This app has reinvented the walkie talkie
For the TikTok generation. (TechCrunch)

Quote of the day

“I believe it’s as significant as Windows 95.”

—Cristiano Amon, chief executive of semiconductor company Qualcomm, hypes up its latest chip with a comparison to Microsoft’s seminal computer software, Bloomberg reports.  

The big story

How Bitcoin mining devastated this New York town

April 2022

If you had taken a gamble in 2017 and purchased Bitcoin, today you might be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a high price, as people started scouring the world for cheap sources of energy to run large Bitcoin-mining farms.

It didn’t take long for a subsidiary of the popular Bitcoin mining firm Coinmint to lease a Family Dollar store in Plattsburgh, a city in New York state offering cheap power. Soon, the company was regularly drawing enough power for about 4,000 homes. And while other miners were quick to follow, the problems had already taken root. Read the full story.

—Lois Parshley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Aww, this tiny seal could not be happier after his bath.
+ Photographer Rankin’s archive of 90s photos are beyond cool.
+ Astrolabes were the must-have gadgets of the Middle Ages.
+ I don’t remember this version of Les Misérables?

What’s next for MDMA

3 June 2024 at 05:00

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

MDMA, sometimes called Molly or ecstasy, has been banned in the United States for more than three decades. Now this potent mind-altering drug is poised to become a badly needed therapy for PTSD.

On June 4, the Food and Drug Administration’s advisory committee will meet to discuss the risks and benefits of MDMA therapy. If the committee votes in favor of the drug, it could be approved to treat PTSD this summer. The approval would represent a momentous achievement for proponents of mind-altering drugs, who have been working toward this goal for decades. And it could help pave the way for FDA approval of other illicit drugs like psilocybin. But the details surrounding how these compounds will make the transition from illicit substances to legitimate therapies are still foggy. 

Here’s what to know ahead of the upcoming hearing. 

What’s the argument for legitimizing MDMA? 

Studies suggest the compound can help treat mental-health disorders like PTSD and depression. Lykos, the company that has been developing MDMA as a therapy, looked at efficacy in two clinical trials that included about 200 people with PTSD. Researchers randomly assigned participants to receive psychotherapy with or without MDMA. The group that received MDMA-assisted therapy had a greater reduction in PTSD symptoms. They were also more likely to respond to treatment, to meet the criteria for PTSD remission, and to lose their diagnosis of PTSD.

But some experts question the validity of the results. With substances like MDMA, study participants almost always know whether they’ve received the drug or a placebo. That can skew the results, especially when the participants and therapists strongly believe a drug is going to help. The Institute for Clinical and Economic Review (ICER), a nonprofit research organization that evaluates the clinical and economic value of drugs, recently rated the evidence for MDMA-assisted therapy as “insufficient.”

In briefing documents published ahead of the June 4 meeting, FDA officials write that the question of approving MDMA “presents a number of complex review issues.”

The ICER report also referenced allegations of misconduct and ethical violations. Lykos (formerly the Multidisciplinary Association for Psychedelic Studies Public Benefit Corporation) acknowledges that ethical violations occurred in one particularly high-profile case. But in a rebuttal to the ICER report, more than 70 researchers involved in the trials wrote that “a number of assertions in the ICER report represent hearsay, and should be weighted accordingly.” Lykos did not respond to an interview request.

At the meeting on the 4th, the FDA has asked experts to discuss whether Lykos has demonstrated that MDMA is effective, whether the drug’s effect lasts, and what role psychotherapy plays. The committee will also discuss safety, including the drug’s potential for abuse and the risk posed by the impairment MDMA causes. 

What’s stopping people from using this therapy?

MDMA is illegal. In 1985, the Drug Enforcement Administration grew concerned about growing street use of the drug and added it to its list of Schedule I substances—those with a high abuse potential and no accepted medical use. 

MDMA boosts the brain’s production of feel-good neurotransmitters, causing a burst of euphoria and good will toward others. But the drug can also cause high blood pressure, memory problems, anxiety, irritability, and confusion. And repeated use can cause lasting changes in the brain.

If the FDA approves MDMA therapy, when will people be able to access it?

That has yet to be determined. It could take months for the DEA to reclassify the drug. After that, it’s up to individual states. 

Lykos applied for approval of MDMA-assisted therapy, not just the compound itself. In the clinical trials, MDMA administration happened in the presence of licensed therapists, who then helped patients process their emotions during therapy sessions that lasted for hours.

But regulating therapy isn’t part of the FDA’s purview. The FDA approves drugs; it doesn’t oversee how they’re administered. “The agency has been clear with us,” says Kabir Nath, CEO of Compass Pathways, the company working to bring psilocybin to market. “They don’t want to regulate psychotherapy, because they see that as the practice of medicine, and that’s not their job.” 

However, for drugs that carry a risk of serious side effects, the FDA can add a risk evaluation and mitigation strategy to its approval. For MDMA that might include mandating that the health-care professionals who administer the medication have certain certifications or specialized training, or requiring that the drug be dispensed only in licensed facilities. 

For example, Spravato, a nasal spray approved in 2019 for depression that works much like ketamine, is available only at a limited number of health-care facilities and must be taken under the observation of a health-care provider. Having safeguards in place for MDMA makes sense, at least at the outset, says Matt Lamkin, an associate professor at the University of Tulsa College of Law who has been following the field closely: “Given the history, I think it would only take a couple of high-profile bad incidents to potentially set things back.”

What mind-altering drug is next in line for FDA approval?

Psilocybin, a.k.a. the active ingredient in magic mushrooms. This summer Compass Pathways will release the first results from one of its phase 3 trials of psilocybin to treat depression. Results from the other trial will come in the middle of 2025, which—if all goes well—puts the company on track to file for approval in the fall or winter of next year. With the FDA review and the DEA rescheduling, “it’s still kind of two to three years out,” Nath says.

Some states are moving ahead without formal approval. Oregon voters made psilocybin legal in 2020, and the drug is now accessible there at about 20 licensed centers for supervised use. “It’s an adult use program that has a therapeutic element,” says Ismail Ali, director of policy and advocacy at the Multidisciplinary Association for Psychedelic Studies (MAPS).

Colorado voted to legalize psilocybin and some other plant-based psychedelics in 2022, and the state is now working to develop a framework to guide the licensing of facilitators to administer these drugs for therapeutic purposes. More states could follow. 

So would FDA approval of these compounds open the door to legal recreational use of psychedelics?

Maybe. The DEA can still prosecute physicians if they’re prescribing drugs outside of their medically accepted uses. But Lamkin does see the lines between recreational use and medical use getting blurry. “What we’re seeing is that the therapeutic uses have recreational side effects and the recreation has therapeutic side effects,” he says. “I’m interested to see how long they can keep the genie in the bottle.”

What’s the status of MDMA therapies elsewhere in the world? 

Last summer, Australia became the first country to approve MDMA and psilocybin as medicines to treat psychiatric disorders, but the therapies are not yet widely available. The first clinic opened just a few months ago. The US is poised to become the second country if the FDA greenlights Lykos’s application. Health Canada told the CBC it is watching the FDA’s review of MDMA “with interest.” Europe is lagging a bit behind, but there are some signs of movement. In April, the European Medicines Agency convened a workshop to bring together a variety of stakeholders to discuss a regulatory framework for psychedelics.

The Download: Google’s AI Overviews nightmare, and improving search and rescue drones

31 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why Google’s AI Overviews gets things wrong

When Google announced it was rolling out its artificial intelligence-powered search feature earlier this month, the company promised that “Google will do the googling for you.” The new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results.

Unfortunately, AI systems are inherently unreliable. And within days of AI Overviews being released in the US, users quickly shared examples of the feature suggesting that they add glue to pizza and eat at least one small rock a day, and claiming that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875. 

Yesterday, Liz Reid, head of Google Search, announced that the company has been making technical improvements to the system.

But why is AI Overviews returning unreliable, potentially dangerous information in the first place? And what, if anything, can be done to fix it? Read the full story.

—Rhiannon Williams

AI-directed drones could help find lost hikers faster

If a hiker gets lost in the rugged Scottish Highlands, rescue teams sometimes send up a drone to search for clues of the individual’s route. But with vast terrain to cover and limited battery life, picking the right area to search is critical.

Traditionally, expert drone pilots use a combination of intuition and statistical “search theory”—a strategy with roots in World War II–era hunting of German submarines—to prioritize certain search locations over others.
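The core of that search theory can be sketched as a simple Bayesian update over a handful of search areas. This is a toy illustration with made-up probabilities and a hypothetical `update_after_miss` helper, not the software rescue teams actually use:

```python
# Toy sketch of "search theory": each area gets a prior probability of
# containing the hiker, and an empty sweep shifts belief elsewhere.

def update_after_miss(priors, searched_area, detection_prob):
    """Bayesian update after a sweep of one area finds nothing."""
    posteriors = dict(priors)
    # Probability the hiker is there AND the drone failed to spot them.
    posteriors[searched_area] = priors[searched_area] * (1 - detection_prob)
    total = sum(posteriors.values())  # renormalize so beliefs sum to 1
    return {area: p / total for area, p in posteriors.items()}

# Hypothetical prior belief over three patches of terrain.
priors = {"ridge": 0.5, "valley": 0.3, "forest": 0.2}

# Sweep the most likely area first; assume a drone spots a hiker 80% of the time.
best = max(priors, key=priors.get)  # "ridge"
posterior = update_after_miss(priors, best, detection_prob=0.8)
print(max(posterior, key=posterior.get))  # the valley now tops the list
```

After each empty sweep the probabilities are redistributed, which is how planners decide where to fly next; the machine-learning approach aims to learn those priorities from data instead.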

Now researchers want to see if a machine-learning system could do better. Read the full story.

—James O’Donnell

What’s next for bird flu vaccines

In the US, bird flu has now infected cows in nine states, millions of chickens, and—as of last week—a second dairy worker. There’s no indication that the virus has acquired the mutations it would need to jump between humans, but the possibility of another pandemic has health officials on high alert. Last week, they said they are working to get 4.8 million doses of H5N1 bird flu vaccine packaged into vials as a precautionary measure. 

The good news is that we’re far more prepared for a bird flu outbreak than we were for covid. We know so much more about influenza than we did about coronaviruses. And we already have hundreds of thousands of doses of a bird flu vaccine sitting in the nation’s stockpile.

The bad news is we would need more than 600 million doses to cover everyone in the US, at two shots per person. And the process we typically use to produce flu vaccines takes months and relies on massive quantities of chicken eggs—one of the birds that’s susceptible to avian flu. Read about why we still use a cumbersome, 80-year-old vaccine production process to make flu vaccines—and how we can speed it up.

—Cassandra Willyard

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Russia, Iran and China used generative AI in covert propaganda campaigns
But their efforts weren’t overly successful. (NYT $) 
+ The groups used the generative AI models to write social media posts. (WP $)
+ NSO Group spyware has been used to hack Russian journalists living abroad. (Bloomberg $)
+ How generative AI is boosting the spread of disinformation and propaganda. (MIT Technology Review)

2 TikTok is reportedly working on a clone of its recommendation algorithm
Splitting its source code could trigger the creation of a US-only version of the app. (Reuters)
+ TikTok is attempting to convince the US of its independence from China. (The Verge)

3 A man in England has received a personalized cancer vaccine
Elliot Pfebve is the first patient to receive the jab as part of a major trial. (The Guardian)
+ Cancer vaccines are having a renaissance. (MIT Technology Review)

4 Amazon’s drone delivery business has cleared a major hurdle
US regulators have approved its drones to fly longer distances. (CNBC)

5 OpenAI has launched a version of ChatGPT for universities
ChatGPT Edu is supposed to help institutions deploy AI “responsibly.” (Forbes)
+ ChatGPT is going to change education, not destroy it. (MIT Technology Review)

6 Chile is fighting back against Big Tech’s data centers
Activists aren’t happy with the American giants’ lack of transparency. (Rest of World)
+ Energy-hungry data centers are quietly moving into cities. (MIT Technology Review)

7 Israel is tracking subatomic particles to map underground areas
Archaeologists avoid digging in places with religious significance. (Bloomberg $)

8 Ecuador is in serious trouble 
Drought and power outages are making daily life increasingly difficult. (Wired $)
+ Emissions hit a record high in 2023. Blame hydropower. (MIT Technology Review)

9 How to fight the rise of audio deepfakes
A wave of new techniques could make it easier to tackle the convincing clips. (IEEE Spectrum)
+ Here’s what it’s like to come across your nonconsensual AI clone. (404 Media)
+ An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary. (MIT Technology Review)

10 The James Webb Space Telescope has spotted its most distant galaxy yet 🌌
The JADES-GS-z14-0 galaxy was captured as it was a mere 290 million years after the Big Bang. (BBC)

Quote of the day

“Despite what Donald Trump thinks, America is not for sale to billionaires, oil and gas executives, or even Elon Musk.”

—James Singer, a spokesperson for the Biden campaign, mocks Trump’s attempts to court Musk and other mega donors to fund his reelection campaign, the Financial Times reports.

The big story

How to fix the internet

October 2023

We’re in a very strange moment for the internet. We all know it’s broken. But there’s a sense that things are about to change. The stranglehold that the big social platforms have had on us for the last decade is weakening.

There’s a sort of common wisdom that the internet is irredeemably bad. That social platforms, hungry to profit off your data, opened a Pandora’s box that cannot be closed.

But the internet has also provided a haven for marginalized groups and a place for support. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh.

The internet is worth fighting for because despite all the misery, there’s still so much good to be found there. And yet, fixing online discourse is the definition of a hard problem. But don’t worry. I have an idea. Read the full story.

—Katie Notopoulos

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ It’s peony season!
+ Forget giant squid—there’s colossal squid living in the depths of the ocean. 🦑
+  Is a long conversation in a film your idea of cinematic perfection, or a drawn-out nightmare?
+ Here’s how to successfully decompress after a long day at work.

Why Google’s AI Overviews gets things wrong

31 May 2024 at 06:15

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more here.

When Google announced it was rolling out its artificial-intelligence-powered search feature earlier this month, the company promised that “Google will do the googling for you.” The new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results.

Unfortunately, AI systems are inherently unreliable. Within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. It suggested that users add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875. 

On Thursday, Liz Reid, head of Google Search, announced that the company has been making technical improvements to the system to make it less likely to generate incorrect answers, including better detection mechanisms for nonsensical queries. It is also limiting the inclusion of satirical, humorous, and user-generated content in responses, since such material could result in misleading advice.

But why is AI Overviews returning unreliable, potentially dangerous information? And what, if anything, can be done to fix it?

How does AI Overviews work?

In order to understand why AI-powered search engines get things wrong, we need to look at how they’ve been optimized to work. We know that AI Overviews uses a new generative AI model in Gemini, Google’s family of large language models (LLMs), that’s been customized for Google Search. That model has been integrated with Google’s core web ranking systems and designed to pull out relevant results from its index of websites.

Most LLMs simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. That leads to hallucinations. It’s likely that the Gemini model in AI Overviews gets around this by using an AI technique called retrieval-augmented generation (RAG), which allows an LLM to check specific sources outside of the data it’s been trained on, such as certain web pages, says Chirag Shah, a professor at the University of Washington who specializes in online search.
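That next-word mechanic can be illustrated with a toy sketch, using invented scores rather than a real model: the model assigns a score to every candidate token, converts the scores to probabilities, and samples. Nothing in the loop checks whether the result is true.

```python
import math
import random

# Toy next-token step (hypothetical scores, not a real language model):
# turn scores into probabilities with softmax, then sample one token.

def softmax(logits):
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores for the word following "spread the cheese on the ..."
logits = {"pizza": 2.1, "sauce": 1.7, "rock": 0.3}
probs = softmax(logits)

random.seed(0)
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(next_token)  # chosen purely by statistics, with no ground truth to check
```

Every word of an answer is produced this way, which is why the output reads fluently even when the underlying claim is false.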

Once a user enters a query, it’s checked against the documents that make up the system’s information sources, and a response is generated. Because the system is able to match the original query to specific parts of web pages, it’s able to cite where it drew its answer from—something normal LLMs cannot do.

One major upside of RAG is that the responses it generates to a user’s queries should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data. The technique is often used to try to prevent LLMs from hallucinating. (A Google spokesperson would not confirm whether AI Overviews uses RAG.)
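The retrieve-then-generate loop can be sketched minimally as follows, assuming a crude keyword retriever and a placeholder `generate()` standing in for the LLM; neither is how Google’s actual system works:

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query,
# then hand them to the generator so the answer can cite its sources.
# The keyword retriever and generate() are stand-ins, not Google's system.

def retrieve(query, documents, k=2):
    """Rank documents by crude keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, sources):
    # Placeholder for the LLM call: a real model would condition its
    # answer on these passages and cite them by number.
    cited = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return f"Answer to {query!r}, drawing on:\n{cited}"

docs = [
    "Cheese slides off pizza when the sauce is too wet.",
    "Joke post: add glue to your pizza so the cheese sticks.",
    "The moon landing happened in 1969.",
]

sources = retrieve("why does cheese not stick to pizza", docs)
print(generate("why does cheese not stick to pizza", sources))
```

Note that the joke post ranks highest here: overlap-based retrieval measures relevance, not reliability, so a “relevant” source can still be a bad one.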

So why does it return bad answers?

But RAG is far from foolproof. In order for an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. A bad answer results when one or both parts of the process fail.

In the case of AI Overviews’ recommendation of a pizza recipe that contains glue—drawing from a joke post on Reddit—it’s likely that the post appeared relevant to the user’s original query about cheese not sticking to pizza, but something went wrong in the retrieval process, says Shah. “Just because it’s relevant doesn’t mean it’s right, and the generation part of the process doesn’t question that,” he says.

Similarly, if a RAG system comes across conflicting information, like a policy handbook and an updated version of the same handbook, it’s unable to work out which version to draw its response from. Instead, it may combine information from both to create a potentially misleading answer. 

“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” says Suzan Verberne, a professor at Leiden University who specializes in natural-language processing.

The more specific a topic is, the higher the chance of misinformation in a large language model’s output, she says, adding: “This is a problem in the medical domain, but also education and science.”

According to the Google spokesperson, in many cases when AI Overviews returns incorrect answers it’s because there’s not a lot of high-quality information available on the web to show for the query—or because the query most closely matches satirical sites or joke posts.

The spokesperson says the vast majority of AI Overviews provide high-quality information and that many of the examples of bad answers were in response to uncommon queries, adding that AI Overviews containing potentially harmful, obscene, or otherwise unacceptable content came up in response to less than one in every 7 million unique queries. Google is continuing to remove AI Overviews on certain queries in accordance with its content policies. 

It’s not just about bad training data

Although the pizza glue blunder is a good example of a case where AI Overviews pointed to an unreliable source, the system can also generate misinformation from factually correct sources. Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico, googled “How many Muslim presidents has the US had?” AI Overviews responded: “The United States has had one Muslim president, Barack Hussein Obama.” 

While Barack Obama is not Muslim, making AI Overviews’ response wrong, it drew its information from a chapter in an academic book titled Barack Hussein Obama: America’s First Muslim President? So not only did the AI system miss the entire point of the essay, it interpreted it in the exact opposite of the intended way, says Mitchell. “There’s a few problems here for the AI; one is finding a good source that’s not a joke, but another is interpreting what the source is saying correctly,” she adds. “This is something that AI systems have trouble doing, and it’s important to note that even when it does get a good source, it can still make errors.”

Can the problem be fixed?

Ultimately, we know that AI systems are unreliable, and so long as they are using probability to generate text word by word, hallucination is always going to be a risk. And while AI Overviews is likely to improve as Google tweaks it behind the scenes, we can never be certain it’ll be 100% accurate.

Google has said that it’s adding triggering restrictions for queries where AI Overviews were not proving to be especially helpful and has added additional “triggering refinements” for queries related to health. The company could add a step to the information retrieval process designed to flag a risky query and have the system refuse to generate an answer in these instances, says Verberne. Google doesn’t aim to show AI Overviews for explicit or dangerous topics, or for queries that indicate a vulnerable situation, the company spokesperson says.

Techniques like reinforcement learning from human feedback, which incorporates such feedback into an LLM’s training, can also help improve the quality of its answers. 

Similarly, LLMs could be trained specifically for the task of identifying when a question cannot be answered, and it could also be useful to instruct them to carefully assess the quality of a retrieved document before generating an answer, Verberne says: “Proper instruction helps a lot!” 

Although Google has added a label to AI Overviews answers reading “Generative AI is experimental,” it should consider making it much clearer that the feature is in beta and emphasizing that it is not ready to provide fully reliable answers, says Shah. “Until it’s no longer beta—which it currently definitely is, and will be for some time—it should be completely optional. It should not be forced on us as part of core search.”

What’s next for bird flu vaccines

31 May 2024 at 06:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Here in the US, bird flu has now infected cows in nine states, millions of chickens, and—as of last week—a second dairy worker. There’s no indication that the virus has acquired the mutations it would need to jump between humans, but the possibility of another pandemic has health officials on high alert. Last week, they said they are working to get 4.8 million doses of H5N1 bird flu vaccine packaged into vials as a precautionary measure. 

The good news is that we’re far more prepared for a bird flu outbreak than we were for covid. We know so much more about influenza than we did about coronaviruses. And we already have hundreds of thousands of doses of a bird flu vaccine sitting in the nation’s stockpile.

The bad news is we would need more than 600 million doses to cover everyone in the US, at two shots per person. And the process we typically use to produce flu vaccines takes months and relies on massive quantities of chicken eggs. Yes, chickens. One of the birds that’s susceptible to avian flu. (Talk about putting all our eggs in one basket. #sorrynotsorry)

This week in The Checkup, let’s look at why we still use a cumbersome, 80-year-old vaccine production process to make flu vaccines—and how we can speed it up.

The idea to grow flu virus in fertilized chicken eggs originated with Frank Macfarlane Burnet, an Australian virologist. In 1936, he discovered that if he bored a tiny hole in the shell of a chicken egg and injected flu virus between the shell and the inner membrane, he could get the virus to replicate.  

Even now, we still grow flu virus in much the same way. “I think a lot of it has to do with the infrastructure that’s already there,” says Scott Hensley, an immunologist at the University of Pennsylvania’s Perelman School of Medicine. It’s difficult for companies to pivot. 

The process works like this: Health officials provide vaccine manufacturers with a candidate vaccine virus that matches circulating flu strains. That virus is injected into fertilized chicken eggs, where it replicates for several days. The virus is then harvested, killed (for most use cases), purified, and packaged. 

Making flu vaccine in eggs has a couple of major drawbacks. For a start, the virus doesn’t always grow well in eggs. So the first step in vaccine development is creating a virus that does. That happens through an adaptation process that can take weeks or even months. This process is particularly tricky for bird flu: Viruses like H5N1 are deadly to birds, so the virus might end up killing the embryo before the egg can produce much virus. To avoid this, scientists have to develop a weakened version of the virus by combining genes from the bird flu virus with genes typically used to produce seasonal flu virus vaccines. 

And then there’s the problem of securing enough chickens and eggs. Right now, many egg-based production lines are focused on producing vaccines for seasonal flu. They could switch over to bird flu, but “we don’t have the capacity to do both,” Amesh Adalja, an infectious disease specialist at Johns Hopkins University, told KFF Health News. The US government is so worried about its egg supply that it keeps secret, heavily guarded flocks of chickens peppered throughout the country. 

Most of the flu virus used in vaccines is grown in eggs, but there are alternatives. The seasonal flu vaccine Flucelvax, produced by CSL Seqirus, is grown in a cell line derived in the 1950s from the kidney of a cocker spaniel. The virus used in the seasonal flu vaccine FluBlok, made by Protein Sciences, isn’t grown; it’s synthesized. Scientists engineer an insect virus to carry the gene for hemagglutinin, a key component of the flu virus that triggers the human immune system to create antibodies against it. That engineered virus turns insect cells into tiny hemagglutinin production plants.   

And then we have mRNA vaccines, which wouldn’t require vaccine manufacturers to grow any virus at all. There aren’t yet any approved mRNA vaccines for influenza, but many companies are fervently working on them, including Pfizer, Moderna, Sanofi, and GSK. “With the covid vaccines and the infrastructure that’s been built for covid, we now have the capacity to ramp up production of mRNA vaccines very quickly,” says Hensley. This week, the Financial Times reported that the US government will soon close a deal with Moderna to provide tens of millions of dollars to fund a large clinical trial of a bird flu vaccine the company is developing.

There are hints that egg-free vaccines might work better than egg-based vaccines. A CDC study published in January showed that people who received Flucelvax or FluBlok had more robust antibody responses than those who received egg-based flu vaccines. That may be because viruses grown in eggs sometimes acquire mutations that help them grow better in eggs. Those mutations can change the virus so much that the immune response generated by the vaccine doesn’t work as well against the actual flu virus that’s circulating in the population. 

Hensley and his colleagues are developing an mRNA vaccine against bird flu. So far they’ve only tested it in animals, but the shot performed well, he claims. “All of our preclinical studies in animals show that these vaccines elicit a much stronger antibody response compared with conventional flu vaccines.”

No one can predict when we might need a pandemic flu vaccine. But just because bird flu hasn’t made the jump to a pandemic doesn’t mean it won’t. “The cattle situation makes me worried,” Hensley says. Humans are in constant contact with cows, he explains. While there have only been a couple of human cases so far, “the fear is that some of those exposures will spark a fire.” Let’s make sure we can extinguish it quickly. 


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

In a previous issue of The Checkup, Jessica Hamzelou explained what it would take for bird flu to jump to humans. And last month, after bird flu began circulating in cows, I posted an update that looked at strategies to protect people and animals.

I don’t have to tell you that mRNA vaccines are a big deal. In 2021, MIT Technology Review highlighted them as one of the year’s 10 breakthrough technologies. Antonio Regalado explored their massive potential to transform medicine. Jessica Hamzelou wrote about the other diseases researchers are hoping to tackle. I followed up with a story after two mRNA researchers won a Nobel Prize. And earlier this year I wrote about a new kind of mRNA vaccine that’s self-amplifying, meaning it not only works at lower doses, but also sticks around for longer in the body. 

From around the web

Researchers installed a literal window into the brain, allowing for ultrasound imaging that they hope will be a step toward less invasive brain-computer interfaces. (Stat)

People who carry antibodies against the common viruses used to deliver gene therapies can mount a dangerous immune response if they’re re-exposed. That means many people are ineligible for these therapies and others can’t get a second dose. Now researchers are hunting for a solution. (Nature)

More good news about Ozempic. A new study shows that the drug can cut the risk of kidney complications, including death in people with diabetes and chronic kidney disease. (NYT)

Microplastics are everywhere. Including testicles. (Scientific American)

Must read: This story, the second in a series on the denial of reproductive autonomy for people with sickle-cell disease, examines how the US medical system undermines a woman’s right to choose. (Stat)

AI-directed drones could help find lost hikers faster

30 May 2024 at 11:26

If a hiker gets lost in the rugged Scottish Highlands, rescue teams sometimes send up a drone to search for clues of the individual’s route—trampled vegetation, dropped clothing, food wrappers. But with vast terrain to cover and limited battery life, picking the right area to search is critical.

Traditionally, expert drone pilots use a combination of intuition and statistical “search theory”—a strategy with roots in World War II–era hunting of German submarines—to prioritize certain search locations over others. Jan-Hendrik Ewers and a team from the University of Glasgow recently set out to see if a machine-learning system could do better.

Ewers grew up skiing and hiking in the Highlands, giving him a clear idea of the complicated challenges involved in rescue operations there. “There wasn’t much to do growing up, other than spending time outdoors or sitting in front of my computer,” he says. “I ended up doing a lot of both.”

To start, Ewers took data sets of search-and-rescue cases from around the world, which include details such as an individual’s age, whether they were hunting, horseback riding, or hiking, and if they suffered from dementia, along with information about the location where the person was eventually found—by water, buildings, open ground, trees, or roads. He trained an AI model with this data, in addition to geographical data from Scotland. The model runs millions of simulations to reveal the routes a missing person would be most likely to take under the specific circumstances. The result is a probability distribution—a heat map of sorts—indicating the priority search areas. 
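The simulation step can be pictured as a Monte Carlo process: run many simulated trajectories out from the point where the person was last seen, and count where they end up. Below is a toy sketch of that idea. Ewers's actual system is a trained deep-learning model conditioned on subject and terrain data; the unbiased random walk, grid size, and step count here are invented purely for illustration.

```python
import random

def simulate_heatmap(grid_size=50, start=(25, 25), n_walks=10_000, steps=60, seed=42):
    """Monte Carlo sketch: random walks from the last-known position.

    Each simulated walk ends somewhere on the grid; the normalized counts
    of end positions form a probability map of where a missing person
    might be. A real model would bias each step using terrain features
    and the subject's profile rather than moving uniformly at random.
    """
    rng = random.Random(seed)
    counts = [[0] * grid_size for _ in range(grid_size)]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_walks):
        x, y = start
        for _ in range(steps):
            dx, dy = rng.choice(moves)
            # Clamp to the grid edges so walks stay inside the search area.
            x = min(max(x + dx, 0), grid_size - 1)
            y = min(max(y + dy, 0), grid_size - 1)
        counts[y][x] += 1
    # Normalize counts into a probability distribution over grid cells.
    return [[c / n_walks for c in row] for row in counts]

heatmap = simulate_heatmap()
```

The resulting grid of probabilities is the "heat map": cells near the last-known position carry most of the mass, and a planner can prioritize them.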

With this kind of probability map, the team showed that deep learning could be used to design more efficient search paths for drones. In research published last week on arXiv, which has not yet been peer reviewed, the team tested its algorithm against two common search patterns: the “lawn mower,” in which a drone would fly over a target area in a series of simple stripes, and an algorithm similar to Ewers’s but less adept at working with probability distribution maps.

In virtual testing, Ewers’s algorithm beat both of those approaches on two key measures: the distance a drone would have to fly to locate the missing person, and the likelihood that the person was found. While the lawn mower and the existing algorithmic approach found the person 8% of the time and 12% of the time, respectively, Ewers’s approach found them 19% of the time. If it proves successful in real rescue situations, the new system could speed up response times, and save more lives, in scenarios where every minute counts. 
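The intuition behind these results can be seen in a simplified comparison: give both searchers the same budget of cells to check, but let one sweep fixed stripes while the other visits cells in descending probability order. This greedy ordering is my own simplification for illustration; it ignores the flight distance between cells, which the real path planner must also optimize.

```python
def coverage(path, prob):
    """Total probability mass covered by visiting the cells in `path`."""
    return sum(prob[y][x] for x, y in set(path))

def lawnmower_path(width, height, budget):
    """Sweep rows in alternating directions until the budget runs out."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            if len(path) == budget:
                return path
            path.append((x, y))
    return path

def greedy_path(prob, budget):
    """Visit the highest-probability cells first (ignores travel cost)."""
    cells = [(x, y) for y in range(len(prob)) for x in range(len(prob[0]))]
    cells.sort(key=lambda c: prob[c[1]][c[0]], reverse=True)
    return cells[:budget]

# A peaked, normalized probability map: the person is likely near (7, 7).
size = 10
prob = [[1.0 / (1 + abs(x - 7) + abs(y - 7)) for x in range(size)] for y in range(size)]
total = sum(map(sum, prob))
prob = [[p / total for p in row] for row in prob]

budget = 20  # both drones may check 20 cells before the battery runs out
mower = coverage(lawnmower_path(size, size, budget), prob)
greedy = coverage(greedy_path(prob, budget), prob)
```

Because the lawn mower spends its budget on stripes far from the probability peak, the probability-guided path covers strictly more of the mass for the same battery budget, mirroring the gap Ewers's team measured.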

“The search-and-rescue domain in Scotland is extremely varied, and also quite dangerous,” Ewers says. Emergencies can arise in thick forests on the Isle of Arran, the steep mountains and slopes around the Cairngorm Plateau, or the faces of Ben Nevis, one of the most revered but dangerous rock climbing destinations in Scotland. “Being able to send up a drone and efficiently search with it could potentially save lives,” he adds.

Search-and-rescue experts say that using deep learning to design more efficient drone routes could help locate missing persons faster in a variety of wilderness areas, depending on how well suited the environment is for drone exploration (it’s harder for drones to explore dense canopy than open brush, for example).

“That approach in the Scottish Highlands certainly sounds like a viable one, particularly in the early stages of search when you’re waiting for other people to show up,” says David Kovar, a director at the US National Association for Search and Rescue in Williamsburg, Virginia, who has used drones for everything from disaster response in California to wilderness search missions in New Hampshire’s White Mountains. 

But there are caveats. The success of such a planning algorithm will hinge on how accurate the probability maps are. Overreliance on these maps could mean that drone operators spend too much time searching the wrong areas. 

Ewers says a key next step to making the probability maps as accurate as possible will be obtaining more training data. To do that, he hopes to use GPS data from more recent rescue operations to run simulations, essentially helping his model to understand the connections between the location where someone was last seen and where they were ultimately found. 

Not all rescue operations contain rich enough data for him to work with, however. “We have this problem in search and rescue where the training data is extremely sparse, and we know from machine learning that we want a lot of high-quality data,” Ewers says. “If an algorithm doesn’t perform better than a human, you are potentially risking someone’s life.”

Drones are becoming more common in the world of search and rescue. But they are still a relatively new technology, and regulations surrounding their use are still in flux.

In the US, for example, drone pilots are required to have a constant line of sight between them and their drone. In Scotland, meanwhile, operators aren’t permitted to be more than 500 meters away from their drone. These rules are meant to prevent accidents, such as a drone falling and endangering people, but in rescue settings such rules severely curtail ground rescuers’ ability to survey for clues. 

“Oftentimes we’re facing a regulatory problem rather than a technical problem,” Kovar says. “Drones are capable of doing far more than we’re allowed to use them for.”

Ewers hopes that models like his might one day expand the capabilities of drones even more. For now, he is in conversation with the Police Scotland Air Support Unit to see what it would take to test and deploy his system in real-world settings. 

The Download: the future of electroceuticals, and bigger EVs

30 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The messy quest to replace drugs with electricity

In the early 2010s, electricity seemed poised for a hostile takeover of your doctor’s office. Research into how the nervous system—the highway that carries electrical messages between the brain and the body— controls the immune response was gaining traction.

And that had opened the door to the possibility of hacking into the body’s circuitry and thereby controlling a host of chronic diseases, including rheumatoid arthritis, asthma, and diabetes, as if the immune system were as reprogrammable as a computer.

To do that you’d need a new class of implant: an “electroceutical.” These devices would replace drugs. No more messy side effects. And no more guessing whether a drug would work differently for you and someone else. In the 10 years or so since, around a billion dollars has accreted around the effort. But electroceuticals have still not taken off as hoped.

Now, however, a growing number of researchers are starting to look beyond the nervous system, and experimenting with clever ways to electrically manipulate cells elsewhere in the body, such as the skin.

Their work suggests that this approach could match the early promise of electroceuticals, yielding fast-healing bioelectric bandages, novel approaches to treating autoimmune disorders, new ways of repairing nerve damage, and even better treatments for cancer. Read the full story.

—Sally Adee

Why bigger EVs aren’t always better

SUVs are taking over the world—larger vehicle models made up nearly half of new car sales globally in 2023, a new record for the segment. 

There are a lot of reasons to be nervous about the ever-expanding footprint of vehicles, from pedestrian safety and road maintenance concerns to higher greenhouse-gas emissions. But in a way, SUVs also represent a massive opportunity for climate action, since pulling the worst gas-guzzlers off the roads and replacing them with electric versions could be a big step in cutting pollution. 

It’s clear that we’re heading toward a future with bigger cars. Here’s what it might mean for the climate, and for our future on the road. Read the full story.

—Casey Crownhart

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A pro-Palestinian AI image has been shared millions of times
But critics of social media activism feel it’s merely performative. (WP $)
+ The smooth, sanitized picture is inescapable across Instagram and TikTok. (Vox)
+ It appears to have originated from Malaysia. (The Guardian)

2 OpenAI is struggling to rein in its internal rows
Six months after Sam Altman returned as CEO following a coup, divisions remain. (FT $)
+ A nonprofit created by former Facebook workers is experiencing similar problems. (Wired $)

3 Chinese EV makers are facing a new hurdle in the US
A new bill could quadruple import duties on Chinese EVs to 100%. (TechCrunch)
+ Why China’s EV ambitions need virtual power plants. (MIT Technology Review)

4 India’s election wasn’t derailed by deepfakes
AI fakery was largely restricted to trolling, rather than malicious interference. (Rest of World)
+ Meta says AI-generated election content is not happening at a “systemic level.” (MIT Technology Review)

5 Extreme weather events are feeding into each other
It’s becoming more difficult to separate disasters into standalone events. (Vox)
+ Our current El Niño climate event is about to make way for La Niña. (The Atlantic $)
+ Last summer was the hottest in 2,000 years. Here’s how we know. (MIT Technology Review)

6 It’s high time to stop paying cyber ransoms
Paying criminals isn’t stopping attacks, experts warn. (Bloomberg $)

7 How programmatic advertising facilitated the spread of misinformation
Algorithmically placed ads are funding shadowy operations across the web. (Wired $)

8 Smart bandages could help wounds heal faster 🩹
Sensor-embedded dressings could help doctors to monitor ailments remotely. (WSJ $)

9 Move over smartphones—the intelliPhones are coming 📱
It’s a lame name for the AI-powered phones of tomorrow. (Insider $) 

10 The content creators worth paying attention to
Algorithms are no substitute for enthusiastic human curators. (New Yorker $)

Quote of the day

“It’s not about managing your home, it’s about what’s happening. That’s like, ‘Hey, there’s raccoons in my backyard.’”

—Liz Hamren, CEO of smart doorbell company Ring, explains to Bloomberg the firm’s pivot away from fighting neighborhood crime and toward keeping tabs on wildlife.

The big story

House-flipping algorithms are coming to your neighborhood

April 2022

When Michael Maxson found his dream home in Nevada, it was owned not by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it had been damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson learned that the house had already been sold to another family, at the same price he had offered.

During this time, Zillow lost more than $420 million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ What mathematics can tell us about the formation of animal patterns.
+ How much pasta is too much pasta?
+ Here’s how to stretch out your lower back—without risking making it worse.
+ Over on the Thailand-Malaysia border, food is an essential signifier of identity.

Why bigger EVs aren’t always better

30 May 2024 at 06:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

SUVs are taking over the world—larger vehicle models made up nearly half of new car sales globally in 2023, a new record for the segment. 

There are a lot of reasons to be nervous about the ever-expanding footprint of vehicles, from pedestrian safety and road maintenance concerns to higher greenhouse-gas emissions. But in a way, SUVs also represent a massive opportunity for climate action, since pulling the worst gas-guzzlers off the roads and replacing them with electric versions could be a big step in cutting pollution. 

It’s clear that we’re heading toward a future with bigger cars. Here’s what it might mean for the climate, and for our future on the road. 

SUVs accounted for 48% of global car sales in 2023, according to a new analysis from the International Energy Agency. This is a continuation of a trend toward bigger cars—just a decade ago, SUVs only made up about 20% of new vehicle sales. 

Big vehicles mean big emissions numbers. Last year there were more than 360 million SUVs on the roads, and they produced a billion metric tons of carbon dioxide. If SUVs were a country, they’d have the fifth-highest emissions of any nation on the planet—more than Japan. Of all the energy-related emissions growth last year, over 20% can be attributed to SUVs. 

There are several factors driving the world’s move toward larger vehicles. Larger cars tend to have higher profit margins, so companies may be more likely to make and push those models. And drivers are willing to jump on the bandwagon. I understand the appeal—I learned to drive in a huge SUV, and being able to stretch out my legs and float several feet above traffic has its perks. 

Electric vehicles are very much following the trend, with several companies unveiling larger models in the past few years. Some of these newly released electric SUVs are seeing massive success. The Tesla Model Y, released in 2020, was far and away the most popular EV last year, with over 1.2 million units sold in 2023. The BYD Song (also an SUV) took second place with 630,000 sold. 

Globally, SUVs made up nearly 50% of new EV sales in 2023, compared to just under 20% in 2018, according to the IEA’s Global EV Outlook 2024. There’s also been a shift away from small cars (think the size of the Fiat 500) and toward large ones (similar to the BMW 7-series). 

And big-car obsession is a global phenomenon. The US is the land of the free and the home of the massive vehicles—SUVs made up 65% of new electric-vehicle sales in the country in 2023. But other major markets aren’t all that far behind: in Europe, the share was 52%, and in China, it was 36%. (You can see the IEA’s data broken down by region here.)

So it’s clear that we’re clamoring for bigger cars. Now what? 

One way of looking at this whole thing is that SUVs offer up an incredible opportunity for climate action. EVs will reduce emissions over their life span relative to gas-powered versions of the same model, so electrifying the biggest emitters on the roads would have an outsize impact. If all gas-powered and hybrid SUVs sold in 2023 were instead electric vehicles, about 770 million metric tons of carbon dioxide would be avoided over the lifetime of those vehicles, according to the IEA report. That’s equivalent to all of China’s road emissions last year. 

I previously wrote a somewhat hesitant defense of large EVs for this reason—electric SUVs aren’t perfect, but they could still help us address climate change. If some drivers are willing to buy an EV but aren’t willing to downsize their cars, then having larger electric options available could be a huge lever for climate action. 

But there are several very legitimate reasons why not everyone is welcoming the future of massive cars (even electric ones) with open arms. Larger vehicles are harder on roads, making upkeep more expensive. SUVs and other big vehicles are way more dangerous for pedestrians, too. Vehicles with higher front ends and blunter profiles are 45% more likely to cause fatalities in crashes with pedestrians. 

Bigger EVs could also have a huge effect on the amount of mining we’ll need to do to meet demand for metals like lithium, nickel, and cobalt. One 2023 study found that larger vehicles could increase the amount of mining needed more than 50% by 2050, relative to the amount that would be necessary if people drove smaller vehicles. Given that mining is energy intensive and can come with significant environmental harms, it’s not an unreasonable worry. 

New technologies could help reduce the mining we need to do for some materials: LFP batteries that don’t contain nickel or cobalt are quickly growing in market share, especially in China, and they could help reduce demand for those metals.

Another potential solution is reducing the demand for bigger cars in the first place. Policies have historically had a hand in pushing people toward larger cars and could help us make a U-turn on car bloat. Some countries, including Norway and France, now charge more in taxes or registration for larger vehicles. Paris recently jacked up parking rates for SUVs. 

For now, our vehicles are growing, and if we’re going to have SUVs on the roads, then we should have electric options. But bigger isn’t always better. 


Now read the rest of The Spark

Related reading

I’ve defended big EVs in the past—SUVs come with challenges, but electric ones are hands-down better for emissions than gas-guzzlers. Read this 2023 newsletter for more

The average size of batteries in EVs has steadily ticked up in recent years, as I touched on in this newsletter from last year

Electric cars are still cars, and smaller, safer EVs, along with more transit options, will be key to hitting our climate goals, Paris Marx argued in this 2022 op-ed

Keeping up with climate  

We might be underestimating how much power transmission lines can carry. Sensors can give grid operators a better sense of capacity based on factors like temperature and wind speed, and it could help projects hook up to the grid faster. (Canary Media)

North America could be in for an active fire season, though it’s likely not going to rise to the level of 2023. (New Scientist)

Climate change is making some types of turbulence more common, and that could spell trouble for flying. Studying how birds move might provide clues about dangerous spots. (BBC)

The perceived slowdown for EVs in the US is looking more like a temporary blip than an ongoing catastrophe. Tesla is something of an outlier with its recent slump—most automakers saw greater than 50% growth in the first quarter of this year. (Bloomberg)

This visualization shows just how dominant China is in the EV supply chain, from mining materials like graphite to manufacturing battery cells. (Cipher News)

Climate change is coming for our summer oysters. The varieties that have been bred to be eaten year round are sensitive to extreme heat, making their future rocky. (The Atlantic)

The US has new federal guidelines for carbon offsets. It’s an effort to fix up an industry that studies and reports have consistently shown doesn’t work very well. (New York Times)

The most stubborn myth about heat pumps is that they don’t work in cold weather. Heat pumps are actually more efficient than gas furnaces in cold conditions. (Wired)

The messy quest to replace drugs with electricity

30 May 2024 at 05:00

In the early 2010s, electricity seemed poised for a hostile takeover of your doctor’s office. Research into how the nervous system controls the immune response was gaining traction. And that had opened the door to the possibility of hacking into the body’s circuitry and thereby controlling a host of chronic diseases, including rheumatoid arthritis, asthma, and diabetes, as if the immune system were as reprogrammable as a computer.

To do that you’d need a new class of implant: an “electroceutical,” formally introduced in an article in Nature in 2013. “What we are doing is developing devices to replace drugs,” coauthor and neurosurgeon Kevin Tracey told Wired UK. These would become a “mainstay of medical treatment.” No more messy side effects. And no more guessing whether a drug would work differently for you and someone else.

There was money behind this vision: the British pharmaceutical giant GlaxoSmithKline announced a $1 million research prize, a $50 million venture fund, and an ambitious program to fund 40 researchers who would identify neural pathways that could control specific diseases. And the company had an aggressive timeline in mind. As one GlaxoSmithKline executive put it, the goal was to have “the first medicine that speaks the electrical language of our body ready for approval by the end of this decade.” 

In the 10 years or so since, around a billion dollars has accreted around the effort by way of direct and indirect funding. Some implants developed in that electroceutical push have trickled into clinical trials, and two companies affiliated with GlaxoSmithKline and Tracey are ramping up for splashy announcements later this year. We don’t know much yet about how successful the trials now underway have been. But widespread regulatory approval of the sorts of devices envisioned in 2013—devices that could be applied to a broad range of chronic diseases—is not imminent. Electroceuticals are a long way from fomenting a revolution in medical care.

At the same time, a new area of science has begun to cohere around another way of using electricity to intervene in the body. Instead of focusing only on the nervous system—the highway that carries electrical messages between the brain and the body—a growing number of researchers are finding clever ways to electrically manipulate cells elsewhere in the body, such as skin and kidney cells, more directly than ever before. Their work suggests that this approach could match the early promise of electroceuticals, yielding fast-healing bioelectric bandages, novel approaches to treating autoimmune disorders, new ways of repairing nerve damage, and even better treatments for cancer. However, such ventures have not benefited from investment largesse. Investors tend to understand the relationship between biology and electricity only in the context of the nervous system. “These assumptions come from biases and blind spots that were baked in during 100 years of neuroscience,” says Michael Levin, a bioelectricity researcher at Tufts University. 

Electrical implants have already had success in targeting specific problems like epilepsy, sleep apnea, and catastrophic bowel dysfunction. But the broader vision of replacing drugs with nerve-zapping devices, especially ones that alter the immune system, has been slower to materialize. In some cases, perhaps the nervous system is not the best way in. Looking beyond this singular locus of control might open the way for a wider suite of electromedical interventions—especially if the nervous system proves less amenable to hacking than originally advertised. 

How it started

GSK’s ambitious electroceutical venture was a response to an increasingly onerous problem: 90% of drugs fall down during the obstacle race through clinical trials. A new drug that does squeak by can cost $2 billion or $3 billion and take 10 to 15 years to bring to market, a galling return on investment. The flaw is in the delivery system. The way we administer healing chemicals hasn’t had much of a conceptual overhaul since the Renaissance physician Paracelsus: ingest or inject. Both approaches have built-in inefficiencies: it takes a long time for the drugs to build up in your system, and they can disperse widely before arriving in diluted form at their target, which may make them useless where they are needed and toxic elsewhere. Tracey and Kristoffer Famm, a coauthor on the Nature article who was then a VP at GlaxoSmithKline, explained on the publicity circuit that electroceuticals would solve these problems—acting more quickly and working only in the precise spot where the intervention was needed. After 500 years, finally, here was a new idea. 

Well … new-ish. Electrically stimulating the nervous system had racked up promising successes since the mid-20th century. For example, the symptoms of Parkinson’s disease had been treated via deep brain stimulation, and intractable pain via spinal stimulation. However, these interventions could not be undertaken lightly; the implants needed to be placed in the spine or the brain, a daunting prospect to entertain. In other words, this idea would never be a money spinner.

[Image: the brain in right profile, showing the glossopharyngeal and vagus nerves. The vagus nerve runs from the brain through the body. Wellcome Collection]

What got GSK excited was recent evidence that health could be more broadly controlled, and by nerves that were easier to access. By the dawn of the 21st century it had become clear you could tap the nervous system in a way that carried fewer risks and more rewards. That was because of findings suggesting that the peripheral nervous system—essentially, everything but the brain and spine—had much wider influence than previously believed. 

The prevailing wisdom had long been that the peripheral nervous system had only one job: sensory awareness of the outside world. This information is ferried to the brain along many little neural tributaries that emerge from the extremities and organs, most of which converge into a single main avenue at the torso: the vagus nerve. 

Starting in the 1990s, research by Linda Watkins, a neuroscientist leading a team at the University of Colorado, Boulder, suggested that this main superhighway of the peripheral nervous system was not a one-way street after all. Instead it seemed to carry message traffic in both directions, not just into the brain but from the brain back into all those organs. Furthermore, it appeared that this comms link allowed the brain to exert some control over the immune system—for example, stoking a fever in response to an infection.

And unlike the brain or spinal cord, the vagus nerve is comparatively easy to access: its path to and from the brain stem runs close to the surface of the neck, along a big cable on either side. You could just pop an electrode on it—typically on the left branch—and get zapping.

Meddling with the flow of traffic up the vagus nerve in this way had successfully treated issues in the brain, specifically epilepsy and treatment-resistant depression (and electrical implants for those applications were approved by the FDA around the turn of the millennium). But the insights from Watkins’s team put the down direction in play. 

It was Kevin Tracey who joined all these dots, after which it did not take long for him to become the public face of research on vagus nerve stimulation. During the 2000s, he showed that electrically stimulating the nerve calmed inflammation in animals. This “inflammatory reflex,” as he came to call it, implied that the vagus nerve could act as a switch capable of turning off a wide range of diseases, essentially hacking the immune system. In 2007, while based at what is now called the Feinstein Institutes for Medical Research, in New York, he spun his insights off into a Boston startup called SetPoint Medical. Its aim was to develop devices to flip this switch and bring relief, starting with inflammatory bowel disease and rheumatoid arthritis.

By 2012, a coordinated relationship had developed between GSK, Tracey, and US government agencies. Tracey says that Famm and others contacted him “to help them on that Nature article.” A year later the electroceuticals road map was ready to be presented to the public.

The story the researchers told about the future was elegant and simple. It was illustrated by a tale Tracey recounted frequently on the publicity circuit, of a first-in-human case study SetPoint had coordinated at the University of Amsterdam’s Academic Medical Center. That team had implanted a vagus nerve stimulator in a man suffering from rheumatoid arthritis. The stimulation triggered his spleen to release a chemical called acetylcholine. This in turn told the cells in the spleen to switch off production of inflammatory molecules called cytokines. For this man, the approach worked well enough to let him resume his job, play with his kids, and even take up his old hobbies. In fact, his overenthusiastic resumption of his former activities resulted in a sports injury, as Tracey delighted in recounting for reporters and conferences.

Such case studies opened the money spigot. The combination of a wider range of disease targets and less risky surgical targets was an investor’s love language. Where deep brain stimulation and other invasive implants had been limited to rare, obscure, and catastrophic problems, this new interface with the body promised many more customers: the chronic diseases now on the table are much more prevalent, including not only rheumatoid arthritis but diabetes, asthma, irritable bowel syndrome, lupus, and many other autoimmune disorders. GSK launched an investment arm it dubbed Action Potential Venture Capital Limited, with $50 million in the coffers to invest in the technologies and companies that would turn the futuristic vision of electroceuticals into reality. Its inaugural investment was a $5 million stake in SetPoint. 

If you were superstitious, what happened next might have looked like an omen. The word “electroceutical” already belonged to someone else—a company called Ivivi Technologies had trademarked it in 2008. “I am fairly certain we sent them a letter soon after they started that campaign, to alert them of our trademark,” says Sean Hagberg, a cofounder and then chief science officer at the company. Today neither GSK nor SetPoint can officially call its tech “electroceuticals,” and both refer to the implants they are developing as “bioelectronic medicine.” However, this umbrella term encompasses a wide range of other interventions, some quite well established, including brain implants, spine implants, hypoglossal nerve stimulation for sleep apnea (which targets a motor nerve running through the vagus), and other peripheral-nervous-system implants, including those for people with severe gastric disorders.

Kevin J. Tracey
Kevin Tracey has been one of the leading proponents of using electrical stimulation to target inflammation in the body.
MIKE DENORA VIA WIKIPEDIA

The next problem appeared in short order: how to target the correct nerve. The vagus nerve has roughly 100,000 fibers packed tightly within it, says Kip Ludwig, who was then with the US National Institutes of Health and now co-directs the Wisconsin Institute for Translational Neuroengineering at the University of Wisconsin, Madison. These myriad fibers connect to many different organs, including the larynx and lower airways, and electrical fields are not precise enough to hit a single one without hitting many of its neighbors (as Ludwig puts it, “electric fields [are] really promiscuous”).

This explains why a wholesale zap of the entire bundle had long been associated with unpredictable “on-target effects” and unpleasant “off-target effects,” which is another way of saying it didn’t always work and could carry side effects that ranged from the irritating, like a chronic cough, to the life-altering, including headaches and a shortness of breath that is better described as air hunger. Singling out the fibers that led to the particular organ you were after was hard for another reason, too: the existing maps of the human peripheral nervous system were old and quite limited. Such a low-resolution road map wouldn’t be sufficient to get a signal from the highway all the way to a destination.

In 2014, to remedy this and generally advance the field of peripheral nerve stimulation, the NIH announced a research initiative known as SPARC—Stimulating Peripheral Activity to Relieve Conditions—with the aim of pouring $248 million into research on new ways to exploit the nervous system’s electrical pathways for medicine. “My job,” says Gene Civillico, who managed the program until 2021, “was to do a program related to electroceuticals that used the NIH policy options that were available to us to try to make something catalytic happen.” The idea was to make neural anatomical maps and sort out the consequences of following various paths. After the organs were mapped, Civillico says, the next step was to figure out which nerve circuit would stimulate them, and settle on an access point—“And the access point should be the vagus nerve, because that’s where the most interest is.” 

Two years later, as SPARC began to distribute its funds, companies moved forward with plans for the first generation of implants. GSK teamed up with Verily (formerly Google Life Sciences) on a $715 million research initiative they called Galvani Bioelectronics, with Famm at its helm as president. SetPoint, which had relocated to Valencia, California, moved to an expanded location, a campus that had once housed a secret Lockheed R&D facility.

How it’s going

Ten years after electroceuticals entered (and then quickly departed) the lexicon, the SPARC program has yielded important information about the electrical particulars of the peripheral nervous system. Its maps have illuminated nodes that are both surgically attractive and medically relevant. It has funded a global constellation of academic researchers. But its insights will be useful for the next generation of implants, not those in trials today.

Today’s implants, from SetPoint and Galvani, will be in the news later this year. Though SetPoint estimates that an extended study of its phase III clinical trial will conclude in 2027, the primary outcomes will be released this summer, says Ankit Shah, a marketing VP at SetPoint. And while Galvani’s trial will conclude in 2029, Famm says, the company is “coming to an exciting point” and will publish patient data later in 2024.

The results could be interpreted as a referendum on the two companies’ different approaches. Both devices treat rheumatoid arthritis, and both target the immune system via the peripheral nervous system, but that’s where the similarities end. SetPoint’s device uses a clamshell design that cuffs around the vagus nerve at the neck. It stimulates for just one minute, once per day. SetPoint representatives say they have never seen the sorts of side effects that have resulted from using such stimulators to treat epilepsy. And even if patients did experience the side effects other researchers have described, such as vomiting and headaches, they might be tolerable if they lasted only a minute.

But why not avoid the vagus nerve entirely? Galvani is using a more precise implant that targets the “end organ” of the spleen. If the vagus nerve can be considered the main highway of the peripheral nervous system, an end organ is essentially a particular organ’s “driveway.” Galvani’s target is the point where the splenic nerve (having split off from a system connected to the vagus highway) meets the spleen.  

To zero in on such a specific target, the company has sacrificed ease of access. Its implant, which is about the size of a house key, is laparoscopically injected into the body through the belly button. Famm says if this approach works for rheumatoid arthritis, then it will likely translate for all autoimmune disorders. Highlighting this clinical trial in 2022, he told Nature Reviews: “This is what makes the next 10 years exciting.”

the Galvani device system with phone and tablet UI
The Galvani device and system targets the splenic nerve.
GALVANI VIA BUSINESSWIRE

Perhaps more so for researchers than for patients, however. Even as Galvani and SetPoint prepare talking points, other SPARC-funded groups are still pursuing research questions that suggest the best technological interface with the immune system remains up for debate. At the moment, electroceuticals are in the spotlight, but they have a long way to go, says Vaughan Macefield, a neurophysiologist at Monash University in Australia, whose work is funded by a more recent $21 million SPARC grant: “It’s an elegant idea, [but] there are conflicting views.”

Macefield doesn’t think zapping the entire bundle is a good idea. Many researchers are working on ways to get more selective about which particular fibers of the vagus nerve they stimulate. Some are designing novel electrodes that will penetrate specific fibers rather than clamping around all of them. Others are trying to hit the vagus at deeper points in the abdomen. Indeed, some aren’t sure either electricity or an implant is a necessary ingredient of the “electroceutical.” Instead, they are pivoting from electrical stimulation to ultrasound.

The sheer range of these approaches makes it pretty clear that the electroceutical’s final form is still an open research question. Macefield says we still don’t know the nitty-gritty of how vagus nerve stimulation works.

However, Tracey thinks the variety of approaches being developed doesn’t contravene the merits of the basic idea. How tech companies will make this work in the clinic, he says, is a separate business and IP question: “Can you do it with focused ultrasound? Can you do it with a device implanted with abdominal surgery? Can you do it with a device implanted in the neck? Can you do it with a device implanted in the brain, even? All of these strategies are enabled by the idea of the inflammatory reflex.” Until clinical trial data is in, he says, there’s no point arguing about the best way to manipulate the mechanism—and if one approach fails to work, that is not a referendum on the validity of the inflammatory reflex.

After stepping down from SetPoint’s board to resume a purely consulting role in 2011, Tracey focused on his lab work at the Feinstein Institutes, which he directs, to deepen understanding of this pathway. The research there is wide-ranging. Several researchers under his remit are exploring a type of noninvasive, indirect manipulation called transcutaneous auricular vagus nerve stimulation, which stimulates the skin of the ear with a wearable device. Tracey says it’s a “malapropism” to call this approach vagus nerve stimulation. “It’s just an ear buzzer,” he says. It may stimulate a sensory branch of the vagus nerve, which may engage the inflammatory reflex. “But nobody knows,” he says. Nonetheless, several clinical trials are underway.

the setpoint medical device held in between the index and thumb of a gloved hand
SetPoint’s device is cuffed around the vagus nerve within the neck of a patient.
SETPOINT MEDICAL

“These things take time,” Tracey says. “It is extremely difficult to invent and develop a completely revolutionary new thing in medicine. In the history of medicine, anything that was truly new and revolutionary takes between 20 and 40 years from the time it’s invented to the time it’s widely adopted.” 

“As the discoverer of this pathway,” he says, “what I want to see is multiple therapies, helping millions of people.” This vision will hinge on bigger trials conducted over many more years. These tend to be about as hard for devices as they are for drugs, and many results that look compelling in early trials disappoint in later rounds. It will be possible, says Ludwig, “for them to pass a short-duration FDA trial yet still really not be a major improvement over the drug solutions.” Even after FDA approval, should it come, yet more studies will be needed to determine whether the implants are subject to the same issues that plague drugs, including habituation.

This vision of electroceuticals seems to have placed about a billion eggs into the single basket of the peripheral nervous system. In some ways, this makes sense. After all, the received wisdom has it that these nervous signals are the only way to exert electrical control of the other cells in the body. Those other trillions—the skin cells, the immune cells, the stem cells—are beyond the reach of direct electrical intervention. 

Except in the past 20 years it’s become abundantly clear that they are not.

Other cells speak electricity 

At the end of the 19th century, the German physiologist Max Verworn watched as a single-celled marine creature was drawn across the surface of his slide as if captured by a tractor beam. It had been, in a way: under the influence of an electric field, it squidged over to the cathode (the pole that attracts positive charge). Many other types of cells could be coaxed to obey the directional wiles of an electric field, a phenomenon known as galvanotaxis.

But this was too weird for biology, and charlatans already occupied too much of the space in the Venn diagram where electricity met medicine. (The association was formalized in 1910 in the Flexner Report, commissioned to improve the dismal state of American medical schools, which sent electrical medicine into exile along with the likes of homeopathy.) Everyone politely forgot about galvanotaxis until the 1970s and ’80s, when the peculiar behavior resurfaced. Yeast, fungi, bacteria, you name it—they all liked a cathode. “We were pulling every kind of cell along on petri dishes with an electric field,” says Ann Rajnicek of the University of Aberdeen in Scotland, who was among the first group of researchers who tried to discover the mechanism when scientific interest reawakened.

Galvanotaxis would have raised few eyebrows if the behavior had been confined to neurons. Those cells have evolved receptors that sense electric fields; they are a fundamental aspect of the mechanism the nervous system uses to send its information. Indeed, the reason neurons are so amenable to electrical manipulation in the first place is that electric implants hijack a relatively predictable mechanism. Zap a nerve or a muscle and you are forcing it to “speak” a language in which it is already fluent. 

Non-excitable cells such as those found in skin and bone don’t share these receptors, but it has become increasingly clear that they somehow still sense and respond to electric fields.

Why? We keep finding more reasons. Galvanotaxis, for example, is increasingly understood to play a crucial role in wound healing. In every species studied, injury to the skin produces an instant, internally generated electric field, and there’s overwhelming evidence that it guides patch-up cells to the center of the wound to start the rebuilding process. But galvanotaxis is not the only way these cells are led by electricity. During development, immature cells seem to sense the electric properties of their neighbors, which plays a role in their future identity—whether they become neurons, skin cells, fat cells, or bone cells. 

Galvanotaxis of paramecium. The arrow indicates the direction in which the paramecia are swimming.
Early experiments showed that paramecia on a wet plate will orient themselves in the direction of a cathode.
PUBLIC DOMAIN

Intriguing as this all was, no one had much luck turning such insights into medicine. Even attempts to go after the lowest-hanging fruit—by exploiting galvanotaxis for novel bandages—were for many years at best hit or miss. “When we’ve come upon wounds that are intractable, resistant, and will not heal, and we apply an electric field, only 50% or so of the cases actually show any effect,” says Anthony Guiseppi-Elie, a senior fellow with the American International Institute for Medical Sciences, Engineering, and Innovation. 

However, in the past few years, researchers have found ways to make electrical stimulation outside the nervous system less of a coin toss.

That’s down to steady progress in our understanding of how exactly non-neural cells pick up on electric fields, which has helped calm anxieties around the mysticism and the Frankenstein associations that have attended biological responses to electricity.  

The first big win came in 2006, with the identification of specific genes in skin cells that get turned on and off by electric fields. When skin is injured, the body’s native electric field orients cells toward the center of the wound, and the physiologist Min Zhao and his colleagues found important signaling pathways that are turned on by this field and mobilized to move cells toward this natural cathode. He also found associated receptors, and other scientists added to the catalogue of changes to genes and gene regulatory networks that get switched on and off under an electric field.

What has become clear since then is that there is no simple mechanism waiting at the end of the rainbow. “There isn’t one single master protein, as far as anybody knows, that regulates responses [to an electric field],” says Daniel Cohen, a bioengineer at Princeton University. “Every cell type has a different cocktail of stuff sticking out of it.”

But recent years have brought good news, in both experimental and applied science. First, the experimental platforms to investigate gene expression are in the middle of a transformation. One advance was unveiled last year by Sara Abasi, Guiseppi-Elie, and their colleagues at Texas A&M and the Houston Methodist Research Institute: their carefully designed research platform kept track of pertinent cellular gene expression profiles and how they change under electric fields—specifically, ones tuned to closely mimic what you find in biology. They found evidence for the activation of two proteins involved in tissue growth along with increased expression of a protein called CD-144, a specific version of what’s known as a cadherin. Cadherins are important physical structures that enable cells to stick to each other, acting like little handshakes between cells. They are crucial to the cells’ ability to act en masse instead of individually. 

The other big improvement is in tools that can reveal just how cells work together in the presence of electric fields. 

A different kind of electroceutical

A major limit on past experiments was that they tended to test the effects of electrical fields either on single cells or on whole animals. Neither is quite the right scale to offer useful insights, explains Cohen: measuring these dynamics in animals is too “messy,” but in single cells, the dynamics are too artificial to tell you much about how cells behave collectively as they heal a wound. That behavior emerges only at relevant scales, like bird flocks, schools of fish, or road traffic. “The math is identical to describe these types of collective dynamics,” he says.

In 2020, Cohen and his team came up with a solution: an experimental setup that strikes the balance between single cell (tells you next to nothing) and animal (tells you too many things at once). The device, called SCHEEPDOG, can reveal what is going on at the tissue level, which is the relevant scale for investigating wound healing. 

It uses two sets of electrodes—a bit the way you might twiddle the dials on an Etch A Sketch—placed in a closed bioreactor, which better approximates how electric fields operate in biology. With this setup, Cohen and his colleagues can precisely tune the electrical environment of tens of thousands of cells at a time to influence their behavior. 

In this time-lapse, SCHEEPDOG maneuvers epithelial cells with electric fields.
COHEN ET AL

Their subsequent “healing-on-a-chip” platform yielded an interesting discovery: skin cells’ response to an electric field depends on their maturity. The less mature, the easier they were to control.

The culprit? Those cadherins that Abasi and Guiseppi-Elie had also observed changing under electric fields. In mature cells, these little handshakes had become so strong that a competing electric field, instead of gently guiding the cells, caused them to rip apart. The immature skin cells followed the electric field’s directions without complaint.

After they found a way to dial down the cadherins with an antibody drug, all the cells synchronized. For Cohen, the lesson was that it’s more important to look at the system, and the collective dynamics that govern a behavior like wound healing, than at what is happening in any single cell. “This is really important because many clinical attempts at using electrical stimulation to accelerate wound healing have failed,” says Guiseppi-Elie, and it had never become clear why some worked and others didn’t.

Cohen’s team is now working to translate these findings into next-generation bioelectric plasters. They are far from alone, and the payoff is more than skin deep. A lot of work is going on, some of it open and some behind closed doors with patents being closely guarded, says Cohen.

At Stanford, the University of Arizona, and Northwestern, researchers are creating smart electric bandages that can be implanted under the skin. They can also monitor the state of the wound in real time, increasing the stimulation if healing is too slow. More challenging, says Rajnicek, are ways to interface with less accessible areas of the body. However, here too, new tools are revealing creative solutions.

Electric fields don’t have to directly change cells’ gene expression to be useful. There is another way their application can be turned to medical benefit. Electric fields evoke reactive oxygen species (ROS) in biological cells. Normally, these charged molecules are a by-product of a cell’s everyday metabolic activities. If you induce them purposefully using an external DC current, however, they can be hijacked to do your bidding. 

Starting in 2020, the Swiss bioengineer Martin Fussenegger and an international team of collaborators began to publish investigations into using this mechanism to power gene expression. He and his team engineered human kidney cells to be hypersensitive to ROS: when DC electrodes generated quantities too minute for normal cells to sense, the engineered cells picked them up just fine.

Using this mechanism, in 2023 they were able to create a tiny, wearable insulin factory. The designer kidney cells were created with a synthetic promoter—an engineered sequence of DNA that can drive expression of a target gene—that reacted to those faint induced ROS by activating a cascade of genetic changes that opened a tap for insulin production on demand.

Then they packaged this electrogenetic contraption into a wearable device that worked for a month in a living mouse, which had been engineered to be diabetic (Fussenegger says that “others have shown that implanted designer cells can generally be active for over a year”). The designer cells in the wearable are kept alive by algae gelatine but are fed by the mouse’s own vascular system, permitting the exchange of nutrients and protein. The cells can’t get out, but the insulin they secrete can, seeping straight into the mouse’s bloodstream. Ten seconds a day of electrical stimulation delivered via needles connected to three AAA batteries was enough to make the implant perform like a pancreas, returning the mouse’s blood sugar to nondiabetic levels. Given how easy it would be to generalize the mechanism, Fussenegger says, there’s no reason insulin should be the only drug such a device can generate. He is quick to stress that this wearable device is very much in the proof-of-concept stage, but others outside the team are excited about its potential. It could provide a more direct electrical alternative to the solution electroceuticals promised for diabetes. 

Escaping neurochauvinism

Before the concerted push around branding electroceuticals, efforts to tap the peripheral nervous system were fragmented and did not share much data. Today, thanks to SPARC, which is winding down, data-sharing resources have been centralized. And money, both direct and indirect, for the electroceuticals project has been lavish. Therapies—especially vagus nerve stimulation—have been the subject of “a steady increase in funding and interest,” says Imran Eba, a partner at GSK’s bioelectronics investment arm Action Potential Venture Capital. Eba estimates that the initial GSK seed of $50 million at Action Potential has grown to about $200 million in assets under management. 

Whether you call it bioelectronic medicine or electroceuticals, some researchers would like to see the definition take on a broader remit. “It’s been an extremely neurocentric approach,” says Daniel Cohen. 

Neurostimulation has not yet shown success against cancer. Other forms of electrical stimulation, however, have proved surprisingly effective. In one study on glioblastoma, tumor-treating fields offered an electrical version of chemotherapy: an electric field blasts a brain tumor, preferentially killing only cells whose electrical identity marks them as dividing (which cancer cells do, pathologically—but neurons, being fully differentiated, do not). A study recently published in The Lancet Oncology suggests that these fields could also work in lung cancer to boost existing drugs and extend survival. 

All of this points to more sophisticated interventions than a zap to a nerve. “The complex things that we need to do in medicine will be about communicating with the collective decision-making and problem-solving of the cells,” says Michael Levin. He has been working to repurpose already-approved drugs so they can be used to target the electrical communication between cells. In a funny twist, he has taken to calling these drugs electroceuticals, which has ruffled some feathers. But he would certainly find support from researchers like Cohen. “I would describe electroceuticals much more broadly as anything that manipulates cellular electrophysiology,” Cohen says.

Even interventions with the nervous system could be helped by expanding our understanding of the ways nerve cells react to electricity beyond action potentials. Kim Gokoffski, a professor of clinical ophthalmology at the University of Southern California, is working with galvanotaxis as a possible means of repairing damage to the optic nerve. In prior experiments that involve regrowing axons—the cables that carry messages out of neurons—these new nerve fibers tend to miss the target they’re meant to rejoin. Existing approaches “are all pushing the gas pedal,” she says, “but no one is controlling the steering wheel.” So her group uses electric fields to guide the regenerating axons into position. In rodent trials, this has worked well enough to partially restore sight.

And yet, Cohen says, “there’s massive social stigma around this that is significantly hampering the entire field.” That stigma has dramatically shaped research direction and funding. For Gokoffski, it has led to difficulties with publishing. She also recounts hearing a senior NIH official refer to her lab’s work on reconnecting optic nerves as “New Age–y.” It was a nasty surprise: “New Age–y has a very bad connotation.” 

However, there are signs of more support for work outside the neurocentric model of bioelectric medicine. The US Defense Department funds projects in electrical wound healing (including Gokoffski’s). Action Potential’s original remit—confined to targeting peripheral nerves with electrical stimulation—has expanded. “We have a broader approach now, where energy (in any form, be it electric, electromagnetic, or acoustic) can be directed to regulate neuronal or other cellular activities in the body,” Eba wrote in an email. Three of the companies now in their portfolio focus on areas outside neurostimulation. “While we don’t have any investments targeting wound healing or regenerative medicine specifically, there is no explicit exclusion here for us,” he says.

This suggests that the “social stigma” Cohen described around electrical medicine outside the nervous system is slowly beginning to abate. But if such projects are to really flourish, the field needs to be supported, not just tolerated—perhaps with its own road map and dedicated NIH program. Whether or not bioelectric medicine ends up following anything like the original electroceuticals road map, SPARC ensured a flourishing research community, one that is in hot pursuit of promising alternatives. 

The use of electricity outside the nervous system needs a SPARC program of its own. But if history is any guide, first it needs a catchy name. It can’t be “electroceuticals.” And the researchers should definitely check the trademark listings before rolling it out.

Sally Adee is a science and technology writer and the author of We Are Electric: Inside the 200-Year Hunt for Our Body’s Bioelectric Code, and What the Future Holds.

This article has been updated to correct the name of the Feinstein Institutes for Medical Research.

Industry- and AI-focused cloud transformation

For years, cloud technology has demonstrated its ability to cut costs, improve efficiencies, and boost productivity. But today’s organizations are looking to cloud for more than simply operational gains. Faced with an ever-evolving regulatory landscape, a complex business environment, and rapid technological change, organizations are increasingly recognizing cloud’s potential to catalyze business transformation.

Cloud can transform business by making it ready for AI and other emerging technologies. The global consultancy McKinsey projects that a staggering $3 trillion in value could be created by cloud transformations by 2030. Key value drivers range from innovation-driven growth to accelerated product development.

“As applications move to the cloud, more and more opportunities are getting unlocked,” says Vinod Mamtani, vice president and general manager of generative AI services for Oracle Cloud Infrastructure. “For example, the application of AI and generative AI are transforming businesses in deep ways.”

No longer simply a software and infrastructure upgrade, cloud is now a powerful technology capable of accelerating innovation, improving agility, and supporting emerging tools. In order to capitalize on cloud’s competitive advantages, however, businesses must ask for more from their cloud transformations.

Every business operates in its own context, and so a strong cloud solution should have built-in support for industry-specific best practices. And because emerging technology increasingly drives all businesses, an effective cloud platform must be ready for AI and the immense impacts it will have on the way organizations operate and employees work.

An industry-specific approach

The imperative for cloud transformation is evident: In today’s fast-paced business environment, cloud can help organizations enhance innovation, scalability, agility, and speed while simultaneously alleviating the burden on time-strapped IT teams. Yet most organizations have not fully made the leap to cloud. McKinsey, for example, reports a broad mismatch between leading companies’ cloud aspirations and realities—though nearly all organizations say they aspire to run the majority of their applications in the cloud within the decade, the average organization has currently relocated only 15–20% of them.

Cloud solutions that take an industry-specific approach can help companies meet their business needs more easily, making cloud adoption faster, smoother, and more immediately useful. “Cloud requirements can vary significantly across vertical industries due to differences in compliance requirements, data sensitivity, scalability, and specific business objectives,” says Deviprasad Rambhatla, senior vice president and sector head of retail services and transportation at Wipro.

Health-care organizations, for instance, need to manage sensitive patient data while complying with strict regulations such as HIPAA. As a result, cloud solutions for that industry must ensure features such as high availability, disaster recovery capabilities, and continuous access to critical patient information.

Retailers, on the other hand, are more likely to experience seasonal business fluctuations, requiring cloud solutions that allow for greater flexibility. “Cloud solutions allow retailers to scale infrastructure on an up-and-down basis,” says Rambhatla. “Moreover, they’re able to do it on demand, ensuring optimal performance and cost efficiency.”

Cloud-based applications can also be tailored to meet the precise requirements of a particular industry. For retailers, these might include analytics tools that ingest vast volumes of data and generate insights that help the business better understand consumer behavior and anticipate market trends.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

AI-readiness for C-suite leaders

Generative AI, like predictive AI before it, has rightly seized the attention of business executives. The technology has the potential to add trillions of dollars to annual global economic activity, and its adoption for business applications is expected to improve the top or bottom lines—or both—at many organizations.

While generative AI offers an impressive and powerful new set of capabilities, its business value is not a given. While some powerful foundational models are open to public use, these do not serve as a differentiator for those looking to get ahead of the competition and unlock AI’s full potential. To gain those advantages, organizations must look to enhance AI models with their own data to create unique business insights and opportunities.

Preparing an organization’s data for AI, however, presents a new set of challenges and opportunities. This MIT Technology Review Insights survey report investigates whether companies’ data foundations are ready to garner benefits from generative AI, as well as the challenges of building the necessary data infrastructure for this technology. In doing so, it draws on insights from a survey of 300 C-suite executives and senior technology leaders, as well as on in-depth interviews with four leading experts.

Its key findings include the following:

Data integration is the leading priority for AI readiness. In our survey, 82% of C-suite and other senior executives agree that “scaling AI or generative AI use cases to create business value is a top priority for our organization.” The number-one challenge in achieving that AI readiness, survey respondents say, is data integration and pipelines (45%). Asked about challenging aspects of data integration, respondents named four: managing data volume, moving data from on-premises to the cloud, enabling real-time access, and managing changes to data.

Executives are laser-focused on data management challenges—and lasting solutions. Among survey respondents, 83% say that their “organization has identified numerous sources of data that we must bring together in order to enable our AI initiatives.” Though data-dependent technologies of recent decades drove data integration and aggregation programs, these were typically tailored to specific use cases. Now, however, companies are looking for something more scalable and use-case agnostic: 82% of respondents are prioritizing solutions “that will continue to work in the future, regardless of other changes to our data strategy and partners.”

Data governance and security is a top concern for regulated sectors. Data governance and security concerns are the second most common data readiness challenge (cited by 44% of respondents). Respondents from highly regulated sectors were two to three times more likely to cite data governance and security as a concern, and chief data officers (CDOs) say this is a challenge at twice the rate of their C-suite peers. And our experts agree: Data governance and security should be addressed from the beginning of any AI strategy to ensure data is used and accessed properly.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Download the full report.

The Download: the minerals powering our economy, and Chinese companies’ identity crisis

29 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Quartz, cobalt, and the waste we leave behind

It is easy to convince ourselves that we now live in a dematerialized ethereal world, ruled by digital startups, artificial intelligence, and financial services.

Yet there is little evidence that we have decoupled our economy from its churning hunger for resources. We are still reliant on the products of geological processes like coal and quartz, a mineral that’s a rich source of the silicon used to build computer chips, to power our world.

Three recent books aim to reconnect readers with the physical reality that underpins the global economy. Each one fills in dark secrets about the places, processes, and lived realities that make the economy tick, and reveals just how tragic a toll the materials we rely on take on humans and the environment. Read the full story.

—Matthew Ponsford

The story is from the current print issue of MIT Technology Review, which is on the theme of Build. If you don’t already, subscribe now to receive future copies once they land.

If you’re interested in the minerals powering our economy, why not take a look at my colleague James Temple’s pieces about how a US town is being torn apart as communities clash over plans to open a nickel mine—and how that mine could unlock billions in EV subsidies.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Blacklisted Chinese firms are rebranding as American
In a bid to swerve the Biden administration’s crackdown driven by national security concerns. (WSJ $)
+ The US has sanctioned three Chinese nationals over their links to a botnet. (Ars Technica)

2 More than half of cars sold last year were SUVs
The large vehicles are major contributors to the climate crisis. (The Guardian)
+ Three frequently asked questions about EVs, answered. (MIT Technology Review)

3 A record number of electrodes have been placed on a human brain
The more electrodes, the higher the resolution for mapping brain activity. (Ars Technica)
+ Beyond Neuralink: Meet the other companies developing brain-computer interfaces. (MIT Technology Review)

4 A former FTX executive has been sentenced to 7.5 years in prison
Ryan Salame had been hoping for a maximum of 18 months. (CoinDesk)

5 Food delivery apps are hemorrhaging money 
The four major platforms are locked in intense competition for diners. (FT $)

6 Saudi Arabia is going all in on building solar farms
It’s looking beyond its oil empire to invest in other promising forms of energy. (NYT $)
+ The world is finally spending more on solar than oil production. (MIT Technology Review)

7 Clouds are a climate mystery ☁
Experts are trying to integrate them into climate models—but it’s tough work. (The Atlantic $)
+ ‘Bog physics’ could work out how much carbon is stored in peat bogs. (Quanta Magazine)

8 An 11-year-old crypto mystery has finally been solved
To crack into a $3 million fortune. (Wired $)

9 AI models are pretty good at spotting bugs in software 🪳
The problem is, they’re also prone to making up new flaws entirely. (New Scientist $)
+ How AI assistants are already changing the way code gets made. (MIT Technology Review)

10 Beware promises made by airmiles influencers ✈
While some of their advice is sound, it pays to play the long game. (WP $)

Quote of the day

“We learned about ChatGPT on Twitter.”

—Helen Toner, a former OpenAI board member, explains how the company’s board was not informed in advance about the release of its blockbuster AI system in November 2022, The Verge reports.

The big story

Generative AI is changing everything. But what’s left when the hype is gone?

December 2022

It was clear that OpenAI was on to something. In late 2021, a small team of researchers was playing around with a new version of OpenAI’s text-to-image model, DALL-E, an AI that converts short written descriptions into pictures: a fox painted by Van Gogh, perhaps, or a corgi made of pizza. Now they just had to figure out what to do with it.

Nobody could have predicted just how big a splash this product was going to make. The rapid release of other generative models has inspired hundreds of newspaper headlines and magazine covers, filled social media with memes, kicked a hype machine into overdrive—and set off an intense backlash from creators.

The exciting truth is, we don’t really know what’s coming next. While creative industries will feel the impact first, this tech will give creative superpowers to everybody. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These baby tiger cubs are just too cute.
+ Meet me at El Califa de León, the world’s first taquería to receive a Michelin star.
+ This feather sounds like a bargain, frankly. 🪶
+ Did you know that Sean Connery was only 12 years older than Harrison Ford when he played his father in Indiana Jones and the Last Crusade?

The Download: autocorrect’s surprising origins, and how to pre-bunk electoral misinformation

28 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How the quest to type Chinese on a QWERTY keyboard created autocomplete

—This is an excerpt from The Chinese Computer: A Global History of the Information Age by Thomas S. Mullaney, published on May 28 by The MIT Press. It has been lightly edited.

When a young Chinese man sat down at his QWERTY keyboard in 2013 and rattled off an enigmatic string of letters and numbers, his forty-four keystrokes marked the first steps in a process known as “input” or shuru.

Shuru is the act of getting Chinese characters to appear on a computer monitor or other digital device using a QWERTY keyboard or trackpad.

The young man, Huang Zhenyu, was one of around 60 contestants in the 2013 National Chinese Characters Typing Competition. His keyboard did not permit him to enter Chinese characters directly, however, and so he entered the quasi-gibberish string of letters and numbers instead: ymiw2klt4pwyy1wdy6…

But Huang’s prizewinning performance wasn’t solely noteworthy for his impressive typing speed—one of the fastest ever recorded. It was also premised on the same kind of “additional steps” as the first Chinese computer in history, the very steps that led to the discovery of autocompletion. Read the rest of the excerpt here.

If you’re interested in tech in China, why not check out some of our China reporter Zeyi Yang’s recent reporting (and subscribe to his weekly newsletter China Report!)

+ GPT-4o’s Chinese token-training data is polluted by spam and porn websites. The problem, which is likely due to inadequate data cleaning, could lead to hallucinations, poor performance, and misuse. Read the full story.

+ Why Hong Kong is targeting Western Big Tech companies in its ban of a popular protest song.

+ Deepfakes of your dead loved ones are a booming Chinese business. People are seeking help from AI-generated avatars to process their grief after a family member passes away. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Election officials want to pre-bunk harmful online campaigns
It’s a bid to prevent political hoaxes from ever getting off the ground. (WP $)
+ Fake news verification tools are failing in India. (Rest of World)
+ Three technology trends shaping 2024’s elections. (MIT Technology Review)

2 OpenAI has started training the successor to GPT-4
Just weeks after it revealed an updated version, GPT-4o. (NYT $)
+ OpenAI’s new GPT-4o lets people interact using voice or video in the same model. (MIT Technology Review)

3 China is bolstering its national semiconductor fund
To the tune of $48 billion. (WSJ $)
+ It’s the third round of the country’s native chip funding program. (FT $)
+ What’s next in chips. (MIT Technology Review)

4 Nuclear plants are extremely expensive to build
The US needs to learn how to cut costs without cutting corners. (The Atlantic $)
+ How to reopen a nuclear power plant. (MIT Technology Review)

5 Laser systems could be the best line of defense against military drones
The Pentagon is investing in BlueHalo’s AI-powered laser technology. (Insider $)
+ The US military is also pumping money into Palmer Luckey’s Anduril. (Wired $)
+ Inside the messy ethics of making war with machines. (MIT Technology Review)

6 Klarna’s marketing campaigns are the product of generative AI
The fintech firm claims the technology will save it $10 million a year. (Reuters)

7 The US has an EV charging problem
Would-be car buyers are still nervous about investing in EVs. (Wired $)
+ Micro-EVs could offer one solution. (Ars Technica)
+ Toyota has unveiled new engines compatible with alternative fuels. (Reuters)

8 Good luck betting on anything that’s not sports in the US
The outcome of a major election, for example. (Vox)
+ How mobile money supercharged Kenya’s sports betting addiction. (MIT Technology Review)

9 Perfectionist parents are Facetuning their children
It goes without saying: don’t do this. (NY Mag $)

10 Why a movie version of The Sims never got off the ground
The beloved video game would make for a seriously weird cinema spectacle. (The Guardian)

Quote of the day

“Once materialism starts spreading, it can have a bad influence on teenagers.”

—Chinese state media Beijing News explains why China has started cracking down on luxurious influencers known for their ostentatious displays of wealth, the Financial Times reports.

The big story

Recapturing early internet whimsy with HTML

December 2023

Websites weren’t always slick digital experiences. 

There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code. 

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Enjoy this potted history of why we say okay, and where it came from.
+ There is something very funny about Elton John calling The Lion King’s Timon and Pumbaa “the rat and the pig.”
+ The best of British press photography is always worth a peruse.
+ I had no idea that Sisqo’s Thong Song used an Eleanor Rigby sample.

How the quest to type Chinese on a QWERTY keyboard created autocomplete

27 May 2024 at 05:00

This is an excerpt from The Chinese Computer: A Global History of the Information Age by Thomas S. Mullaney, published on May 28 by The MIT Press. It has been lightly edited.

ymiw2

klt4

pwyy1

wdy6

o1

dfb2

wdv2

fypw3

uet5

dm2

dlu1 …

A young Chinese man sat down at his QWERTY keyboard and rattled off an enigmatic string of letters and numbers.

Was it code? Child’s play? Confusion? It was Chinese.

The beginning of Chinese, at least. These 44 keystrokes marked the first steps in a process known as “input” or shuru: the act of getting Chinese characters to appear on a computer monitor or other digital device using a QWERTY keyboard or trackpad.

Stills taken from a 2013 Chinese input competition screencast.
COURTESY OF MIT PRESS

Across all computational and digital media, Chinese text entry relies on software programs known as “input method editors”—better known as “IMEs” or simply “input methods” (shurufa). IMEs are a form of “middleware,” so named because they operate in between the hardware of the user’s device and the software of its program or application. Whether a person is composing a Chinese document in Microsoft Word, searching the web, sending text messages, or otherwise, an IME is always at work, intercepting all of the user’s keystrokes and trying to figure out which Chinese characters the user wants to produce. Input, simply put, is the way ymiw2klt4pwyy … becomes a string of Chinese characters.

IMEs are restless creatures. From the moment a key is depressed or a stroke swiped, they set off on a dynamic, iterative process, snatching up user-inputted data and searching computer memory for potential Chinese character matches. The most popular IMEs these days are based on Chinese phonetics—that is, they use the letters of the Latin alphabet to describe the sound of Chinese characters, with mainland Chinese operators using the country’s official Romanization system, Hanyu pinyin. 
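The intercept-match-confirm loop that an IME runs on every keystroke can be sketched in a few lines. This is a minimal illustration, not a real IME: the tiny pinyin lexicon and the candidate rankings below are invented for the example.

```python
# Minimal sketch of a pinyin IME's candidate-lookup step.
# The lexicon and frequency rankings are illustrative assumptions.

CANDIDATES = {
    "ma": ["妈", "马", "吗", "麻"],  # ranked by assumed frequency
    "shu": ["书", "树", "输"],
    "ru": ["入", "如"],
}

def lookup(keystrokes: str) -> list[str]:
    """Return ranked Chinese character candidates for a pinyin string."""
    return CANDIDATES.get(keystrokes, [])

def confirm(keystrokes: str, choice: int) -> str:
    """The user confirms one candidate from the pop-up menu."""
    return lookup(keystrokes)[choice]
```

A production IME would also score candidates by sentence context and learn from the user’s past confirmations; this sketch shows only the retrieval-and-confirmation shape of the loop.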

Example of Chinese Input Method Editor pop-up menu (抄袭 / “plagiarism”)
COURTESY OF MIT PRESS

This young man was Huang Zhenyu (also known by his nom de guerre, Yu Shi). He was one of around 60 contestants that day, each wearing a bright red shoulder sash—as in a ticker-tape parade of old, or a beauty pageant. “Love Chinese Characters” (Ai Hanzi) was emblazoned in vivid golden yellow on a poster at the front of the hall. The contestants’ task was to transcribe a speech by outgoing Chinese president Hu Jintao, as quickly and as accurately as they could. “Hold High the Great Banner of Socialism with Chinese Characteristics,” it began, or in the original:  高举中国特色社会主义伟大旗帜为夺取全面建设小康社会新胜利而奋斗. Huang’s QWERTY keyboard did not permit him to enter these characters directly, however, and so he entered the quasi-gibberish string of letters and numbers instead: ymiw2klt4pwyy1wdy6 …

With these four dozen keystrokes, Huang was well on his way, not only to winning the 2013 National Chinese Characters Typing Competition, but also to clocking one of the fastest typing speeds ever recorded, anywhere in the world.

ymiw2klt4pwyy1wdy6 … is not the same as 高举中国特色社会主义 …  The keys that Huang actually depressed on his QWERTY keyboard—his “primary transcript,” as we could call it—were completely different from the symbols that ultimately appeared on his computer screen, namely the “secondary transcript” of Hu Jintao’s speech. This is true for every one of the world’s billion-plus Sinophone computer users. In Chinese computing, what you type is never what you get.

For readers accustomed to English-language word processing and computing, this should come as a surprise. For example, were you to compare the paragraph you’re reading right now against a key log showing exactly which buttons I depressed to produce it, the exercise would be unenlightening (to put it mildly). “F-o-r-_-r-e-a-d-e-r-s-_-a-c-c-u-s-t-o-m-e-d-_-t-o-_-E-n-g-l-i-s-h … ” it would read (forgiving any typos or edits). In English-language typewriting and computer input, a typist’s primary and secondary transcripts are, in principle, identical. The symbols on the keys and the symbols on the screen are the same.

Not so for Chinese computing. When inputting Chinese, the symbols a person sees on a QWERTY keyboard are always different from the symbols that ultimately appear on the monitor or on paper. Every single computer and new media user in the Sinophone world—no matter if they are blazing-fast or molasses-slow—uses their device in exactly the same way as Huang Zhenyu, constantly engaged in this iterative process of criteria-candidacy-confirmation, using one IME or another. Not some Chinese-speaking users, mind you, but all. This is the first and most basic feature of Chinese computing: Chinese human-computer interaction (HCI) requires users to operate entirely in code all the time.

If Huang Zhenyu’s mastery of a complex alphanumeric code weren’t impressive enough, consider the staggering speed of his performance. He transcribed the first 31 Chinese characters of Hu Jintao’s speech in roughly five seconds, for an extrapolated speed of 372 Chinese characters per minute. By the close of the grueling 20-minute contest, one extending over thousands of characters, he crossed the finish line with an almost unbelievable speed of 221.9 characters per minute.

That’s 3.7 Chinese characters every second.
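The quoted speeds follow directly from the article’s own figures:

```python
# Checking the speed figures quoted above: 31 characters in roughly
# 5 seconds, and a contest average of 221.9 characters per minute.
chars, seconds = 31, 5
opening_cpm = chars / seconds * 60   # extrapolated characters per minute
overall_cps = 221.9 / 60             # contest average, characters per second

print(round(opening_cpm))      # 372
print(round(overall_cps, 1))   # 3.7
```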

In the context of English, Huang’s opening five seconds would have been the equivalent of around 375 English words per minute, with his overall competition speed easily surpassing 200 WPM—a blistering pace unmatched by anyone in the Anglophone world (using QWERTY, at least). In 1985, Barbara Blackburn achieved a Guinness Book of World Records–verified performance of 170 English words per minute (on a typewriter, no less). Speed demon Sean Wrona later bested Blackburn’s score with a performance of 174 WPM (on a computer keyboard, it should be noted). As impressive as these milestones are, the fact remains: had Huang’s performance taken place in the Anglophone world, it would be his name enshrined in the Guinness Book of World Records as the new benchmark to beat.

Huang’s speed carried special historical significance as well.

For a person living between the years 1850 and 1950—the period examined in the book The Chinese Typewriter—the idea of producing Chinese by mechanical means at a rate of over 200 characters per minute would have been virtually unimaginable. Throughout the history of Chinese telegraphy, dating back to the 1870s, operators maxed out at perhaps a few dozen characters per minute. In the heyday of mechanical Chinese typewriting, from the 1920s to the 1970s, the fastest speeds on record were just shy of 80 characters per minute (with the majority of typists operating at far slower rates). When it came to modern information technologies, that is to say, Chinese was consistently one of the slowest writing systems in the world.

What changed? How did a script so long disparaged as cumbersome and hopelessly complex suddenly rival—exceed, even—computational typing speeds clocked in other parts of the world? Even if we accept that Chinese computer users are somehow able to engage in “real time” coding, shouldn’t Chinese IMEs result in a lower overall “ceiling” for Chinese text processing as compared with English? Chinese computer users have to jump through so many more hoops, after all, over the course of a cumbersome, multistep process: the IME has to intercept a user’s keystrokes, search in memory for a match, present potential candidates, and wait for the user’s confirmation. Meanwhile, English-language computer users need only depress whichever key they wish to see printed on screen. What could be simpler than the “immediacy” of “Q equals Q,” “W equals W,” and so on?

Tom Mullaney
COURTESY OF TOM MULLANEY

To unravel this seeming paradox, we will examine the first Chinese computer ever designed: the Sinotype, also known as the Ideographic Composing Machine. Debuted in 1959 by MIT professor Samuel Hawks Caldwell and the Graphic Arts Research Foundation, this machine featured a QWERTY keyboard, which the operator used to input—not the phonetic values of Chinese characters—but the brushstrokes out of which Chinese characters are composed. The objective of Sinotype was not to “build up” Chinese characters on the page, though, the way a user builds up English words through the successive addition of letters. Instead, each stroke “spelling” served as an electronic address that Sinotype’s logical circuit used to retrieve a Chinese character from memory. In other words, the first Chinese computer in history was premised on the same kind of “additional steps” as seen in Huang Zhenyu’s prizewinning 2013 performance.

During Caldwell’s research, he discovered unexpected benefits of all these additional steps—benefits entirely unheard of in the context of Anglophone human-machine interaction at that time. The Sinotype, he found, needed far fewer keystrokes to find a Chinese character in memory than to compose one through conventional means of inscription. By way of analogy, to “spell” a nine-letter word like “crocodile” (c-r-o-c-o-d-i-l-e) took far more time than to retrieve that same word from memory (“c-r-o-c-o-d” would be enough for a computer to make an unambiguous match, after all, given the absence of other words with similar or identical spellings). Caldwell called his discovery “minimum spelling,” making it a core part of the first Chinese computer ever built. 
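Caldwell’s “minimum spelling” amounts to prefix matching against a lexicon: accept keystrokes only until the prefix typed so far identifies a single entry in memory. A minimal sketch, with an invented toy word list standing in for the machine’s memory:

```python
# A sketch of Caldwell's "minimum spelling": stop accepting keystrokes
# as soon as the prefix matches exactly one entry in the lexicon.
# The toy word list is an illustrative assumption.

WORDS = ["crocodile", "crocus", "crochet"]

def minimum_spelling(word: str, lexicon: list[str]) -> str:
    """Shortest prefix of `word` that matches only `word` in the lexicon."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if [w for w in lexicon if w.startswith(prefix)] == [word]:
            return prefix
    return word
```

With this toy lexicon, five keystrokes (“croco”) suffice to retrieve the nine-letter “crocodile”—the same economy Caldwell exploited for retrieving Chinese characters by stroke spelling. (The exact cutoff depends on the lexicon; the “c-r-o-c-o-d” figure above assumes a different word list.)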

Today, we know this technique by a different name: “autocompletion,” a strategy of human-computer interaction in which additional layers of mediation result in faster textual input than the “unmediated” act of typing. Decades before its rediscovery in the Anglophone world, then, autocompletion was first invented in the arena of Chinese computing.

The Download: head transplants, and filtering sounds with AI

24 May 2024 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

That viral video showing a head transplant is a fake. But it might be real someday. 

An animated video posted this week has a voice-over that sounds like a late-night TV ad, but the pitch is straight out of the far future. The arms of an octopus-like robotic surgeon swirl, swiftly removing the head of a dying man and placing it onto a young, healthy body. 

This is BrainBridge, the animated video claims—“the world’s first revolutionary concept for a head transplant machine, which uses state-of-the-art robotics and artificial intelligence to conduct complete head and face transplantation.”

BrainBridge is not a real company—it’s not incorporated anywhere. Yet it’s not merely a provocative work of art. This video is better understood as the first public billboard for a hugely controversial scheme to defeat death that’s recently been gaining attention among some life-extension proponents and entrepreneurs. Read the full story.

—Antonio Regalado

Noise-canceling headphones use AI to let a single voice through

Modern life is noisy. If you don’t like it, noise-canceling headphones can reduce the sounds in your environment. But they muffle sounds indiscriminately, so you can easily end up missing something you actually want to hear.

A new prototype AI system for such headphones aims to solve this. Called Target Speech Hearing, the system gives users the ability to select a person whose voice will remain audible even when all other sounds are canceled out.

Although the technology is currently a proof of concept, its creators say they are in talks to embed it in popular brands of noise-canceling earbuds and are also working to make it available for hearing aids. Read the full story.

—Rhiannon Williams

Splashy breakthroughs are exciting, but people with spinal cord injuries need more

—Cassandra Willyard

This week, I wrote about an external stimulator that delivers electrical pulses to the spine to help improve hand and arm function in people who are paralyzed. This isn’t a cure. In many cases the gains were relatively modest.

The study didn’t garner as much media attention as previous, much smaller studies that focused on helping people with paralysis walk. Tech that allows people to type slightly faster or put their hair in a ponytail unaided just doesn’t have the same allure.

For the people who have spinal cord injuries, however, incremental gains can have a huge impact on quality of life. So who does this tech really serve? Read the full story.

This story is from The Checkup, our weekly health and biotech newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google’s AI search is advising people to put glue on pizza 
These tools clearly aren’t ready to provide billions of users with accurate answers. (The Verge)
+ That $60 million Google paid Reddit for its data sure looks questionable. (404 Media)
+ But who’s legally responsible here? (Vox)
+ Why you shouldn’t trust AI search engines. (MIT Technology Review)

2 Russia is increasingly interfering with Ukraine’s Starlink service
It’s disrupting Ukraine’s ability to collect intelligence and conduct drone attacks. (NYT $)

3 Taiwan is prepared to shut down its chipmaking machines if China invades
China is currently circling the island on military exercises. (Bloomberg $)
+ Meanwhile, China’s PC makers are on the up. (FT $)
+ What’s next in chips. (MIT Technology Review)

4 X is planning on hiding users’ likes
Elon Musk wants to encourage users to like ‘edgy’ content without fear. (Insider $)

5 The scammer who cloned Joe Biden’s voice could be fined $6 million
Regulators want to make it clear that political AI manipulation will not be tolerated. (TechCrunch)
+ He’s due to appear in court next month. (Reuters)
+ Meta says AI-generated election content is not happening at a “systemic level.” (MIT Technology Review)

6 NSO Group’s former CEO is staging a comeback
Shalev Hulio resigned after the US blacklisted the company. (The Intercept)

7 Rivers in Alaska are running orange
It’s highly likely that climate change is to blame. (WP $)
+ It’s looking unlikely that we’re going to limit global warming to 1.5°C. (New Scientist $)

8 We’re learning more about one of the world’s rarest elements
Promethium is extremely radioactive, and extremely unstable. (New Scientist $)

9 Children can’t really become music lovers without a phone
Without cassette players or CDs, streaming seems the only option. (The Guardian)

10 AI art will always look cheap 🖼
It’s no substitute for the real deal. (Vox)
+ This artist is dominating AI-generated art. And he’s not happy about it. (MIT Technology Review)

Quote of the day

“Naming space as a warfighting domain was kind of forbidden, but that’s changed.”

—Air Force General Charles “CQ” Brown explains how the US is preparing to fight adversaries in space, Ars Technica reports.

The big story

How Facebook got addicted to spreading misinformation 

March 2021

When the Cambridge Analytica scandal broke in March 2018, it would kick off Facebook’s largest publicity crisis to date. It compounded fears that the algorithms that determine what people see were amplifying fake news and hate speech, and prompted the company to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms.

Joaquin Quiñonero Candela was a natural pick to head it up. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful. However, his hands were tied, and the drive to make money came first. Read the full story.

—Karen Hao

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Zillow is the wild west of home listings. This Twitter (sorry, X) account collates some of the best.
+ COUSIN! We love you, Ebon Moss-Bachrach! 🐻
+ Gimme all the potato salad.
+ Much sad: rest in power Kabosu, the beautiful shiba inu whose tentative face launched a thousand memes.

Splashy breakthroughs are exciting, but people with spinal cord injuries need more

24 May 2024 at 06:00

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

This week, I wrote about an external stimulator that delivers electrical pulses to the spine to help improve hand and arm function in people who are paralyzed. This isn’t a cure. In many cases the gains were relatively modest. One participant said it increased his typing speed from 23 words a minute to 35. Another participant was newly able to use scissors with his right hand. A third used her left hand to release a seatbelt.

The study didn’t garner as much media attention as previous, much smaller studies that focused on helping people with paralysis walk. Tech that allows people to type slightly faster or put their hair in a ponytail unaided just doesn’t have the same allure. “The image of a paralyzed person getting up and walking is almost biblical,” Charles Liu, director of the Neurorestoration Center at the University of Southern California, once told a reporter. 

For the people who have spinal cord injuries, however, incremental gains can have a huge impact on quality of life. 

So today in The Checkup, let’s talk about this tech and who it serves.

In 2004, Kim Anderson-Erisman, a researcher at Case Western Reserve University who also happens to be paralyzed, surveyed more than 600 people with spinal cord injuries. Wanting to better understand their priorities, she asked them to consider seven different functions—everything from hand and arm mobility to bowel and bladder function to sexual function. She asked respondents to rank these functions according to how big an impact recovery would have on their quality of life. 

Walking was one of the functions, but it wasn’t the top priority for most people. Most quadriplegics put hand and arm function at the top of the list. For paraplegics, meanwhile, the top priority was sexual function. I interviewed Anderson-Erisman for a story I wrote in 2019 about research on implantable stimulators as a way to help people with spinal cord injuries walk. For many people, “not being able to walk is the easy part of spinal cord injury,” she told me. “[If] you don’t have enough upper-extremity strength or ability to take care of yourself independently, that’s a bigger problem than not being able to walk.” 

One of the research groups I focused on was at the University of Louisville. When I visited in 2019, the team had recently made the news because two people with spinal cord injuries in one of their studies had regained the ability to walk, thanks to an implanted stimulator. “Experimental device helps paralyzed man walk the length of four football fields,” one headline had trumpeted.

But when I visited one of those participants, Jeff Marquis, in his condo in Louisville, I learned that walking was something he could only do in the lab. To walk he needed to hold onto parallel bars supported by other people and wear a harness to catch him if he fell. Even if he had extra help at home, there wasn’t enough room for the apparatus. Instead, he gets around his condo the same way he gets around outside his condo: in a wheelchair. Marquis does stand at home, but even that requires a bulky frame. And the standing he does is only for therapy. “I mostly just watch TV while I’m doing that,” he said.  

That’s not to say the tech has been useless. The implant helped Marquis gain some balance, stamina, and trunk stability. “Trunk stability is kind of underrated in how much easier that makes every other activity I do,” he told me. “That’s the biggest thing that stays with me when I have [the stimulator] turned off.”  

What’s exciting to me about this latest study is that the tech gave the participants skills they could use beyond the lab. And because the stimulator is external, it is likely to be more accessible and vastly cheaper. Yes, the newly enabled movements are small, but if you listen to the palpable excitement of one study participant as he demonstrates how he can move a small ball into a cup, you’ll appreciate that incremental gains are far from insignificant. As Melanie Reid, one of the participants in the latest trial, put it at a press conference last week: “There [are] no miracles in spinal injury, but tiny gains can be life-changing.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

In 2017, we hailed as a breakthrough technology electronic interfaces designed to reverse paralysis by reconnecting the brain and body. Antonio Regalado has the story

An implanted stimulator changed John Mumford’s life, allowing him to once again grasp objects after a spinal cord injury left him paralyzed. But when the company that made the device folded, Mumford was left with few options for keeping the device running. “Limp limbs can be reanimated by technology, but they can be quieted again by basic market economics,” wrote Brian Bergstein in 2015. 

In 2014, Courtney Humphries covered some of the rat research that laid the foundation for the technological developments that have allowed paralyzed people to walk. 

From around the web

Lots of bird flu news this week. A second person in the US has tested positive for the illness after working with infected livestock. (NBC)

The livestock industry, which depends on shipping tens of millions of live animals, provides some ideal conditions for the spread of pathogens, including bird flu. (NYT)

Long read: How the death of a nine-year-old boy in Cambodia triggered a global H5N1 alert. (NYT)

You’ve heard about tracking viruses via wastewater. H5N1 is the first one we’re tracking via store-bought milk. (STAT)

The first organ transplants from pigs to humans have not ended well, but scientists are learning valuable lessons about what they need to do better. (Nature)

Another long read that’s worth your time: an inside look at just how long 3M knew about the pervasiveness of “forever chemicals.” (New Yorker)

That viral video showing a head transplant is a fake. But it might be real someday. 

23 May 2024 at 15:36

An animated video posted this week has a voice-over that sounds like a late-night TV ad, but the pitch is straight out of the far future. The arms of an octopus-like robotic surgeon swirl, swiftly removing the head of a dying man and placing it onto a young, healthy body. 

This is BrainBridge, the animated video claims—“the world’s first revolutionary concept for a head transplant machine, which uses state-of-the-art robotics and artificial intelligence to conduct complete head and face transplantation.”

First posted on Tuesday, the video has millions of views, more than 24,000 comments on Facebook, and a content warning on TikTok for its grisly depictions of severed heads. A slick BrainBridge website has several job postings, including one for a “neuroscience team leader” and another for a “government relations adviser.” It is all convincing enough for the New York Post to announce that BrainBridge is “a biomedical engineering startup” and that “the company” plans a surgery within eight years. 

We can report that BrainBridge is not a real company—it’s not incorporated anywhere. The video was made by Hashem Al-Ghaili, a Yemeni science communicator and film director who in 2022 made a viral video called “EctoLife,” about artificial wombs, that also left journalists scrambling to determine if it was real or not.

Yet BrainBridge is not merely a provocative work of art. This video is better understood as a public billboard for a hugely controversial scheme to defeat death that’s recently been gaining attention among some life-extension proponents and entrepreneurs. 

“It’s about recruiting newcomers to join the project,” says Al-Ghaili.

This morning, Al-Ghaili, who lives in Dubai, was up at 5 a.m., tracking the video as its viewership ballooned around social media. “I am monitoring its progress,” he says, but he insists he didn’t make the film for clicks: “Being viral is not the goal. I can be viral anytime. It’s pushing boundaries and testing feasibility.”

The video project was bankrolled in part by Alex Zhavoronkov, the founder of Insilico Medicine, a large AI drug discovery company, who is also a prominent figure in anti-aging research. After Zhavoronkov posted the video on his LinkedIn account, commenters noticed that it is his face on the two bodies shown in the video.

“I can confirm I helped design and fund a few things,” Zhavoronkov told MIT Technology Review in a WhatsApp message, in which he also claimed that “some important and famous people are supporting [it] financially.”

Zhavoronkov declined to name these individuals. He also didn’t respond when asked if the job ads—whose cookie-cutter descriptions of qualifications and responsibilities appear to have been written by an AI—are real roles or make-believe positions.

Aging bypass

What is certain is that head transplantation—or body transplant, as some prefer to call it—is a subject of growing, if speculative, interest in longevity circles: the kind inhabited by biohackers, techno-anarchists, and others on the fringes of biotechnology and the startup scene, who form the most dedicated cadre of extreme life-extensionists.

Many proponents of longer life spans will admit things don’t look good. Anti-aging medicine so far hasn’t achieved any breakthroughs. In fact, as research advances into the molecular details, the problem of death only looks more and more complicated. As we age, our billions of cells gradually succumb to the irreversible effects of entropy. Fixing that may never be possible.

By comparison, putting your head on a young body looks comparatively easy—a way to bypass aging in a single stroke, at least as long as your brain holds out. The idea was strongly endorsed in a technical road map put forward this year by the Longevity Biotech Fellowship, a group espousing radical life extension, which rated “body replacement” as the cheapest, fastest pathway to “solve aging.”  

Will head transplants work? In a crude way, they already have. In the early 1970s, the American neurosurgeon Robert White performed a “cephalic exchange,” cutting off the head of a monkey, placing it on the body of another, and sewing together their circulatory systems. Reports suggest the head remained conscious, and able to see, for a few days before it died.

Most likely, a human head transplant would also be fatal. But even if you lived, you’d be a mind atop a paralyzed body, since exchanging heads means severing the spinal cord. 

Yet head-swapping proponents can point to plausible solutions for that, too—a number of which appear in the BrainBridge video. In Europe, for instance, some paralyzed people have walked again after doctors bridged their spinal injuries with electronics. Other scientists in China are studying growth factors to regrow nerves.

Joined at the neck

As shocking as the video is, BrainBridge is in some ways overly conventional in its thinking. If you want to keep your brain going, why must it be on a human body? You might instead keep the head alive on a heart-lung machine—with an Elon Musk neural implant to let it surf the internet, for as long as it lives. Or consider how doctors hoping to solve the organ shortage have started putting hearts and kidneys from genetically engineered pigs into patients. If you don’t mind having a tail and four legs, maybe your head could be placed onto a pig’s body.

Let’s take it a step further. Why does the body “donor” have to be dead at all? Anatomically, it’s possible to have two heads. There are conjoined twins who share one body. If your spouse were diagnosed with a fatal cancer, you would surely welcome his or her head next to yours, if it allowed their mind to live on. After all, the concept of a “living donor” is widely accepted in transplant medicine already, and married couples are often said to be joined at the hip. Why not at the neck, too?

If the video is an attempt to take the public’s temperature and gauge reactions, it’s been successful. Since it was posted, thousands of commenters have explored the moral dilemmas posed by the procedure. For instance, if someone is left brain dead—say, in a motorcycle accident—surgeons can use their heart, liver, and kidneys to save multiple other people. Would it be ethical to use a body to help only one person?

“The most common question is ‘Where do you get the bodies from?’” says Al-Ghaili. The BrainBridge website answers this question by stating it will source “ethically grown” unconscious bodies from EctoLife, the artificial womb company that is Al-Ghaili’s previous fiction. He also suggests that people undergoing euthanasia because of chronic pain, or even psychiatric problems, could provide an additional supply. 

For the most part, the public seems to hate the idea. On Facebook, a pastor, Matthew W. Tucker, called the concept “disgusting, immoral, unnecessary, pagan, demonic and outright idiotic,” adding that “they have no idea what they are doing.” A poster from the Middle East apologized for the video, joking that its creator “is one of our psychiatric patients who escaped last night.” “We urge the public to go about [their] business as everything is under control,” this person said.

Al-Ghaili is monitoring the feedback with interest and some concern. “The negativity is huge, to be honest,” he says. “But behind that are the ones who are sending emails. These are people who want to invest, or who are expressing their personal health challenges. These are the ones who matter.”

He says if suitable job applicants appear, the backers of BrainBridge are prepared to fund a small technical feasibility study to see if their idea has legs.
