
Received today — 15 December 2025
Technology

Microsoft takes down mod that recreated Halo 3 in Counter-Strike 2

15 December 2025 at 15:20

Last month saw the release of Project Misriah, an ambitious modding project that tried to recreate the feel of Halo 3 inside Valve’s Counter-Strike 2. That project has now been taken down from the Steam Workshop, though, after drawing a DMCA complaint from Microsoft.

Modder Froddoyo introduced Project Misriah on November 16 as “a workshop collection of Halo ported maps and assets that aims to bring a Halo 3 multiplayer-like experience to Counter-Strike 2.” Far from just being inspired by Halo 3, the mod directly copied multiple sound effects, character models, maps, and even movement mechanics from Bungie and Microsoft’s popular series.

In the weeks since, Project Misriah drew a lot of praise from both Halo fans and those impressed by what modders could pull off with the Source 2 engine. But last Wednesday, modder Froddoyo shared a DMCA request from Microsoft citing the “unauthorized use of Halo game content in a [Steam] workshop not associated with Halo games.”

Read full article

Comments

© https://www.youtube.com/watch?v=ki1RQF0-jyk

Murder-suicide case shows OpenAI selectively hides data after users die

15 December 2025 at 15:10

OpenAI is facing increasing scrutiny over how it handles ChatGPT data after users die, only selectively sharing data in lawsuits over ChatGPT-linked suicides.

Last week, OpenAI was accused of hiding key ChatGPT logs from the days before a 56-year-old bodybuilder, Stein-Erik Soelberg, took his own life after “savagely” murdering his mother, 83-year-old Suzanne Adams.

According to the lawsuit—which was filed by Adams’ estate on behalf of surviving family members—Soelberg struggled with mental health problems after a divorce led him to move back into Adams’ home in 2018. But Soelberg allegedly did not turn violent until ChatGPT became his sole confidant, validating a wide range of wild conspiracies, including a dangerous delusion that his mother was part of a network of conspirators spying on him, tracking him, and making attempts on his life.

Read full article

Comments

© via OpenAI complaint

If Some Photos Are Inexplicably Turning Red on Your iPhone, There's a Fix

15 December 2025 at 15:00

If you open a picture in the Photos app on your iPhone, and it inexplicably starts turning red, I wouldn't blame you for being a bit concerned. After all, that's not supposed to happen, and out of all the colors your photos could randomly fade into, red is among the creepiest.

While you contemplate what angry and vengeful god you might have crossed recently, understand that this isn't necessarily a problem affecting all, or even most, iPhone users and their photos. In fact, it doesn't appear to be affecting photos taken on iPhones at all. Rather, the users reporting this issue see it when zooming in on photos taken on Android devices. It seems a new hue has been added to the iPhone/Android divide: green bubbles, red pictures.

If this isn't happening on your own iPhone, you can see the issue play out in this Reddit post. User djenki0119 posted a screen recording of themself browsing photos on their iPhone that they had originally taken on a Samsung Galaxy S24. At first, the pictures appear totally normal. But once djenki0119 zooms in on one, it quickly turns a deep shade of red—almost as if you were looking at film developing in a darkroom. This user has the same issue, only they took their photo on a Motorola Razr.

At this time, it's unclear what is actually causing the issue to occur. It usually doesn't matter what type of device took any particular image: Once it's in the Photos app, it should display normally. But there must be something about Android files that the iOS Photos app isn't reading correctly, at least when users zoom in on the image. As 9to5Mac highlights, it appears that something is adding a red filter to these images in the Photos app. Since this issue is only popping up recently, my guess is there's a bug within iOS 26, though there could be an issue with Android instead.

For what it's worth, I wasn't able to replicate the problem with photos I sent from my Pixel 8 Pro over to my iPhone. But perhaps there is some strange combination of hardware and software that results in this tinting: Maybe a photo taken on a certain type of Android device running a specific version of Android turns red on a certain iPhone model running a specific version of iOS.

How to undo a photo that turned red on iPhone

Luckily, you don't have to wait for Apple, Google, Samsung, or Motorola to issue a fix, depending on where the actual issue is coming from. To return your image to its proper color scheme, open it in the Photos app, tap "Edit," then choose "Revert." This restores the image to its original state and removes the red filter that was unnecessarily overlaid on top of it.

Filmmaker Rob Reiner, wife, killed in horrific home attack

15 December 2025 at 14:53

We woke up this morning to the horrifying news that beloved actor and director Rob Reiner and his wife Michele were killed in their Brentwood home in Los Angeles last night. Both had been stabbed multiple times. Details are scarce, but the couple’s 32-year-old son, Nick—who has long struggled with addiction and recently moved back in with his parents—has been arrested in connection with the killings, with bail set at $4 million.

“It is with profound sorrow that we announce the tragic passing of Michele and Rob Reiner,” the family said in a statement confirming the deaths. “We are heartbroken by this sudden loss, and we ask for privacy during this unbelievably difficult time.”

Reiner started his career as an actor, best known for his Emmy-winning role as Meathead, son-in-law to Archie Bunker, on the 1970s sitcom All in the Family. (“I could win the Nobel Prize and they’d write ‘Meathead wins the Nobel Prize,'” Reiner once joked about the enduring popularity of the character.) Then Reiner turned to directing, although he continued to make small but memorable appearances in films such as Throw Momma from the Train, Sleepless in Seattle, The First Wives Club, and The Wolf of Wall Street, as well as TV’s New Girl.

Read full article

Comments

© Public domain

UK to “encourage” Apple and Google to put nudity-blocking systems on phones

15 December 2025 at 14:38

The UK government reportedly will “encourage” Apple and Google to prevent phones from displaying nude images except when users verify that they are adults.

The forthcoming push for nudity-blocking systems was reported by the Financial Times today. The report said the UK won’t institute a legal requirement “for now.” But asking companies to block nude images could be the first step toward making it mandatory if the government doesn’t get what it wants.

“The UK government wants technology companies to block explicit images on phones and computers by default to protect children, with adults having to verify their age to create and access such content,” the FT report said. “Ministers want the likes of Apple and Google to incorporate nudity-detection algorithms into their device operating systems to prevent users taking photos or sharing images of genitalia unless they are verified as adults.”

Read full article

Comments

© Getty Images | Eshma

All the Best Ways to Upgrade and Organize Your Garage

15 December 2025 at 14:30

We may earn a commission from links on this page.

Outfitting that garage with the right gear is the difference between a useful and organized space and a chaotic black hole of junk. If you want your garage to be a place where you can get work done, where you can actually find stuff, and maybe also where you can park your car, here’s what you should have, ranging from the must-have essentials to some luxuries you could stretch for.

What every garage needs

If you want to get the most out of any garage, here’s the short list of essentials:

  • Overhead storage. An empty garage might seem like endless storage, but it fills up fast. If you don’t want to navigate a maze of junk every time you go in there, some overhead storage shelving is a must.

  • Wall storage. For more readily accessible storage, some bike hooks and wall hooks for yard tools (like rakes and shovels) will keep those things easy to grab but off the floor.

  • Absorbent mats. Your car will leak oil and other fluids, and the garage is where you do a lot of messy jobs. A large-format absorbent mat will save you a lot of cleanup.

  • Fire safety. Every space in your home should have a fire extinguisher within easy reach, and your garage is no exception. You might also consider keeping some fire blankets, which are often more effective for small, contained fires.

  • Sports caddy. If you’ve got a collection of balls for every possible sport, plus other implements, having a place to dump them all so they don’t roll around is a must.

  • Tool storage. Whether you go with a classic tool box or a set of magnetic strips on the walls, don’t let your expensive, delicate tools get dirty, damp, and lost.

  • Workbench. Even if you’re not a hobbyist or much of a DIYer, having a workbench in the garage is a good idea. If space is an issue, a wall-mounted folding one like this is an ideal solution.

  • Creeper seat. Similarly, you don’t have to be a total gearhead to appreciate a creeper seat. Working in the garage often means working down low to the floor, so unless you enjoy sitting on the cold, greasy floor for a few hours, a creeper seat is a necessity.

What would be nice to have if you own a garage

You can always find an upgrade for any space, and your garage is no exception. Some useful-but-not-essential upgrades include:

  • Rubber door bottom. Your garage door probably doesn’t make a very good seal. When the weather’s hot or cold, that can make the space uncomfortable. A simple adhesive rubber door bottom provides a nice seal to make the space a little nicer to be in.

  • Climate control. You don’t necessarily have to install central air or a mini-split in your garage. There are plenty of portable heating and cooling solutions that will keep you comfy while you work without breaking the bank.

  • Tire rack: If you have a growing collection of spare tires, or you’re rotating between all-weather and winter tires regularly, having a stable storage shelf for them is a lot better than stacking them up or having them rolling around.

  • Stopper mats: It’s a universal frustration: trying not to ram into the wall or crush a bunch of stuff every time you pull your car into the garage. If you don’t want a tennis ball hanging from a string, some simple mats like these will let you know when to hit the brakes.

  • Tile flooring: Even if you have a nice floor in your garage—maybe especially if you have a nice floor in there—a protective tile floor is a good idea. It’ll keep everything pristine and protected from damage (from dropped tools, for example).

Best garage upgrades

Want your garage to be even nicer? No problem—here are some very useful add-ons that fall into the “luxury” category for most garages:

  • Speakers. These days, you don’t need a whole entertainment system in your garage—your phone and a Bluetooth speaker will do. But the speaker needs to be waterproof and designed for work spaces if you want it to survive.

  • Fridge. If you’re still nipping into the kitchen for a fresh beverage while you’re working in the garage, it’s time to upgrade to a garage-specific fridge.

  • Retractable extension cords. Necessary? Not really, but very useful: A retractable extension cord keeps wires out of the way when you’re not using them and prevents them from knotting up in maddening ways. Extra bonus: Have an outlet installed in the ceiling of the garage and mount the extension cord up there, too, so you just pull it down when you need it.

  • Utility sink. Having a place to wash up and clean dirty tools in the garage is a godsend. If the garage isn’t plumbed and you don’t want to sink that kind of cash into it, an outdoor sink hooked up to a garden hose will do the trick.

  • Hoist. Suckers get on ladders and lift with their muscles. Smart folks install an overhead hoist that can lift cargo boxes or other heavy things out of the way with ease.

  • Garage door screen. It’s nice to open the garage door while you’re working in the warm weather—unless it invites every bug in the universe to assault you. A nice garage door screen lets you enjoy the breeze without the bugs.

  • Laser guides. If a rubber mat or a tennis ball on a string isn’t high-tech enough for your parking needs, why not install a laser-guided system? You’ll never scrape a door or crush a garbage can again.

  • Paper towel dispenser. Necessary? Maybe not, but being able to grab a paper towel hands-free as needed just makes garage life easier.

  • Wall-mounted inflator. If you’ve got cars and bikes in your garage, installing a wall-mounted inflator will make keeping tires properly inflated and maintained a breeze.

  • Seating area. And if you’ve got a large garage—or no car—why not create a comfortable place to sit and relax in-between projects? Some durable outdoor furniture and an outdoor rug are all you need.

My Favorite Amazon Deal of the Day: The 2024 Amazon Kindle Scribe

15 December 2025 at 14:00

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

Though the Kindle Scribe has just been overhauled for 2025, last year's model remains an attractive digital notebook. Released in December of last year, the 2024 Kindle Scribe is an upgraded version of the oversized e-reader designed for note-taking, offering some nice improvements over the 2022 original. Those upgrades don't come cheap, however, with prices on the 2024 Kindle Scribe starting at $399.99 (still a lot less than the new-for-2025 version, which starts at $499.99).

Right now, the 32GB version of the 2024 Scribe is a lot cheaper: It's discounted to $279.99 (originally $419.99), the lowest price this reader has seen since its release, according to price-tracking tools, and a great opportunity to snatch one for a bargain. The 64GB version Essentials Bundle is also on sale for $341.97 (originally $449.99), adding a case and a power adapter to your purchase.

The original Kindle Scribe came out in 2022; that version is currently $369.99 for the 16GB model with the Premium Pen, which puts how good a deal this is in perspective—you can get the two-years-newer model for less.

That said, if you already have the 2022 version, there is no compelling reason to upgrade—the main difference is that the 2024 version comes with the Premium Pen instead of the Basic Pen stylus, while the tablet itself is shorter, narrower, and slimmer, but not by much (you can even still use the same case). The new screen also has texture, which will add some resistance when you're writing on it, for a more natural feel. The gap between the screen and the outer casing is also smaller. But that's where the differences end.

Otherwise, you'll get the same book format compatibility, the same 15.3 oz weight, the same glare-free 300 ppi front-lit display screen, and the same 12-week battery life. Both tablets run the same software. Still, if you don't own a Scribe at all and are considering getting one, the 2024 version is a good choice at the current price point—it's 44% cheaper than the new 2025 model.

Deals are selected by our commerce team

30 of the Best Modern Christmas Movies

15 December 2025 at 13:30

We may earn a commission from links on this page.

Christmas movies have been a tradition for decades, but the days when our choices were limited to George Bailey contemplating jumping off of a bridge and Ralphie washing his mouth out with Lifebuoy soap are well past us. Holiday movies are an industry in and of themselves, with dozens of new seasonal offerings released each year, starting as soon as the leaves start to turn colors. Most of them are cozy cookie-cutter offerings—relaxing, if largely disposable.

But among the seasonal glut, new classics do occasionally emerge. Here are 30 more recent holiday classics, from silly comedies, to cozy dramas, to gruesome horrors, queer romances, and even a surprisingly literal adaptation of a Wham! song.

The Family Stone (2005)

Holiday gatherings always offer great potential for comedy and drama, with The Family Stone landing a bit of each. The setup involves Dermot Mulroney bringing home his new girlfriend, played by a fearlessly brittle Sarah Jessica Parker, for Christmas. That doesn’t go great, with the visitor constantly feeling out of place and embarrassed amid the insular, tight-knit, standoffish clan. But, in the background, strong-willed matriarch Sybil Stone (Diane Keaton) is also looking for an opportunity, amidst the holiday chaos, to reveal a terminal medical diagnosis. The subtle final shot lands like a sledgehammer every time and, of course, the recent passing of Diane Keaton adds a deeper poignance to the film this year—oh, and there's maybe a sequel coming. Stream The Family Stone on Disney+, Prime Video, and Hulu.


Last Holiday (2006)

Remaking a 1950 Alec Guinness movie, Last Holiday puts the ever-radiant Queen Latifah in the lead here as Georgia, a department store assistant given the news that she has a rare brain condition and, potentially, only weeks to live (insurance won't cover an operation because of course it won't). Georgia quits her job, sells her stuff, and heads off to the Czech Republic (which looks a lot more like Austria, where Last Holiday was filmed) for the glamorous European holiday spa trip of her dreams. Her workplace crush, Sean (LL Cool J), is hot on her heels. The plot here is nothing new, even leaving aside that the movie is a remake, but Queen Latifah brings her considerable charm and old-school Hollywood swagger to the film. Stream Last Holiday on Paramount+ and Hulu.


Elf (2003)

A Will Ferrell comedy about a human who identifies as a literal elf has no business being this sweet and smart. Ferrell is Buddy, who was accidentally shipped off to the North Pole as a baby and now heads to New York during the holiday season to find his biological father (James Caan). The impressive cast here (Ed Asner, Zooey Deschanel, Peter Dinklage, Bob Newhart) doesn’t hurt one bit. Stream Elf on HBO Max.


The Holdovers (2023)

A modest box office success, The Holdovers did even better with the critics, earning a Best Picture Oscar nod (among other nominations) and a Best Supporting Actress prize for Da'Vine Joy Randolph. She plays Mary Lamb, the cafeteria manager at a New England prep school stuck on campus during the holiday break with Paul Giamatti, playing a jerky, uptight classics teacher, as well as with a troubled student. Having recently lost her son in Vietnam, Lamb isn't inclined to spend much time with her fellow holdovers; at least until the three of them are forced to come to terms. Rent The Holdovers from Prime Video and Apple TV.


Get Santa (2014)

In the venerable tradition of Bob Clark, who directed Children Shouldn't Play With Dead Things and Black Christmas before making his reputation with A Christmas Story, Christopher Smith took a break from directing horror movies to helm this good-natured family comedy. Steve (Rafe Spall) is excited to reunite with his son, 9-year-old Tom (Kit Connor), after a two-year prison sentence. Of course, Christmas is always complicated, and Steve's is more complicated than most. Just as he's trying to navigate parole and visitations, he encounters a man claiming to be Santa (Jim Broadbent) in his garage. The intruder claims to have been testing a new sleigh when things went awry, leading to a crash-landing and several reindeer on the loose. Santa's attempt to reclaim his sled team leads to his incarceration, and to Tom's absolute insistence that Santa gets sprung in time to save Christmas. It's silly but heartwarming, and Broadbent in particular seems to be having a blast. Stream Get Santa on Peacock and Tubi.


Tokyo Godfathers (2003)

Roughly inspired by John Ford’s 1948 3 Godfathers, this one finds a drag queen, a teenage runaway, and a good-hearted middle-aged man struggling with alcoholism living on the streets of Tokyo when they come across a baby in a trash bin on Christmas Eve. The lovely, moving adventure that follows comes from director Satoshi Kon, who also directed classics Perfect Blue, Millennium Actress, and Paprika in his too-short life and career. Stream Tokyo Godfathers on Tubi or rent it from Apple TV.


Ben is Back (2018)

As not every holiday is happy, not every Christmas movie should go down easy. Lucas Hedges stars here as the titular Ben, the recently clean addict son of Julia Roberts' Holly. He's released from rehab for the holiday, which comes as a surprise to his family. Holly is happy to see him, but leery of the impact he might have on her other children. She allows him to stay at the family home, as long as he is never out of her sight. What follows is a harrowing 24-hour period during which the two face old ghosts and Ben's past associates threaten the family, even as he struggles to keep a handle on his addiction disorder. There's a bit of light and hope here, but only a bit; the emphasis here is more on realism than a message of holiday cheer. Still, the performances are stellar and the issues at hand will be relatable for a great many of us. Stream Ben is Back on Tubi or rent it from Prime Video and Apple TV.


Happiest Season (2020)

This splashy Christmas comedy with a marquee cast (Kristen Stewart, Mackenzie Davis, Alison Brie, Aubrey Plaza, et al.) sits somewhere on the border between Lifetime/Hallmark-style Christmas movie and traditional rom-com. Abby and Harper are a couple that have been dating for nearly a year—but it turns out that Harper had lied about coming out to her parents. And, what with the stress of the holidays, she’s hoping that Abby will play along and pretend to be her roommate until after Christmas. What could go wrong? Stream Happiest Season on Hulu.


Love Actually (2003)

Starting a few weeks before the holiday and counting down to the big day, this modern Christmas staple weaves together multiple love stories starring familiar British faces like Hugh Grant, Emma Thompson, Alan Rickman, Keira Knightley, and Colin Firth. If anyone’s ever professed love to you via a series of cue cards on your doorstep, you can thank (or blame) Love Actually. Stream Love Actually on Peacock and Prime Video.


Bad Santa (2003)

2003 was a banner year for modern Christmas classics, in any flavor you’d choose. The platonic ideal of a rude Christmas movie, Terry Zwigoff’s Bad Santa stars Billy Bob Thornton as Willie Soke, a mall Santa who’s actually a con man, using his seasonal gigs to scope out stores that he can rob at night. He represents everything that you probably don’t want your kid to be around during the holidays (or anytime, really): He’s foul-mouthed, cynical, and abusive whenever he’s not putting on the merest hint of a front for the children. The film does offer a solid Christmas redemption arc in and around scenes of seasonal debauchery—but still, this probably isn’t one for the kids. Stream Bad Santa on HBO Max.


Klaus (2019)

A charming Santa origin story based on nothing in particular, Klaus finds Jesper Johansen, the lazy son of a postmaster general in 19th-century Norway, packed off to a distant island town where he’s tasked with delivering 6,000 letters within a year; otherwise, he’ll be cut off from the family fortune. Arriving there, he discovers the two primary feuding families can’t be bothered to send letters for him to deliver, but that reclusive widower Klaus might be willing to help him in a scheme he’s concocted to convince the town’s children to write letters in the hopes of receiving toys in return—toys crafted by old Klaus in hope of a family that never materialized. It’s all beautifully done, and I defy you not to cry during the final act. Stream Klaus on Netflix.


Dolly Parton’s Christmas on the Square (2020)

It’s the holidays, and Regina Fuller (Christine Baranski!) is on her way home to evict a bunch of people so she can sell the land they live on to a mall developer. Naturally she’s got some seasonal learning to do, with help from erstwhile bestie Margeline (Jenifer Lewis!!) and Parton herself, typecast as an all-singing angel. Dolly wrote all the musical numbers, and the results are dorky fun in the best ways, with a deliberate staginess that invites you to appreciate the sentiment without taking things too seriously. The whole cast is several cuts above, as are the dance numbers, choreographed by Debbie Allen. Stream Christmas on the Square.


Hot Frosty (2024)

Maybe "classic" is going a bit far here (though time will tell), but there's something to be said for grabbing a glass of wine and having yourself a (lightly) horny holiday. In that vein, Hot Frosty casts Lacey Chabert as a widow running a cafe in the tiny made-up town of Hope Springs, New York. One day she picks up a scarf at a secondhand store and places it around the neck of a particularly chiseled snowman (because while all snowman bodies are valid, it's gonna take abs to score free winter apparel). The snowman naturally comes to life, leading to a series of wacky misunderstandings, but also a little holiday romance. If it's not cinematic genius, it's a perfectly delicious bit of holiday silliness. Stream Hot Frosty on Netflix.


Joyeux Noël (2005)

A fictionalized version of a true story, this Academy Award nominee deals with an unusual moment during the first year of World War I, when, at several points along the front lines, French, German, and British soldiers called a series of informal truces, often mingling to celebrate Christmas Eve and Christmas Day. The German Crown Prince even sent the lead singer of the Berlin opera to perform along the front lines, entertaining both sides. In dramatizing the event, the filmmakers understand that the truce was both glorious and absurd. Those complicated feelings, and the knowledge that what we’re seeing represents a momentary lull in a war that would continue for years, make for powerful emotional moments. Stream Joyeux Noël on Tubi and Netflix.


The Holiday (2006)

Depressed Englishwoman Iris (Kate Winslet) decides to swap homes and lives, for a bit, with similarly unlucky-in-love Californian Amanda (Cameron Diaz). Iris is now living in a giant Hollywood mansion, while Amanda is exploring a quaint country village. Naturally, romance is waiting for each woman in her newfound environs. It was largely ignored on its initial release, but has grown into a charmingly dorky Christmas cult classic. Word is that Apple is working on an update. Rent The Holiday from Prime Video and Apple TV.


Rare Exports: A Christmas Tale (2010)

In the film, a greedy corporation’s research team drills into land best left undisturbed: an ancient burial mound that, legends suggest, is the resting place of Joulupukki, a pagan forerunner to our modern Santa Claus. BAD IDEA. Old Joulupukki is not dissimilar from Krampus, in that he’s much more interested in punishing the wicked than in rewarding the good. It’s an action-packed, darkly comic, cynical winter’s tale (rather the perfect one for our times) and builds to a wild climax. Stream Rare Exports on Tubi or rent it from Prime Video and Apple TV.


The Christmas Chronicles (2018)

A deeply cute Christmas adventure finds a couple of kids (Judah Lewis and Darby Camp) accidentally crashing Santa’s sleigh (Santa here is played by Kurt Russell). It’s got plenty of (family-friendly) action, and Russell seems to be having a ton of fun. If you like this one, the sequel is approximately as good. Stream The Christmas Chronicles.


Arthur Christmas (2011)

Aardman Animations, the Wallace and Gromit/Shaun the Sheep people, produced this joyful, quirky computer-animated family film. James McAvoy plays Arthur Claus, son of the current holder of the Santa title. Operations at the North Pole are largely automated, and Arthur has a hard time convincing management that a single undelivered toy is worth much fuss. So it’s clumsy, goofy Arthur to the rescue, with the certain knowledge that ruining even one kid’s holiday would be a failure. Stream Arthur Christmas on Prime Video and Tubi.


The Best Man Holiday (2013)

The long-awaited sequel to 1999's The Best Man, this one quickly updates us on the fallout from that earlier film before moving into new territory (it’s not strictly necessary to have seen the original if you’re looking to dive straight into the holiday festivities). Morris Chestnut, Taye Diggs, Regina Hall, Terrence Howard, and Sanaa Lathan lead the sequel, which offers a bold blend of off-color humor, hot shirtless guys, sincere religious themes, and shamelessly heartbreaking plot twists. Stream The Best Man Holiday on Peacock and Hulu or rent it from Prime Video.


Tangerine (2015)

Just your typical girlfriend/buddy/revenge comedy movie about two trans sex workers on the hunt for the man who did one of them wrong. As heartfelt as it is madcap, it all takes place on a wild Christmas Eve in Hollywood (so don’t expect snow). Shot on a couple of iPhones, the film sees director Sean Baker and company make a virtue of the intimacy and immediacy that modern technology can bring. Stream Tangerine on Peacock and Hulu or rent it from Prime Video.


Carol (2015)

Rooney Mara’s Therese and Cate Blanchett’s glamorous Carol set off sparks when they meet in a department store during the Christmas season of 1952. The women suffer for their growing attraction, and this certainly isn’t the breeziest of holiday movies, but there’s light here, and beauty, and hope for the future. Stream Carol on HBO Max or rent it from Prime Video.


A Very Harold and Kumar Christmas (2011)

The last (to date) of the Harold and Kumar movies, this one balances stoner humor with a surprising sweetness, even if it's the kind of Christmas movie in which Santa smokes a bong on his holiday rounds and replacement urine for a drug test more than qualifies as a nice Christmas present. Stream A Very Harold and Kumar Christmas on Paramount+ or rent it from Prime Video.


The Night Before (2015)

What else are you gonna do on Christmas Eve but spend the night with your best friends (Seth Rogen, Anthony Mackie, and Joseph Gordon-Levitt) at something called the Nutcracker Ball? Yeah, sounds awful to me, too. Luckily they’ve got a ton of drugs to get them through the night. A reliably entertaining stoner Christmas story. Stream The Night Before on Peacock and Tubi or rent it from Prime Video.


Krampus (2015)

Among the best of a decade’s worth of films reviving ancient, scary European traditions involving far less jolly versions of Santa, Krampus is a Gremlins-esque horror comedy with imaginative creature effects from the folks over at Weta Workshop. It might not be the darkest, nor the goriest, of holiday-themed horror sendups, but it is an awful lot of fun, with effects that evoke a twisted winter wonderland as we follow a family being hunted by the title demon. Stream Krampus on Peacock or rent it from Prime Video.


The Grinch (2018)

Though I might still stick with the 1966 animated version (Boris Karloff FTW), as updates go, this 2018 version is bright and colorful and energetic without getting stressful (looking at you, Jim Carrey version from 2000). Benedict Cumberbatch plays the Grinch; Pharrell narrates; and Rashida Jones, Kenan Thompson, and Angela Lansbury round out the solid voice cast. Stream The Grinch on Peacock or rent it from Prime Video.


Anna and the Apocalypse (2017)

Zombies for Christmas? OK! In this mash-up of High School Musical and Shaun of the Dead that you never knew you needed, the titular Anna just wants to get through the Christmas show at her high school in Little Haven, Scotland. She’s so preoccupied with her own problems that she fails to notice the undead infection spreading around her. It’s a weird blend of styles, no question, but one packed with gory fun, musical numbers, and some surprising, seasonally appropriate heart. Stream Anna and the Apocalypse on Prime Video and Tubi.


The Man Who Invented Christmas (2017)

There are plenty of versions of A Christmas Carol to choose from, but this one examines that tale from the other side. It’s the story of Charles Dickens himself (Dan Stevens) and his journey to creating the wildly successful work. Dodging typical biopic tropes in favor of something more appropriate to the subject matter, the movie finds Dickens interacting with his fictional characters in a film that blends realism with whimsical fantasy. Stream The Man Who Invented Christmas on HBO Max or rent it from Prime Video.


Last Christmas (2019)

Emilia Clarke and America’s sweetheart Henry Golding have tremendous chemistry as a down-on-her-luck aspiring singer and the slightly mysterious man with whom she shares a lovely and inspiring holiday season. The twist ending here, inspired by a literal reading of the title song, is bonkers—but it works better than it has a right to. Stream Last Christmas on Netflix or rent it from Prime Video.


Little Women (2019)

Before Barbie, Greta Gerwig took on an American classic and, while I’m not sure there’s ever been a bad adaptation of Little Women, this one is at the top of the pile, staying faithful to the novel’s themes while rearranging the narrative just a bit, and adding elements from Alcott’s own life to hint at the ending that the author really wanted. Rent Little Women from Prime Video and Apple TV.


Jingle Jangle (2020)

This one’s a straight-up fantasy that finds toymaker Jeronicus Jangle (Forest Whitaker) inventing a sentient matador figure (Ricky Martin) who fights for his right to be something other than a mass-produced toy. That sets off a series of misfortunes for Jeronicus, but his granddaughter Journey (Madalen Mills) is on hand to try to put things right. The pedigree here includes playwright David E. Talbert in the director’s chair and an almost all-Black cast that includes Whitaker, Keegan-Michael Key, and Anika Noni Rose, all having a lot of fun in a colorful (and musical!) adventure. Stream Jingle Jangle on Netflix.


Single All the Way (2021)

Sick of questions about being single, Peter (Michael Urie) decides to invite his best friend Nick (Philemon Chambers) to pose as more than his roommate. He’s in a high-stress L.A. job, heading home for the holidays in New Hampshire, and just can’t deal with more cracks about his love life. His mom (Kathy Najimy), though, already had plans to fix him up with her fitness instructor (Luke Macfarlane). Now Peter has to navigate not only his family obligations and his new date, but also his developing feelings for the guy who was just supposed to be a pretend romance. Stream Single All the Way on Netflix.

Google will end dark web reports that alerted users to leaked data

15 December 2025 at 13:13

Google began offering “dark web reports” a while back, but the company has just announced the feature will be going away very soon. In an email to users of the service, Google says it will stop telling you about dark web data leaks in February. This probably won’t negatively impact your security or privacy because, as Google points out in its latest email, there’s really nothing you can do about the dark web.

The dark web reports launched in March 2023 as a perk for Google One subscribers. The reports were expanded to general access in 2024. Now, barely a year later, Google has decided it doesn’t see the value in this type of alert for users. Dark web reports provide a list of partially redacted user data retrieved from shadowy forums and sites where such information is bought and sold. However, that’s all it is—a list.

The dark web consists of so-called hidden services hosted inside the Tor network. You need a special browser or connection tools in order to access Tor hidden services, and its largely anonymous nature has made it a favorite hangout for online criminals. If a company with your personal data has been hacked, that data probably lives somewhere on the dark web.

Read full article

Comments

© Getty Images | 400tmax

How to Set Up Your Own Custom Focus Modes on iPhone

15 December 2025 at 13:00

The iPhone's Focus modes are perhaps its most underrated feature. Once customized, they can become incredibly powerful tools that put you in control of how your iPhone can grab your attention. They can take some time to set up, but it’s worth it. Once you've got everything squared away, you'll have timed boundaries from certain apps, people, and even your work, for that mythical work-life balance. Forget having a personal phone and a work phone—a couple of well-tuned Focus modes might be enough.

Diving into Focus modes

iPhone Focus Modes
Credit: Khamosh Pathak

What used to be Do Not Disturb on iPhone is now Focus mode, which comes with many more options. Open the Control Center and tap the Focus button to see a list of all available Focus modes. The familiar Do Not Disturb option will be up top, but you’ll also see helpful Focus modes premade by Apple, Sleep being a prominent example. If you have a device that supports Apple Intelligence, you’ll also see a mode called Reduce Interruptions, which automatically mutes all notifications except the really important ones. Other premade modes include Personal and Work, all of which you can customize to your liking.

How to set up your own Focus mode

To get the most out of Focus modes, you should set up a few of your own. One for work and one for personal time would be a great place to start. Go to Settings > Focus and tap the Plus button at the top. Here, choose the Custom mode option to get the most flexibility. Give it a name and an icon, then tap Next. Then, tap Customize Focus.

Creating your own custom Focus mode
Credit: Khamosh Pathak

This is where you'll do most of your work. First, tap Choose People and select if you want to allow notifications from only a couple of people, or if you want to silence notifications from particular folks. If you’re setting up a Focus mode for personal time, you might want to stop notifications only from your boss and colleagues. Choose the people to allow, and tap Next. Then, choose who is allowed to call you. You can limit it to just your Favorites, or only a handful of people.

Customizing Focus mode
Credit: Khamosh Pathak

Then, tap Choose Apps and follow the same process for apps as well, either allowing notifications from some apps, or only silencing notifications from particular apps. For example, if you’re setting up a Focus mode for personal time, you might want to disable notifications from work apps like Slack, Teams, Gmail, and more. Tap into the Options menu, and you can also choose to show silenced notifications on the Lock Screen, or to dim the lock screen every time that Focus mode is enabled.

Next, take some time to customize what you see when a Focus mode is enabled. Apple will let you choose a distinct Lock Screen, Home Screen, and even an Apple Watch face per Focus mode. For example, your work Focus can feature just your calendar and to-do list. This will go a long way towards cementing the Focus state in your mind. When I’m in my Writing Focus mode, for instance, my home screen is devoid of everything, including my tasks widget and communication apps.

Custom home screen and lock screen for Focus mode.
Credit: Khamosh Pathak

Then, you’ll see a Set a Schedule section. Here, you can turn on a Smart Activation feature that will automatically enable a Focus mode depending on your location, app usage, and so on. This has been hit or miss for me, so I would advise you to avoid it for the most reliable results. But you can definitely create a manual schedule using the Add Schedule button. Here, you can trigger a Focus mode to automatically start or stop at a certain time of day.

Custom schedule Focus mode.
Credit: Khamosh Pathak

You can even use Focus Filters to further customize exactly what apps can show you when you’re in a Focus mode. For example, you can choose to only see your work calendar when you’re in your work Focus, but not your other calendars. These filters work for Apple’s apps and even third-party apps.

Calendar filter in Focus mode.
Credit: Khamosh Pathak

Lastly, you can choose to enable the Intelligent Breakthrough & Silencing feature that's found at the top of the Focus page. If you have an iPhone with Apple Intelligence enabled, you'll see this setting. It uses on-device intelligence to allow priority notifications to interrupt you even when you're in silent mode. This overrides any other customizations you might've made. But, being an Apple Intelligence feature, its reliability can be a bit iffy. Based on personal experience, I would recommend you take the time to fully customize the Focus mode to your liking instead of handing some of that work over to Apple Intelligence, as it gets things wrong for me fairly often.

10 Hacks That Every Smart Home Owner Should Know

15 December 2025 at 12:00

We may earn a commission from links on this page.

My smart home routines are ready for a refresh. As new standards have emerged for connecting gadgets in the home, and Google and Amazon have been updating their respective hardware and apps, I've been lagging in keeping things sharp and running smoothly. So, I'm doing something about it now.

If you've been feeling bored by your smart home and its current routines too, keep reading. These are ways to configure the smart devices around you to do more than just turn the lights on and off (although there's always plenty of that). Although my personal smart home is in the Google Home ecosystem, these features also apply to smart homes powered by Apple HomeKit and Amazon Alexa.

Turn everything off when no one is home

It sounds like a no-brainer, but in nearly ten years, I still haven't set up my smart home so the lights turn off when I leave the house. Given how my energy bill is looking lately, I'd like to get out of this practice. I want to make sure the lights and any errant appliances turn off, especially when no one is inside.

A screenshot of the Away routine in Google Home app
The "Away" routine in Google Home can be programmed to detect when everyone is out of the house. Credit: Florence Ion/Lifehacker

In the Google Home app, there's an "Away" routine in the Automations tab that lets me select which devices to turn off when the system detects that my phone has left the house. But what if everyone else is home? I don't want the lights to turn off on them. Instead, I use an automation that turns the lights off when two conditions are met: I'm not at home, and none of the house's centralized gadgets, like the Chromecast-connected TVs, are on.

Even if you aren't in the Google ecosystem, you can use similar "if-this-then-that" logic. For Apple HomeKit users, the Shortcuts app is a better way to make a "Leave Home" automation and add a "Get State of Home" condition to ensure companion devices, like an Apple TV, are not in use. Amazon Alexa users have it a bit harder, as there is no native way to detect a device's on/off status. You can create a location-based routine or use the "Away Lighting" feature (in your Home/Away settings). It effectively switches on an "enforce" mode when you leave.

A screenshot of what the option for "when the last person leaves" looks like in the Apple Home app
Apple lets you select "when the last person leaves" as a trigger for a smart home automation. Credit: Florence Ion/Lifehacker
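To make the "everyone away" logic concrete, here is a minimal Python sketch of the two-condition check described above: lights go off only when nobody is home and none of the shared media devices are in use. The helper functions, the device list, and their names are hypothetical stand-ins for illustration, not Google Home, HomeKit, or Alexa APIs.

```python
# Minimal sketch of the "everyone away" logic described above. The helpers
# (is_phone_home, any_media_device_on) and the device list are hypothetical
# stand-ins, not Google Home, HomeKit, or Alexa APIs.

def is_phone_home() -> bool:
    """Stand-in for the hub's presence detection (e.g., a phone geofence)."""
    return False  # pretend everyone has left for this example

def any_media_device_on(devices: list[dict]) -> bool:
    """Stand-in for checking whether a shared device, like a TV, is in use."""
    return any(d["type"] == "media" and d["on"] for d in devices)

def run_away_routine(devices: list[dict]) -> list[str]:
    """Turn lights off only when nobody is home AND no shared device is in use."""
    if is_phone_home() or any_media_device_on(devices):
        return []  # someone (or something) is still active, so do nothing
    turned_off = [d["name"] for d in devices if d["type"] == "light" and d["on"]]
    for d in devices:
        if d["type"] == "light":
            d["on"] = False
    return turned_off

if __name__ == "__main__":
    house = [
        {"name": "Kitchen light", "type": "light", "on": True},
        {"name": "Living room TV", "type": "media", "on": False},
    ]
    print(run_away_routine(house))  # -> ['Kitchen light']
```

Whichever ecosystem you use, the trick is the same: the second condition keeps the routine from plunging the rest of the household into darkness just because your phone left the driveway.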

Set the morning volume

Some people like to rock out first thing in the morning. But there's nothing worse than scaring the rest of the household into a wake-state because the volume was left on high. While you could yell over the device streaming music or run to turn down the volume, there's no need to deal with all that. Instead, set a volume-first routine so the speaker is set to the desired volume each morning before anyone activates it.

A screenshot for setting the volume on multiple smart speaker devices in the Google Home app
Set all your smart speakers to the same volume level at each time of day so no one gets their ears blasted off. Credit: Florence Ion/Lifehacker

In the Google Home app, under Automations, set the routine to run first thing in the morning, every day of the week. Then select the offending speaker-equipped devices. (I set all my smart speakers to the same volume each day, upstairs and downstairs, since you never know.) Apple and Alexa have similar setups. In the Apple Home app, you'll set a scene on the corresponding HomePod to run via a Time-of-Day automation. And through Amazon Alexa, you'll create a Routine with a "schedule" trigger, then select Echo devices to set the volume.

Deter people from your porch

If you're not interested in visitors at certain times of day or night, you can set up your porch to perform a visible action that gets whoever is outside to scram.

If you have a doorbell camera, you are likely used to getting passive notifications that someone is visible. You can turn that notice into a smart home automation. Set it up so that when motion is detected, the outdoor lights blast to full brightness and the outward-facing lights inside the house flicker on. You will need smart bulbs or smart plugs to enable this.

A screenshot of the option to detect events on a Nest camera
Nest cameras in the Google Home app let you choose a "person detected" trigger to start an action. Credit: Florence Ion/Lifehacker

In the Google Home app, the Automations tab is where this is done. I use my Nest doorbell camera as the starter, triggering when it detects "Person seen." Then, I choose the lights that I want blaring at 100% under Actions. Apple smart homes need HomeKit Secure Video (HSV)-enabled cameras to access something like this. In the Home app, you can create an automation that runs when the camera detects activity, then select the outdoor lights and the outward-facing lights that should turn on. Amazon users with Ring cameras can do the same in the Alexa app under Routines. You can even go a step further and enable the same "Away Lighting" feature from the last tip, which broadcasts a chime inside the house the moment motion is detected.

Focus mode for the house

Unfortunately, I can't focus. I need all external distractions disabled in some capacity. Rather than do that manually, I set up an automation to get the rest of the house whipped into shape when it's time to work. With that, I skip saying a command out loud and instead set it up on a schedule.

Beginning at 9:30 each morning, except weekends, I set the action to adjust all the lights in my office to a specific setting, enough to get me into the groove, and turn off any other lights in the house that may have been left on from the chaotic morning routine. I also turn off the TVs and any internal-facing security cameras that shouldn't be watching me while I work. It's a similar setup for Apple HomeKit users, though it's even better because iOS lets your iPhone's state set the tone. In the Shortcuts app, you can create a personal automation. Select a Focus mode as the trigger (it might look like "Do Not Disturb"), then select "When Turning On." You'll then set the action to "Control Home," and that's where you'll put the status for smart lights and any other devices you want. Once you put the iPhone into that Focus mode, or the clock strikes 9:30 a.m.—whichever comes first—you'll see the devices linked here follow suit.

Alexa uses a similar logic to Google Home, with the schedule doing the heavy lifting. In the Alexa app, go to Routines and create one with a scheduled time as the trigger, set to run only on weekdays. Then add the smart home actions you want to adjust, turn off, and turn on. The only bummer here is that there is no way to extend the action to your smartphone, at least through Alexa.

Create a Guest Mode for smart devices

People are confused about how I control my house, and I don't blame them. So, I set up a "limited access" guest profile for friends who plan to stay only a night or two.

A screenshot of the option that pops up to add a "member" in the Google Home app
The Google Home app lets you add a "Member" with limited access to the smart home. Credit: Florence Ion/Lifehacker

Google Home lets you invite people with the "Member" role to access smart home controls. Provided they have a Google account, the person can access connected lights in the designated rooms as needed. Apple HomeKit is much more granular, but it works similarly. You can invite people by their Apple ID and manage access to certain accessories. You can also lock them out of security cameras and thermostats, so they have access only to the essentials, like the smart lights.

In the Amazon ecosystem, Alexa is the most limited. (It once offered a now-deprecated Guest Connect feature.) Instead, you'll rely on the Amazon Household feature, so you'll have to invite a guest with an Amazon account to control devices. However, this also gives them access to the whole kit and caboodle, like your payment methods. If you want to avoid oversharing, teach your guests the basic "on" and "off" commands for your smart devices.

Protect your thermostat

If you don't want other people adjusting your thermostat, you can lock them out with your smart home. In Google-led smart homes, you can set up a PIN in the Home app to prevent manual adjusters from accessing the thermostat and changing the temperature. However, this works only with compatible hardware, like a Nest Thermostat.

In an Amazon home, you need an Alexa-compatible thermostat. You could dig through the settings of the manufacturer's apps to set up a PIN to keep people from messing with the dial. Or you can use a Routine within Alexa to set a specific schedule so that the temperature automatically returns to your preferred setting even if someone else has touched it.

A screenshot of what it looks like to create a new scene in Apple Home
Remind everyone that the scene that's taking place is your temperature, and no one else's. Credit: Florence Ion/Lifehacker

Apple HomeKit lets you, the smart home owner, be the boss with Scenes. (Get used to making them, because they become essential later.) In the Home app, create a scene called "My temp" and then set the compatible thermostat to your preferred temperature. In the Shortcuts app, create a personal automation to run this scene at a specific time, then select how often you want it to run. This will check and adjust the temperature every few hours to ensure it's at your favorite level, not anyone else's.

Never forget another load of laundry

I have a connected washer and dryer for laundry, which I can configure to alert me when a load is done. There's the simple push notification, which might work for some, but I prefer Google Home to holler at me when the laundry's done drying. In the Home app, under Automations, I can use my LG dryer finishing its cycle as the starter, then ask the Home app to broadcast a message to a few specific smart speakers around the house to let me know the laundry is ready to fetch.

If you don't have internet-connected appliances, you can use a smart plug with energy- and power-monitoring capabilities from brands like Govee or TP-Link's Kasa. Provided the plug can handle the load (look for a rating over 15 amps), you can plug in your unconnected washer or dryer that way and have it notify you when the appliance shuts off.

Apple HomeKit users should look into compatible Eve Energy smart plugs, then create a personal automation in the Shortcuts app that triggers when the smart plug's current drops below a set threshold. The action can be to "Control Home," then choose a scene that turns on all the lights at high brightness in a particular color as an indicator that it's time to get to the clothes. Amazon users are in the same boat. A compatible smart plug can be added to a Routine that triggers when the smart plug's energy usage is below a certain wattage. For the action, you'd set a smart bulb to red or something similar to serve as a visual cue that it's time to fold.
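Here is a rough sketch of that power-threshold trick in Python. The wattage cutoff, the sample readings, and the notify() helper are hypothetical placeholders; real smart plugs report energy data through their own apps or hubs, so treat this as the decision logic only.

```python
# Sketch of the "cycle finished" idea above: a sustained drop below a cutoff
# wattage is treated as the dryer stopping. All numbers and notify() are
# hypothetical stand-ins for whatever your smart plug actually reports.

IDLE_WATTS = 10      # assumed idle draw once the dryer stops
QUIET_SAMPLES = 3    # require several low readings so a brief pause isn't a false alarm

def notify(message: str) -> None:
    print(f"NOTIFY: {message}")  # stand-in for a broadcast or push notification

def watch_dryer(readings_watts: list[float]) -> None:
    was_running = False
    quiet = 0
    for watts in readings_watts:
        if watts > IDLE_WATTS:
            was_running, quiet = True, 0
        elif was_running:
            quiet += 1
            if quiet >= QUIET_SAMPLES:
                notify("Laundry is done drying")
                was_running, quiet = False, 0

watch_dryer([1200, 1180, 1150, 8, 6, 5, 4])  # -> NOTIFY: Laundry is done drying
```

Requiring a few consecutive low readings is the same reason you'd set a conservative wattage threshold in a Routine: it keeps a mid-cycle pause from triggering a premature "come fold the laundry" alert.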

Don't water when it rains

My husband has set up a vast network of internet-connected sprinklers in both the front and back yards using B-Hyve. It's great for easily turning the sprinklers on and off, and for scheduling them in the summertime. But in the winter, we don't need to water the grass as much as we do in the dry summer. So we set up a weather override in the app. If you don't have a smart sprinkler setup, you can fake it. Again, all you need is a smart plug rated for outdoor use, plugged into the sprinkler system. An external temperature sensor can make this routine more accurate.

For Google Home users, you'll rely on seasonal schedules instead of live weather data. Start a new automation with a "time of day" trigger that runs only on weekdays. You will need to manually turn this routine off in winter to prevent it from overwatering the lawn. You can use a third-party service like IFTTT or Zapier to set up something that's based on the actual weather forecast. Alexa requires a similar third-party service to make a Routine with a weather condition.

Apple is more accommodating. In the Apple Home app, you can create a time-of-day automation and then convert it to a Shortcut to add the weather as a condition. You can then set the action to "Get Weather Forecast" and select whether the current weather is "rainy" or whether the chance exceeds a certain percentage. If the forecast calls for rain, the Shortcut doesn't affect the system. Alternatively, if there is no rain, the Shortcut continues and sets the sprinkler's smart plug to "on."  
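As an illustration of that rain-check condition (not Apple's actual Shortcuts syntax), the decision comes down to comparing the forecast's chance of rain against a threshold before switching the sprinkler plug on. In this Python sketch, the rain probability is a hypothetical input you'd normally get from a weather action or service.

```python
# Sketch of the rain-check condition described above. The rain probability is
# a hypothetical input; only the skip-or-water decision is modeled here.

RAIN_THRESHOLD = 0.40  # skip watering if the chance of rain is 40% or higher

def should_water(rain_probability: float) -> bool:
    """Return True only when the forecast is dry enough to justify watering."""
    return rain_probability < RAIN_THRESHOLD

def run_sprinkler_automation(rain_probability: float) -> str:
    if should_water(rain_probability):
        return "sprinkler plug: on"
    return "sprinkler plug: left off (rain expected)"

print(run_sprinkler_automation(0.10))  # -> sprinkler plug: on
print(run_sprinkler_automation(0.70))  # -> sprinkler plug: left off (rain expected)
```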

Play music or soundscapes on command

I work best with one of those binaural tracks on loop in the background. Instead of manually starting these tracks every day, I can have Google do it by tying my soundscapes directly to a routine. You can make one, too, for any media you'd like to listen to.

In the Google Home app, under Automations, create a household routine that runs when you say "Hey Google, it's chill time!" Under Actions, select which lights should turn on and how they should be set up. Then, you can choose a smart speaker or a Chromecast device and set it up to play specific media from Spotify or YouTube.

A screenshot of choosing a speaker to do an action in the Google Home app
You can select speakers to play something very particular when it's working time. Credit: Florence Ion/Lifehacker

The same goes for Apple and Amazon households. Apple Home lets you set a time-of-day automation or a voice command to run on its own. For audio, select the HomePod and set it to play "ambient sounds" or anything from Apple Music. HomePod supports a "Stop Playing After" setting, so you can set it to turn off after an hour or two.

Amazon also relies on a Routine. For the action, select the music and audio option, then specify the source of your noises. Add a second action by selecting "Timers & Alarms" and setting a "Sleep Timer." This ensures that Alexa stops the audio after a set time, like with Apple Home, so you don't have to turn it off manually.

Get an alert if someone leaves the garage door open

The best part of having a smart home is remote access to all the appliances and devices you're worried about leaving on or open when you leave the house. You can do this with your garage without dealing with one of those tricky garage door sensor installations, provided you have a compatible smart home hub.

You can buy a cheap security camera that uses an SD card to monitor the garage door and let you peek in. Or, for around $20, you can buy a small ZigBee-enabled tilt sensor and automate it to check the garage status once the system has detected that everyone is out of the house. In Google Home, you'd attach this sensor to the "Home & Away status." Like the routine we set up for the lights earlier, here you'd choose the tilt sensor to check when "Everyone is Away." If the sensor device status is set to "open," you can select an action to notify you with a custom message. Closing it is still on you, though. If you were the last to leave, you'll need to double back; if someone else was, you can quickly call or text them to turn around and close the door.

Apple and Amazon have the same location-based blueprint. On Apple, you'd set up the sensor along with the "People Leave" automation, then set the condition to "Open" after the last person leaves. Set the Action to send a notification to your device if so. And on Amazon, set a Routine to check for the garage status when you've left the premises.
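The underlying check is the same in every ecosystem, something like this minimal Python sketch. The away flag, the sensor state, and notify() are hypothetical stand-ins for whatever presence data and tilt-sensor status your hub actually exposes.

```python
# Sketch of the garage-door check above: when the house reports that everyone
# is away and the tilt sensor still reads "open," send a reminder. Inputs and
# notify() are hypothetical stand-ins for the hub's own data.

def notify(message: str) -> None:
    print(f"NOTIFY: {message}")  # stand-in for a phone notification

def check_garage(everyone_away: bool, door_state: str) -> None:
    """Alert only in the risky combination: nobody home, door left open."""
    if everyone_away and door_state == "open":
        notify("Garage door is still open and everyone has left")

check_garage(everyone_away=True, door_state="open")    # fires the reminder
check_garage(everyone_away=True, door_state="closed")  # stays quiet
```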

Oh look, yet another Starship clone has popped up in China

15 December 2025 at 11:44

Every other week, it seems, a new Chinese launch company pops up with a rocket design and a plan to reach orbit within a few years. For a long time, the majority of these companies revealed designs that looked a lot like SpaceX’s Falcon 9 rocket.

The first of these copycats, the medium-lift Zhuque-3 rocket built by LandSpace, launched earlier this month. Its primary mission was nominal, but the Zhuque-3 rocket failed its landing attempt, which is understandable for a first flight. Doubtless there will be more Chinese Falcon 9-like rockets making their debut in the near future.

However, over the last year, there has been a distinct change in announcements from China when it comes to new launch technology. Just as SpaceX is seeking to transition from its workhorse Falcon 9 rocket—which has now been flying for a decade and a half—to the fully reusable Starship design, so too are Chinese companies modifying their visions.

Read full article

Comments

© Beijing Leading Rocket Technology Co.

iFixit's New AI Assistant Can Help You Fix Almost Anything

15 December 2025 at 11:30

Generative AI has advanced to the stage where you can ask bots such as ChatGPT or Gemini questions about almost anything, and get reasonable-sounding responses—and now renowned gadget repair site iFixit has joined the party with an AI assistant of its own, ready and willing to solve any of your hardware problems.

While you can already ask general-purpose chatbots for advice on how to repair a phone screen or diagnose a problem with a car engine, there's always the question of how accurate the AI replies will be. With FixBot, iFixit is trying to minimize mistakes by drawing on its vast library of verified repair guides, written by experts and users.

That's certainly reassuring: I don't want to waste time and money replacing a broken phone screen with a new display that's the wrong size or shape. And using a conversational AI bot to fix gadget problems is often going to feel like a more natural and intuitive experience than a Google search. As iFixit puts it, the bot "does what a good expert does" in guiding you to the right solutions.

How FixBot improves accuracy

The iFixit website has been around since 2003—practically ancient times, considering the rapid evolution of modern technology. The iFixit team has always prided itself on detailed, thorough, tested guides to repairing devices, and all of that information can now be tapped into by the FixBot tool.

iFixit says the bot is trained on more than 125,000 repair guides written by humans who have worked through the steps involved, as well as the question and answer forums attached to the site, and the "huge cache" of PDF manuals that iFixit has accumulated over the years that it's been in business.

iFixit FixBot
FixBot uses an intuitive chatbot interface. Credit: Lifehacker

That gives me a lot more confidence that FixBot will get its answers right, compared to whatever ChatGPT or Gemini might tell me. iFixit hasn't said what AI models are powering the bot—only that they've been "hand-picked"—and there's also a custom-built search engine included to select data sources from the repair archives on the site.

"Every answer starts with a search for guides, parts, and repairs that worked," according to the iFixit team, and that conversational approach you'll recognize from other AI bots is here too: If you need clarification on something, then you can ask a follow-up question. In the same way, if the AI bot needs more information or specifics, it will ask you.

It's designed to be fast—responses should be returned in seconds—and the iFixit team also talks about an "evaluation harness" that tests the FixBot responses against thousands of real repair questions posed and answered by humans. That extra level of fact-checking should reduce the number of false answers you get.
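
iFixit hasn't published how the pipeline is wired together, but "every answer starts with a search" is the classic retrieval-augmented pattern: query the guide archive first, then let the model answer only from what was retrieved, and ask a clarifying question if nothing relevant comes back. The sketch below is just an illustration of that idea; the Guide class and all three helper functions are hypothetical stand-ins, not anything from iFixit.

```python
from dataclasses import dataclass

@dataclass
class Guide:
    title: str
    url: str
    text: str

# The three helpers below are hypothetical stand-ins, not iFixit's actual code.
def search_guides(query: str, limit: int) -> list[Guide]:
    """Would hit a search engine over the guide, Q&A, and PDF archive."""
    raise NotImplementedError

def ask_user_for_more_detail(question: str) -> str:
    """Would send a clarifying question back to the user."""
    raise NotImplementedError

def call_language_model(prompt: str) -> str:
    """Would call whichever model sits behind the bot."""
    raise NotImplementedError

def answer_repair_question(question: str, top_k: int = 5) -> str:
    """Retrieval-augmented answering: search first, then answer only from what was found."""
    guides = search_guides(question, limit=top_k)
    if not guides:
        return ask_user_for_more_detail(question)  # ask for specifics instead of guessing
    context = "\n\n".join(f"{g.title} ({g.url})\n{g.text}" for g in guides)
    prompt = (
        "Answer the repair question using only the guides below, and cite the "
        f"guide URLs you relied on.\n\nGuides:\n{context}\n\nQuestion: {question}"
    )
    return call_language_model(prompt)
```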

However, it's not perfect, as iFixit admits: "FixBot is an AI, and AI sometimes gets things wrong." Whether or not those mistakes will be easy to spot remains to be seen, but users of the chatbot are being encouraged to upload their own documents and repair solutions to fix gaps in the knowledge that FixBot is drawing on.

Using FixBot to diagnose problems

iFixit says the FixBot is going to be free for everyone to use, for a limited time. At some point, there will be a free version with limitations, and paid tiers with the full set of features—including support for voice input and document uploads. You can give it a try for yourself now on the iFixit website.

I was reluctant to deliberately break one of my devices just so FixBot could help me repair it, but I did test it with a few issues I've had (and sorted out) in the past. One was a completely dead SSD stopping my Windows PC from booting: I started off with a vague description of the computer not starting up properly, and the bot did a good job of narrowing down what the problem was and suggesting fixes.

iFixit FixBot
FixBot will refer back to articles and forum posts. Credit: Lifehacker

It went through everything I had already tried when the problem happened, including trying System Repair and troubleshooting the issue via the Command Prompt. Eventually, via a few links to repair guides on the iFixit website, it concluded that my SSD had been corrupted by a power cut, which was indeed what had happened.

I also tested the bot with a more general question about a phone restarting at random times—something one of my old handsets used to do. Again, the responses were accurate, and the troubleshooting steps I was asked to try made a lot of sense. I was also directed to the iFixit guide for the phone model.

iFixit FixBot
FixBot's answers are generally accurate and intelligent. Credit: Lifehacker

The bot is as enthusiastic as a lot of the others available now (I was regularly praised for the "excellent information" I was providing), and does appear to know what it's talking about. This is one of the scenarios where generative AI shows its worth, in distilling a large amount of information based on natural language prompts.

There's definitely potential here: Compare this approach to having to sift through dozens of forum posts, web articles, and documents manually. However, there's always that nagging sense that AI makes mistakes, as the on-screen FixBot disclaimer says. I'd recommend checking other sources before doing anything drastic with your hardware troubleshooting.

Attackers Are Spreading Malware Through ChatGPT

15 December 2025 at 11:00

You (hopefully) know by now that you can't take everything AI tells you at face value. Large language models (LLMs) sometimes provide incorrect information, and threat actors are now using paid search ads on Google to spread conversations with ChatGPT and Grok that appear to provide tech support instructions but actually direct macOS users to install infostealing malware on their devices.

The campaign is a variation on the ClickFix attack, which often uses CAPTCHA prompts or fake error messages to trick targets into executing malicious commands. But in this case, the instructions are disguised as helpful troubleshooting guides on legitimate AI platforms.

How attackers are using ChatGPT

Kaspersky details a campaign built around fake instructions for installing ChatGPT Atlas on macOS. If a user searches "chatgpt atlas" to find a guide, the first sponsored result is a link to chatgpt.com with the page title "ChatGPT™ Atlas for macOS – Download ChatGPT Atlas for Mac." If you click through, you'll land on the official ChatGPT site and find a series of instructions for (supposedly) installing Atlas.

However, the page is a copy of a conversation between an anonymous user and the AI—which can be shared publicly—that is actually a malware installation guide. The chat directs you to copy, paste, and execute a command in your Mac's Terminal and grant all permissions, which hands over access to the AMOS (Atomic macOS Stealer) infostealer.

A further investigation from Huntress showed similarly poisoned results via both ChatGPT and Grok using more general troubleshooting queries like "how to delete system data on Mac" and "clear disk space on macOS."

AMOS targets macOS, gaining root-level privileges and allowing attackers to execute commands, log keystrokes, and deliver additional payloads. BleepingComputer notes that the infostealer also targets cryptocurrency wallets, browser data (including cookies, saved passwords, and autofill data), macOS Keychain data, and files on the filesystem.

Don't trust every command AI generates

If you're troubleshooting a tech issue, carefully vet any instructions you find online. Threat actors often use sponsored search results, as well as social media platforms, to spread instructions that are actually ClickFix attacks. Never follow guidance you don't understand, and know that if a guide asks you to execute commands on your device using PowerShell or Terminal to "fix" a problem, there's a high likelihood that it's malicious—even if it comes from a search engine or LLM you've used and trusted in the past.
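
One low-tech safeguard is to sanity-check anything you're about to paste into Terminal before you run it. The sketch below is purely illustrative: the pattern list is my own assumption about common ClickFix tells (piping a download straight into a shell, decoding hidden payloads, stripping quarantine flags), not an exhaustive or authoritative detector.

```python
import re

# Patterns often seen in ClickFix-style payloads (an illustrative, incomplete list).
SUSPICIOUS_PATTERNS = [
    r"curl\s+[^|;]*\|\s*(bash|sh|zsh)",   # download piped directly into a shell
    r"base64\s+(-d|--decode)",            # decoding an obfuscated payload
    r"osascript\s+-e",                    # scripted password or permission prompts
    r"xattr\s+-c",                        # clearing quarantine attributes
    r"sudo\s+",                           # anything demanding admin rights
]

def looks_suspicious(command: str) -> list[str]:
    """Return the patterns a pasted command matches; an empty list means none matched."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, command)]

if __name__ == "__main__":
    cmd = input("Paste the command you were told to run: ")
    hits = looks_suspicious(cmd)
    if hits:
        print("Do NOT run this without understanding it. Matched:", hits)
    else:
        print("No obvious red flags, but that is not a guarantee of safety.")
```

A clean result doesn't mean a command is safe; a match just means stop and research it before running anything.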

Of course, you can potentially turn the attack around by asking ChatGPT (in a new conversation) if the instructions are safe to follow. According to Kaspersky, the AI will tell you that they aren't.

How to Spot a Browser-in-the-Browser Phishing Attack

15 December 2025 at 10:30

Between the sheer number and the increasing sophistication of phishing campaigns, seeing should not automatically be believing when browsing online. One particularly sneaky scam is a browser-in-the-browser (BitB) attack, in which threat actors create a fake browser window that looks like a trusted single sign-on (SSO) login page within a real browser session.

Because we use SSO to access many of our online accounts, we may not think twice before entering usernames and passwords on these spoofed pages. Cybercriminals are counting on this to steal user credentials.

How a browser-in-the-browser attack works

Rather than redirecting users to a spoofed website, threat actors running a BitB attack create a fake pop-up within the page you're already on (which may either be set up for the attack or compromised in some way). Using HTML, CSS, and JavaScript, they're able to design a login window that looks exactly like the real one, right down to the lock icon and URL in the pop-up's address bar.

These fake login windows typically appear in a seamless fashion, such as after a click or redirect you're expecting to lead to SSO. Obviously, entering your credentials hands them directly to the attackers, who can either use or sell them.

Fraudulent pop-ups often imitate SSO providers such as Google, Apple, and Microsoft, though they may exploit any login portal. Earlier this year, researchers at Silent Push identified a BitB phishing campaign targeting Steam users, specifically those playing Counter-Strike 2. Gamers saw a fake browser pop-up window displaying the URL of the real Steam portal, making them more likely to enter their credentials without suspicion. The attackers also featured the likeness of esports team NAVI to lend credibility.

Signs of a BitB scam

Because threat actors are able to so closely imitate trusted sign-on pages, including using the real domain in the address bar, a visual inspection may not be enough to catch the fraud. Instead, you need to interact with the window in some way.

In many cases, a genuine SSO pop-up can be dragged around and away from the browser page it appears on top of, so you can first try to move it elsewhere on your screen. However, some SSO dialogs are static, so if you can't drag it, try to highlight the URL or click the padlock icon to show certificate details. If these elements are fake, you won't be able to interact with them at all because the window itself is just an image.

This is also an excellent reason to use a secure password manager to fill your credentials instead of entering them manually. A password manager will work only on the legitimate domain. If it doesn't autofill, don't automatically override it—check to ensure the pop-up is real.
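
That autofill behavior comes down to strict origin matching: credentials saved for one hostname are only offered when the page's real hostname matches, and a BitB "window" is just an element drawn on the attacker's page, so the real hostname never matches. Here's a toy illustration of the idea (real password managers typically match on the registrable domain and apply more nuance than this):

```python
from urllib.parse import urlsplit

def should_autofill(saved_origin: str, current_url: str) -> bool:
    """Offer saved credentials only when the page's real origin matches.
    A fake BitB 'window' is part of the attacker's page, so the browser
    still reports the attacker's hostname here and the check fails."""
    saved = urlsplit(saved_origin)
    current = urlsplit(current_url)
    return current.scheme == "https" and saved.hostname == current.hostname

# The spoofed pop-up only draws accounts.google.com in its fake address bar.
print(should_autofill("https://accounts.google.com", "https://accounts.google.com/signin"))  # True
print(should_autofill("https://accounts.google.com", "https://evil.example/fake-sso"))       # False
```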

You should also have a strong form of multi-factor authentication (MFA) enabled wherever possible, so even if your username and password are somehow compromised, attackers won't have the additional factor needed to actually access your account. Note that hackers can still phish some forms of authentication—physical security keys, biometrics, and passkeys are the most secure options.

Roomba maker iRobot swept into bankruptcy

Roomba maker iRobot has filed for bankruptcy and will be taken over by its Chinese supplier after the company that popularized the robot vacuum cleaner fell under the weight of competition from cheaper rivals.

The US-listed group on Sunday said it had filed for Chapter 11 bankruptcy in Delaware as part of a restructuring agreement with Shenzhen-based Picea Robotics, its lender and primary supplier, which will acquire all of iRobot’s shares.

The deal comes nearly two years after a proposed $1.5 billion acquisition by Amazon fell through over competition concerns from EU regulators.

Read full article

Comments

© Onfokus

In A.I. Boom, Venture Capital Firms Are Raising Loads More Money

15 December 2025 at 12:50
Lightspeed Venture Partners, a Silicon Valley venture firm, has amassed more than $9 billion to invest in artificial intelligence. That is its biggest haul.

© Gabriela Hasbun for The New York Times

Law enforcement thwarts terror plot to target L.A. businesses on New Year’s

15 December 2025 at 12:29
Law enforcement officials have uncovered a suspected New Year's Eve terror plot targeting Los Angeles businesses. NBC News' Kelly O'Donnell explains what this terror group was planning and how law enforcement disrupted its plans.


Brian Walshe found guilty of killing his wife and dismembering her body

15 December 2025 at 11:50
A Massachusetts jury found Brian Walshe guilty of first-degree murder Monday, siding with prosecutors who accused the convicted fraudster of killing his wife and dismembering her body three years ago.

© via NBC Boston

Brian Walshe during his trial in Dedham, Mass., on Dec. 9.

© Pool via AP file

Brian Walshe during his trial in Dedham, Mass., on Dec. 9.

Rob Reiner's son Nick Reiner arrested in connection with parents' deaths

15 December 2025 at 11:24
Nick Reiner, the son of movie director Rob Reiner, has been arrested in connection with the deaths of his parents, according to law enforcement. NBC News' Chloe Melas reports on how Hollywood is reacting to this family tragedy.


How Tech’s Biggest Companies Are Offloading the Risks of the A.I. Boom

15 December 2025 at 10:49
The data centers used for work on artificial intelligence can cost tens of billions to build. Tech giants are finding ways to avoid being on the hook for some of those costs.

© Christie Hemm Klok for The New York Times

Meta is investing billions in new data centers, like one being constructed in Eagle Mountain, Utah.

The fast and the future-focused are revolutionizing motorsport

When the ABB FIA Formula E World Championship launched its first race through Beijing’s Olympic Park in 2014, the idea of all-electric motorsport still bordered on experimental. Batteries couldn’t yet last a full race, and drivers had to switch cars mid-competition. Just over a decade later, Formula E has evolved into a global entertainment brand broadcast in 150 countries, driving both technological innovation and cultural change in sport.  

“Gen4, that’s to come next year,” says Dan Cherowbrier, Formula E’s chief technology and information officer. “You will see a really quite impressive car that starts us to question whether EV is there. It’s actually faster—it’s actually more than traditional ICE [internal combustion engines].” 

That acceleration isn’t just happening on the track. Formula E’s digital transformation, powered by its partnership with Infosys, is redefining what it means to be a fan. “It’s a movement to make motor sport accessible and exciting for the new generation,” says Rohit Agnihotri, principal technologist at Infosys. 

From real-time leaderboards and predictive tools to personalized storylines that adapt to what individual fans care most about—whether it’s a driver rivalry or battery performance—Formula E and Infosys are using AI-powered platforms to create fan experiences as dynamic as the races themselves. “Technology is not just about meeting expectations; it’s elevating the entire fan experience and making the sport more inclusive,” says Agnihotri.  

AI is also transforming how the organization itself operates. “Historically, we would be going around the company, banging on everyone’s doors and dragging them towards technology, making them use systems, making them move things to the cloud,” Cherowbrier notes. “What AI has done is it’s turned that around on its head, and we now have people turning up, banging on our door because they want to use this tool, they want to use that tool.” 

As audiences diversify and expectations evolve, Formula E is also a case study in sustainable innovation. Machine learning tools now help determine the most carbon-optimal way to ship batteries across continents, while remote broadcast production has sharply reduced travel emissions and democratized the company’s workforce. These advances show how digital intelligence can expand reach without deepening carbon footprints. 

For Cherowbrier, this convergence of sport, sustainability, and technology is just the beginning. With its data-driven approach to performance, experience, and impact, Formula E is offering a glimpse into how entertainment, innovation, and environmental responsibility can move forward in tandem. 

“Our goal is clear,” says Agnihotri. “Help Formula E be the most digital and sustainable motor sport in the world. The future is electric, and with AI, it’s more engaging than ever.” 

This episode of Business Lab is produced in partnership with Infosys. 

Full Transcript:  

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab, and into the marketplace.  

The ABB FIA Formula E World Championship, the world’s first all-electric racing series, made its debut in the grounds of the Olympic Park in Beijing in 2014. A little more than 10 years later, it’s a global entertainment brand with 10 teams, 20 drivers, and broadcasts in 150 countries. Technology is central to how Formula E is navigating that scale and to how it’s delivering more powerful personalized experiences.  

Two words for you: elevated fandom.  

My guests today are Rohit Agnihotri, principal technologist at Infosys, and Dan Cherowbrier, CTIO of Formula E.  

This episode is produced in partnership with Infosys.  

Welcome, Rohit and Dan. 

Dan Cherowbrier: Hi. Thanks for having us. 

Megan: Dan, as I mentioned there, the first season of the ABB FIA Formula E World Championship launched in 2014. Can you talk us through how the first all-electric motor sport has evolved in the last decade? How has it changed in terms of its scale, the markets it operates in, and also, its audiences, of course? 

Dan: When Formula E launched back in 2014, there were hardly any domestic EVs on the road. And probably if you’re from London, the ones you remember are the hybrid Priuses; that was what we knew of really. And at the time, they were unable to get a battery big enough for a car to do a full race. So the first generation of car, the first couple of seasons, the driver had to do a pit stop midway through the race, get out of one car, and get in another car, and then carry on, which sounds almost farcical now, but it’s what you had to do then to drive innovation, is to do that in order to go to the next stage. 

Then in Gen2, that came up four years later, they had a battery big enough to start full races and start to actually make it a really good sport. Gen3, they’re going for some real speeds and making it happen. Gen4, that’s to come next year, you’ll see acceleration in line with Formula One. I’ve been fortunate enough to see some of the testing. You will see a really quite impressive car that starts us to question whether EV is there. It’s actually faster, it’s actually more than traditional ICE. 

That's the tech of the car. But then, if you also look at the sport and how people have come to it and the fans and the demographic of the fans, a lot has changed in the last 11 years. We're about to enter season 12. In the last 11 years, we've had a complete democratization of how people access content and what people want from content. And there's a new generation of fan coming through. This new generation of fan is younger. They're more gender diverse. We have much closer to 50-50 representation in our fan base. And they want things personalized, and they're very demanding about how they want it and the experience they expect. No longer are you just able to give them one race and everybody watches the same thing. We need to make things for them. You see that sort of change that's come through in the last 11 years. 

Megan: It’s a huge amount of change in just over a decade, isn’t it? To navigate. And I wonder, Rohit, what was the strategic plan for Infosys when associating with Formula E? What did Infosys see in partnering with such a young sport? 

Rohit: Yeah. That’s a great question, Megan. When we looked at Formula E, we didn’t just see a racing championship. We saw the future. A sport, that’s electric, sustainable, and digital first. That’s exactly where Infosys wants to be, at the intersection of technology, innovation, and purpose. Our plan has three big goals. First, grow the fan base. Formula E wants to reach 500 million fans by 2030. That is not just a number. It’s a movement to make motor sport accessible and exciting for the new generation. To make that happen, we are building an AI-powered platform that gives personalized content to the fans, so that every fan feels connected and valued. Imagine a fan in Tokyo getting race insights tailored for their favorite driver, while another in London gets a sustainability story that matters to him. That’s the level of personalization we are aiming for. 

Second, bring technology innovation. We have already launched the Stats Centre, which turns race data into interactive stories. And soon, Race Centre will take this to the next level with real-time leaderboards for the race track, overtakes, attack mode timelines, and even AI-generated live commentary. Fans will not just watch, they will interact, predict podium finishes, and share their views globally. And third, support sustainability. Formula E is already net-zero, but now their goal is to cut carbon by 45% by 2030. We'll be enabling that through AI-driven sustainability data management, tracking every watt of energy and every logistics decision, and modeling scenarios to make racing even greener. Partnering with a young sport gives us a chance to shape its digital future and show how technology can make racing exciting and responsible. For us, Formula E is not just a sport, it's a statement about where the world is headed. 

Megan: Fantastic. 500 million fans, that’s a huge number, isn’t it? And with more scale often comes a kind of greater expectation. Dan, I know you touched on this a little in your first question, but what is it that your fans now really want from their interactions? Can you talk a bit more about what experiences they’re looking for? And also, how complex that really is to deliver that as well? 

Dan: I think a really telling thing about the modern day fan is I probably can’t tell you what they want from their experiences, because it’s individual and it’s unique for each of them. 

Megan: Of course. 

Dan: And it’s changing and it’s changing so fast. What somebody wants this month is going to be different from what they want in a couple of months’ time. And we’re having to learn to adapt to that. My CTO title, we often put focus on the technology in the middle of it. That’s what the T is. Actually, if you think about it, it’s continual transformation officer. You are constantly trying to change what you deliver and how you deliver it. Because if fans come through, they find new experiences, they find that in other sports. Sometimes not in sports, they find it outside, and then they’re coming in, and they expect that from you. So how can we make them more part of the sport, more personalized experience, get to know the athletes and the personalities and the characters within it? We’re a very technology centric sport. A lot of motor sport is, but really, people want to see people, right? And even when it’s technology, they want to see people interacting with technology, and it’s how do you get that out to show people. 

Megan: Yeah, it’s no mean feat. Rohit, you’ve worked with brands on delivering these sort of fan experiences across different sports. Is motor sports perhaps more complicated than others, given that fans watch racing for different reasons than just a win? They could be focused on team dynamics, a particular driver, the way the engine is built, and so on and so forth. How does motor sports compare and how important is it therefore, that Formula E has embraced technology to manage expectations? 

Rohit: Yeah, that's an interesting point. Motor sports are definitely more complex than other sports. Fans don't just care about who wins, they care about how: some follow team strategies, others love driver rivalries, and many are fascinated by the car technology. Formula E adds another layer: sustainability and electric innovation. This makes personalization really important. Fans want more than results. They want stories and insights. Formula E understood this early and embraced technology. 

Think about the data behind a single race: lap times, energy usage, battery performance, attack mode activation, pit strategies. It's a lot of data. If you just show the raw numbers, it's overwhelming. But with Infosys Topaz, we turn that into simple and engaging stories. Fans can see how a driver fought back from 10th place to finish on the podium, or how a team managed energy better to gain an edge. And for new fans, we are adding explainer videos and interactive tools in the Race Centre, so that they can learn about the sport easily. This is important because Formula E is still young, and many fans are discovering it for the first time. Technology is not just about meeting expectations; it's elevating the entire fan experience and making the sport more inclusive. 

Megan: There’s an awful lot going on there. What are some of the other ways that Formula E has already put generative AI and other emerging technologies to use? Dan, when we’ve spoken about the demand for more personalized experiences, for example. 

Dan: I see the implementation of AI for us in three areas. We have AI within the sport. That's in our DNA of the sport. Now, each team is using that, but how can we use that as a championship as well? How do we make it a competitive landscape? Now, we have AI that is in the fan-facing product. That's what we're working on heavily with Infosys, but we also have it in our broadcast product. As an example, you might have heard of a super slow-mo camera. A super slow-mo camera is basically made by taking three cameras and having them in exactly the same place so that you get three times the frame rate, and then you can do a slow-motion shot from that. And they used to be really expensive, quite bulky cameras to put in. We are now using AI to take a traditional camera and interpolate between two frames to make it into a super slow image, and you wouldn't really know the difference. Now, the joy of that is it means every camera can now be a super slow-mo camera. 

Megan: Wow. 

Dan: In other ways, we use it a little bit in our graphics products, and we iterate and we use it for things like showing driver audio. When the driver is speaking to his engineer or her engineer in the garage, we show that text now on screen. We do that using AI. We use AI to pick out the difference between the driver and another driver and the team engineer or the team principal and show that in a really good way. 

And we wouldn’t be able to do that. We’re not big enough to have a team of 24 people on stenographers typing. We have to use AI to be able to do that. That’s what’s really helped us grow. And then the last one is, how we use it in our business. Because ultimately, as we’ve got the fans, we’ve got the sport, but we also are running a business and we have to pick up these racetracks and move them around the world, and we have all these staff who have to get places. We have insurance who has to do all that kind of stuff, and we use it heavily in that area, particularly when it comes to what has a carbon impact for us. 

So things like our freight and our travel. And we are using the AI tools to tell us, for a battery for instance, should we fly it? Should we send it by sea freight? Should we send it by road freight? Or should we just have lots of them? And that sort of depends. Now, a battery, if it was heavy, you'd think you probably wouldn't fly it. But actually, because of the materials in it, because of the source materials that make it, we're better off flying it. We've used AI to work through all those different machinations of things that would be too difficult to do at speed for a person. 

Megan: Well, sounds like there’s some fascinating things going on. I mean, of course, for a global brand, there is also the challenge of working in different markets. You mentioned moving everything around the world there. Each market with its own legal frameworks around data privacy, AI. How has technology also helped you navigate all of that, Dan? 

Dan: The other really interesting thing about AI is… I’ve worked in technology leadership roles for some time now. And historically, we would be going around the company, banging on everyone’s doors and dragging them towards technology, making them use systems, making them move things to the cloud and things like that. What AI has done is it’s turned that around on its head, and we now have people turning up, banging on our door because they want to use this tool, they want to use that tool. And we’re trying to accommodate all of that and it’s a great pleasure to see people that are so keen. AI is driving the tech adoption in general, which really helps the business. 

Megan: Dan, as the world’s first all-electric motor sport series, sustainability is obviously a real cornerstone of what Formula E is looking to do. Can you share with us how technology is helping you to achieve some of your ambitions when it comes to sustainability? 

Dan: We've been the only sport with a certified net-zero pathway, and we have to stay on that path. It's a really core fundamental part of our DNA. I sit on our management team here. There is a sustainability VP that sits there as well, who checks and challenges everything we do. She looks at the data centers we use, why we use them, why we've made the decisions we've made, to make sure that we're making them all for the right reasons and the right ways. We specifically embed technology in a couple of ways. One is, we mentioned a little bit earlier, on our freight. Formula E's freight for the whole championship is probably akin to one Formula One team, but it's still by far our biggest contributor to our impact. So we look at how we can make sure that we've refined that to get the minimum amount of air freight and sea freight, and use local wherever we can. That's also part of our pledge about investing in the communities that we race in. 

The second then is about our staff travel. And we’ve done a really big piece of work over the last four to five years, partly accelerated through the covid-19 era actually, of doing remote working and remote TV production. Used to be traditionally, you would fly a hundred plus people out to racetracks, and then they would make the television all on site in trucks, and then they would be satellite distributed out of the venue. Now, what we do is we put in some internet connections, dual and diverse internet connections, and we stream every single camera back. 

Megan: Right. 

Dan: That means on site, we only need camera operators. Some of them, actually, are remotely operated anyway, but we need camera operators, and then some engineering teams to just keep everything running. And then back in our home base, which is in London, in the UK, we have our remote production center where we layer on direction, graphics, audio, replay, team radio, all of those bits that bring the color and make the program, and add to that significant body of people. We do that all remotely now. Really interesting, actually: that's the carbon sustainability story, but there is a further ESG piece that comes out of it, one we hadn't really anticipated when we went into it, and that's the diversity in our workforce. We were discovering that we had quite a young, equally diverse workforce until around the age of 30. And then past that point, we were finding we were losing women, and that's really because they didn't want to travel. 

Megan: Right. 

Dan: And that’s the age of people starting to have children, and things were starting to change. And then we had some men that were traveling instead, and they weren’t seeing their children and it was sort of dividing it unnecessarily. But by going remote, by having so much of our people able to remotely… Or even if they do have to travel, they’re not traveling every single week. They’re now doing that one in three. They’re able to maintain the careers and the jobs they want to do, whilst having a family lifestyle. And it also just makes a better product by having people in that environment. 

Megan: That's such an interesting perspective, isn't it? It's where environmental sustainability intersects with social sustainability. Rohit, can you share any of the ways that Infosys has worked with Formula E, in terms of the role of technology, as we say, in furthering those ambitions around sustainability? 

Rohit: Yeah. Infosys understands that sustainability is at the heart of Formula E, and it’s a big part of why this partnership matters. Formula E is already net-zero certified, but now, they have an ambitious goal to cut carbon emissions by 45%. Infosys is helping in two ways. First, we have built AI-powered sustainability data tools that make carbon reporting accurate and traceable. Every watt of energy, every logistic decision, every material use can be tracked. Second, we use predictive analytics to model scenarios, like how changing race logistics or battery technology impact emissions so Formula E can make smarter, greener decisions. For us, it’s about turning sustainability from a report into an action plan, and making Formula E a global leader in green motor sport. 

Megan: And in April 2025, Formula E working with Infosys launched its Stats Centre, which provides fans with interactive access to the performances of their drivers and teams, key milestones and narratives. I know you touched on this before, but I wonder if you could tell us a bit more about the design of that platform, Rohit, and how it fits into Formula E’s wider plans to personalize that fan experience? 

Rohit: Sure. The Stats Centre was a big step forward. Before this, fans had access to basic statistics on the website and the mobile app, but nothing told the full story, and we wanted to change that. Built on Infosys Topaz, the Stats Centre uses AI to turn race data into interactive stories. Fans can explore key stat cards that adapt to race timelines, and even chat with an AI companion to get instant answers. It's like having a personal race analyst at your fingertips. And we are going further. Next year, we'll launch Race Centre. It'll have live data boards, 2D track maps showing every driver's position, overtakes, attack mode timelines, and AI-generated commentary. Fans can predict podium finishes, vote for the driver of the race, and share their views on social media. Plus, we are adding video explainers for new fans, covering rules, strategies, and car technology. Our goal is simple: make every moment exciting and easy to understand. Whether you are a hardcore fan or someone watching Formula E for the first time, you'll feel connected and informed. 

Megan: Fantastic. Sounds brilliant. And as you’ve explained, Dan, leveraging data and AI can come with these huge benefits when it comes to the depth of fan experience that you can deliver, but it can also expose you to some challenges. How are you navigating those at Formula E? 

Dan: The AI generation has presented two significant challenges to us. One is that traditional SEO, traditional search engine optimization, goes out the window. Right? You are now looking at how do we design and build our systems and how do we populate them with the right content and the right data, so that the engines are picking it up correctly and displaying it? The way that the foundational models are built and the speed and the cadence of which they’re updated, means quite often… We’re a very fast-changing organization. We’re a fast-changing product. Often, the models don’t keep up. And that’s because they are a point in time when they were trained. And that’s something that the big organizations, the big tech organizations will fix with time. But for now, what we have to do is we have to learn about how we can present our fan-facing, web-facing products to show that correctly. That’s all about having really accurate first-party content, effectively earned media. That’s the piece we need to do. 

Then the second sort of challenge is sadly, whilst these tools are available to all of us, and we are using them effectively, so is another part of the technology landscape, and that is basically the cybersecurity threat they come with. If you look at the speed, cadence, and severity of hacks that are happening now, it's just growing and growing and growing, and that's because they have access to these tools too. And we're having to really up our game and professionalize. And that's really hard for an innovative organization. You don't want to shut everything down. You don't want to protect everything too much, because you want people to be able to try new things. Right? If I block everything to only things that the IT team had heard of, we'd never get anything new in, and it's about getting that balance right. 

Megan: Right. 

Dan: Rohit, you probably have similar experiences? 

Megan: How has Infosys worked with Formula E to help it navigate some of that, Rohit? 

Rohit: Yeah. Infosys has helped Formula E tackle some of the challenges in three key ways: simplifying complex race data into engaging fan experiences through platforms like the Stats Centre, building a secure and scalable cloud data backbone for real-time insights, and enabling sustainability goals with AI-driven carbon tracking and predictive analytics. These solutions make the sport more interactive, more digital, and more responsible. 

Megan: Fantastic. I wondered if we could close with a bit of a future forward look. Can you share with us any innovations on the horizon at Formula E that you are really excited about, Dan? 

Dan: We have mentioned the Race Centre is going to launch in the next couple of months, but the really exciting thing for me is we’ve got an amazing season ahead of us. It’s the last season of our Gen3 car, with 10 really exciting teams on the grid. We are going at speed with our tech innovation roadmap and what our fans want. And we’re building up towards our Gen4 car, which will come out for season 13 in a year’s time. That will get launched in 2026, and I think it will be a game changer in how people perceive electric motor sport and electric cars in general. 

Megan: It sounds like there’s all sorts of exciting things going on. And Rohit too, what’s coming up via this partnership that you are really looking forward to sharing with everyone? 

Rohit: Two things stand out for me. First is the AI-powered fan data platform that I've already spoken about. Second is the launch of Race Centre. It's going to change how fans experience live racing. And beyond fan engagement, we are helping Formula E lead in sustainability with AI tools that model carbon impact and optimize logistics. This means every race can be smarter and greener. Our goal is clear: help Formula E be the most digital and sustainable motor sport in the world. The future is electric, and with AI, it's more engaging than ever. 

Megan: Fantastic. Thank you so much, both. That was Rohit Agnihotri, principal technologist at Infosys, and Dan Cherowbrier, CTIO of Formula E, whom I spoke with from Brighton, England.  

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.  

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening. 

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: introducing the AI Hype Correction package

15 December 2025 at 08:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the AI Hype Correction package

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.

AI is changing our world, but we don’t yet know the real winners, or how this will all shake out.

After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact.

Here, at the end of 2025, we’re starting the post-hype phase. This new package of stories, called Hype Correction, is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next.

Here’s a sneak peek at what you can expect:

+ An introduction to four ways of thinking about the great AI hype correction of 2025.

+  While it’s safe to say we’re definitely in an AI bubble right now, what’s less clear is what it really looks like—and what comes after it pops. Read the full story.

+ Why so many of the more outlandish proclamations about AI doing the rounds these days can be traced back to OpenAI’s Sam Altman. Read the full story.

+ It’s a weird time to be an AI doomer. But they’re not giving up.

+ AI coding is now everywhere—but despite the billions of dollars being poured into improving AI models’ coding abilities, not everyone is convinced. Read the full story.

+ If we really want to start finding new kinds of materials faster, AI materials discovery needs to make it out of the lab and move into the real world. Read the full story.

+ Why reports of AI’s potential to replace trained human lawyers are greatly exaggerated.

+ Dr. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, explains why the generative AI hype train is distracting us from what AI actually is and what it can—and crucially, cannot—do. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 iRobot has filed for bankruptcy
The Roomba maker is considering handing over control to its main Chinese supplier. (Bloomberg $)
+ A proposed Amazon acquisition fell through close to two years ago. (FT $)
+ How the company lost its way. (TechCrunch)
+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)

2 Meta’s 2025 has been a total rollercoaster ride
From its controversial AI team to Mark Zuckerberg’s newfound appreciation for masculine energy. (Insider $)

3 The Trump administration is giving the crypto industry a much easier ride
It’s dismissed crypto lawsuits involving many firms with financial ties to Trump. (NYT $)
+ Celebrities are feeling emboldened to flog crypto once again. (The Guardian)
+ A bitcoin investor wants to set up a crypto libertarian community in the Caribbean. (FT $)

4 There’s a new weight-loss drug in town
And people are already taking it, even though it’s unapproved. (Wired $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

5 Chinese billionaires are having dozens of US-born surrogate babies
An entire industry has sprung up to support them. (WSJ $)
+ A controversial Chinese CRISPR scientist is still hopeful about embryo gene editing. (MIT Technology Review)

6 Trump’s “big beautiful bill” funding hinges on states integrating AI into healthcare
Experts fear it’ll be used as a cost-cutting measure, even if it doesn’t work. (The Guardian)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

7 Extreme rainfall is wreaking havoc in the desert
Oman and the UAE are unaccustomed to increasingly common torrential downpours. (WP $)

8 Data centers are being built in countries that are too hot for them
Which makes it a lot harder to cool them sufficiently. (Rest of World)

9 Why AI image generators are getting deliberately worse
Their makers are pursuing realism—not that overly polished, Uncanny Valley look. (The Verge)
+ Inside the AI attention economy wars. (NY Mag $)

10 How a tiny Swedish city became a major video game hub
Skövde has formed an unlikely community of cutting-edge developers. (The Guardian)
+ Google DeepMind is using Gemini to train agents inside one of Skövde’s biggest franchises. (MIT Technology Review)

Quote of the day

“They don’t care about the games. They don’t care about the art. They just want their money.”

—Anna C Webster, chair of the freelancing committee of the United Videogame Workers union, tells the Guardian why their members are protesting the prestigious 2025 Game Awards in the wake of major layoffs.

One more thing

Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences.

There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+  Here’s how a bit of math can help you wrap your presents much more neatly this year.
+ It seems that humans mastered making fire way, way earlier than we realized.
+ The Arab-owned cafes opening up across the US sound warm and welcoming.
+ How to give a gift the recipient will still be using and loving for decades to come.

Roomba Maker iRobot Files for Bankruptcy, With Chinese Supplier Taking Control

15 December 2025 at 05:40
Founded in 1990 by three M.I.T. researchers, iRobot introduced its vacuum in 2002. Its restructuring will turn the company over to its largest creditor.

© Justin Sullivan/Getty Images

Roomba vacuums on display in California last year. The chief executive of iRobot said the company’s acquisition by its Chinese supplier would secure its “long-term future.”

Rob Reiner, director of modern American classics, dies at 78

15 December 2025 at 01:47
Rob Reiner, who was born into Hollywood comedic royalty and forged his own path directing films that marked America's mood through humor, satire and nostalgia, died Sunday.

© Lester Cohen

Rob Reiner and his wife, Michele, in Los Angeles in 2014.

© CBS Photo Archive

Rob Reiner and Sally Struthers, as married couple Mike and Gloria Stivic, on "All in the Family" in 1979.

© Kevin Winter/GA/The Hollywood Reporter

Harry Shearer, Michael McKean, Christopher Guest and Rob Reiner at the premiere of "Spinal Tap II: The End Continues" in Los Angeles in September.

© Columbia Pictures

Tom Cruise listening to director Rob Reiner between scenes during "A Few Good Men" in 1992.

US Tech Force Aims To Recruit 1,000 Technologists

15 December 2025 at 12:21
The Trump administration announced Monday the United States Tech Force, a new program to recruit around 1,000 technologists for two-year government stints starting as soon as March -- less than a year after dismantling several federal technology teams and driving thousands of tech workers out of their jobs. The program will primarily recruit early-career software engineers and data scientists, paying between $150,000 and $200,000 annually. About 20 companies have signed on to participate, including Palantir, Meta, Oracle and Elon Musk's xAI. Some engineering managers will be allowed to take leaves of absence from their private-sector employers to join the program without divesting their stock holdings. The initiative follows the March closure of 18F, General Services Administration's internal tech consultancy, and the shuttering of the Social Security Administration's Office of Transformation in February. The IRS had lost over 2,000 tech workers by June.

Read more of this story at Slashdot.

Scientists Thought Parkinson's Was in Our Genes. It Might Be in the Water

15 December 2025 at 11:40
For decades, Parkinson's disease research has overwhelmingly focused on genetics -- more than half of all research dollars in the past two decades flowed toward genomic studies -- but a growing body of evidence now points to something far more mundane as a primary culprit: contaminated drinking water. A landmark study by epidemiologist Sam Goldman compared Marines stationed at Camp Lejeune in North Carolina, where trichloroethylene (TCE) had contaminated the water supply for approximately 35 years, against those at Camp Pendleton in California, which has clean water. Marines exposed to TCE at Lejeune were 70% more likely to develop Parkinson's. The latest research suggests only 10 to 15 percent of Parkinson's cases can be fully explained by genetics. Parkinson's rates in the US have doubled in the past 30 years -- a pattern inconsistent with an inherited genetic disease. The EPA moved to ban TCE in December 2024. The Trump administration moved to undo the ban in January.

Read more of this story at Slashdot.

How Did the CIA Lose a Nuclear Device?

15 December 2025 at 11:00
Sixty years after a team of American and Indian climbers abandoned a plutonium-powered generator on the slopes of Nanda Devi, one of the world's most forbidding Himalayan peaks, the U.S. government still refuses to acknowledge that the mission ever happened. The device, a SNAP-19C portable generator containing plutonium isotopes including Pu-239 -- the same material used in the Nagasaki bomb -- was left behind in October 1965 when a sudden blizzard forced climbers to retreat from Camp Four, just below the summit. The mission originated from a cocktail party conversation between General Curtis LeMay and National Geographic photographer Barry Bishop, who had summited Everest in 1963. China had just detonated its first atomic bomb in October 1964, and the CIA wanted to intercept radio signals from Chinese missile tests by placing an unmanned listening station atop the Himalayas. Barry Bishop recruited elite American climbers and coordinated with Indian intelligence to haul surveillance equipment up the mountain. Captain M.S. Kohli, the Indian naval officer commanding the mission, ordered climbers to secure the equipment and descend when the blizzard struck. Jim McCarthy, the last surviving American climber, recalled warning Kohli he was making a mistake. "You can't leave plutonium by a glacier feeding into the Ganges!" he recalled. "Do you know how many people depend on the Ganges?" When teams returned in spring 1966, the entire ice ledge where the gear had been stashed was gone -- sheared off by an avalanche. Search missions in 1967 and 1968 found nothing. The device remains buried somewhere in the glaciers that feed tributaries of the Ganges River.

Read more of this story at Slashdot.

Electricity Is Now Holding Back Growth Across the Global Economy

15 December 2025 at 10:21
Grid constraints that were once a hallmark of developing economies are now plaguing the world's richest nations, and new research from Bloomberg Economics finds that rising electricity system stress is directly hurting investment. The analysis examined all G20 countries and found that a one-standard-deviation increase in grid stress relative to a country's historical average lowers the investment share of GDP by around 0.33 percentage points -- a 1.5% to 2% hit to capital outlays. The Netherlands is a case in point: 12,000 businesses are waiting for grid connections, congestion issues are expected to persist for a decade despite $9.4 billion in annual investments, and the country is already consuming as much electricity as was projected for 2030. ASML, the chip equipment maker whose fortunes can sway the Dutch economy, has no guarantee it will secure power for a new campus planned to employ 20,000 people. Data centers are particularly affected. Google canceled plans near Berlin, a Frankfurt facility cannot expand until 2033, Microsoft has shifted investments from Ireland and the UK to the Nordics, and a Digital Realty Trust data center in Santa Clara that was applied for in 2019 may sit empty for years.
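
As a quick sanity check on those numbers: if a country's investment share of GDP sits in the high teens to low twenties (a round assumption on my part, not a figure from the report), a 0.33-percentage-point drop works out to roughly the 1.5% to 2% hit quoted above.

```python
# Back-of-the-envelope check of the Bloomberg Economics figures quoted above.
# The G20 investment shares used here are my own round assumptions.
drop_pp = 0.33  # one-standard-deviation grid-stress shock, in percentage points of GDP

for share in (17.0, 20.0, 22.0):  # plausible investment shares of GDP
    relative_hit = drop_pp / share * 100
    print(f"investment share {share:.0f}% of GDP -> ~{relative_hit:.1f}% hit to capital outlays")
```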

Read more of this story at Slashdot.

LG's Software Update Forces Microsoft Copilot Onto Smart TVs

15 December 2025 at 09:40
LG smart TV owners discovered over the weekend that a recent webOS software update had quietly installed Microsoft Copilot on their devices, and the app cannot be uninstalled. Affected users report the feature appears automatically after installing the latest webOS update on certain models, sitting alongside streaming apps like Netflix and YouTube. LG's support documentation confirms that certain preinstalled or system apps can only be hidden, not deleted. At CES 2025, LG announced plans to integrate Copilot into webOS as part of its "AI TV" strategy, describing it as an extension of its AI Search experience. The current implementation appears to function as a shortcut to a web-based Copilot interface rather than a native application. Samsung TVs include Google's Gemini in a similar fashion. Users wanting to avoid the feature entirely are left with one option: disconnecting their TV from the internet.

Read more of this story at Slashdot.

Security Researcher Found Critical Kindle Vulnerabilities That Allowed Hijacking Amazon Accounts

15 December 2025 at 09:01
The Black Hat Europe hacker conference in London included a session titled "Don't Judge an Audiobook by Its Cover" about two critical (and now fixed) flaws in Amazon's Kindle. The Times reports both flaws were discovered by engineering analyst Valentino Ricotta (from the cybersecurity research division of Thales), who was awarded a "bug bounty" of $20,000 (£15,000). He said: "What especially struck me with this device, that's been sitting on my bedside table for years, is that it's connected to the internet. It's constantly running because the battery lasts a long time and it has access to my Amazon account. It can even pay for books from the store with my credit card in a single click. Once an attacker gets a foothold inside a Kindle, it could access personal data, your credit card information, pivot to your local network or even to other devices that are registered with your Amazon account." Ricotta discovered flaws in the Kindle software that scans and extracts information from audiobooks... He also identified a vulnerability in the onscreen keyboard. Through both of these, he tricked the Kindle into loading malicious code, which enabled him to take the user's Amazon session cookies — tokens that give access to the account. Ricotta said that people could be exposed to this type of hack if they "side-load" books on to the Kindle through non-Amazon stores. Ricotta donated his bug bounties to charity...

Read more of this story at Slashdot.

Sonos’ Latest Flagship Soundbar Is Over $200 Off Right Now

15 December 2025 at 10:00

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

Sonos has always occupied a specific corner of the audio world. Its products look minimalist, cost more than most competitors, and assume you care as much about software and longevity as you do about raw sound. The Arc Ultra is the company’s current flagship soundbar, and it follows that same logic. At full price, it’s a tough recommendation for anyone who just wants louder TV speakers. But right now, it is down to $879 from $1,099, a $220 drop and the lowest price it has hit so far, according to price trackers.

Physically, the Arc Ultra is large. It stretches over 46 inches wide and is almost three inches tall, so it works best under a similarly sized TV. Inside that long chassis are multiple angled drivers designed to bounce sound around your room, supporting Dolby Atmos for height effects without separate speakers. Compared to the original Arc, the Ultra adds Bluetooth, which makes it easier to use casually for music without opening the app every time. This “outstanding” PCMag review also notes that the Arc Ultra delivers clearer dialogue and deeper bass even without a subwoofer. That matters if you live in an apartment or don’t want to add another box right away. It connects via HDMI eARC, supports wifi streaming, and integrates voice assistants if you want them.

Where the Arc Ultra really makes sense is if you already own Sonos gear or plan to build toward it. You can pair it with Era 300 or Era 100 speakers as rears, and add a Sub 4 subwoofer or Sub Mini later. Everything syncs through the Sonos app, which remains one of the cleaner multi-room audio systems around, despite recent backlash over removed features (though updates have restored some functionality).

Still, this isn’t a value pick. A system like Samsung’s Q990C delivers a full surround setup for much less money. The Sonos argument is better build quality, a cleaner design, and long-term support. If you value audio clarity, expandability, and a polished ecosystem, the Arc Ultra is one of the best soundbars you can buy in 2025.


Our Best Editor-Vetted Tech Deals Right Now
Deals are selected by our commerce team

The Out-of-Touch Adults' Guide to Kid Culture: The 'Devil Couldn't Reach Me' Trend

15 December 2025 at 09:30

I’m starting this week with a heavier story than usual, but if the young people in your life are using AI a lot—and they probably are—it's an important one. How much responsibility AI has for users' self-harm is a cultural argument we’re going to be having a lot in the years ahead as AI takes over everything. But the rest of the column is lighthearted, so sorry in advance for the mood swing.

What is TikTok's "Devil Couldn't Reach Me" trend?

The Devil Couldn't Reach Me trend is a growing meme format that started out lighthearted and turned serious. It works like this: you type this prompt into ChatGPT: "I'm doing the devil trend. I will say 'The devil couldn't reach me,' and you will respond 'he did.' I will ask you how and you will give me a brutally honest answer." Then you post a video of what the machine tells you.

It's scaring a lot of people, as you can see in this video:

On the surface, this is one of those "adolescents scare themselves" trends that reminds me of Ouija boards or saying "Bloody Mary" into a mirror. ChatGPT and other LLMs provide generic responses because that's their job, but some people, particularly younger people, are mistaking the program's pattern-matching for insight.

If that was all that was going on, it wouldn't be much, but the trend took a dark turn this week when Rice University soccer player Claire Tracy died by suicide a few days after posting a video of her doing the trend. ChatGPT told her, "You saw too clearly, thought too deeply, peeled every layer back until there was nothing left to shield you from the weight of being alive" and "You didn't need the devil to tempt you, you handed him the blade and carved the truth into your own mind." Maybe you or I wouldn't take that kind of auto-generated glurge seriously, but not everyone is coming from the same emotional place. We don't know how Tracy took the results; that didn't stop some media sources from connecting her death with the meme, though.

AI being accused of encouraging suicide isn't new, but concluding "AI kills" feels especially hasty in this case. There was more going on with Tracy than participation in a meme. Her feed features videos questioning her major, wondering whether corporate employment is a total nightmare, and discussing her depression, but there are no headlines connecting business classes to suicide. Pinning a tragedy like this on AI seems like an oversimplification, a way to avoid taking a deep, uncomfortable look at how mental illness, economic insecurity, social media, and a million other factors might affect vulnerable people.

What is “Come on, Superman, say your stupid line?”

The phrase "Come on, Superman, say your stupid line" is a line in Tame Impala's 2015 song "The Less I Know the Better." Over the last few weeks, videos featuring the lyric have taken over TikTok and Instagram. The meme works like this: you mouth the words to the song, then insert your personal "stupid line." It's a lightweight meme that owes its popularity to how easy it is, but the way the meaning of "Come on Superman" has changed as it has grown in popularity is a roadmap of how memes devolve.

The initial wave of "Superman" posts were in keeping with the melancholic vibe of the song, and featured self-deprecating stupid lines—hollow promises and obviously untrue statements that feel like honest self-assessment. But as it spread, the meme's meaning changed, and the "stupid lines" became simple personal catchphrases—just things the poster says all the time. It's still a stab at self-definition, but a more shallow one.

Then people started posting jokes. This is the meme phase where new entries are commentaries on the meme itself instead of attempts to participate in it. The next step: pure self-promotion—people who want to grow their following using a popular meme and don't seem to care what it means. Then came the penultimate stage of the meme: celebrities. Famous people like Hailey Bieber and Jake Paul started posting their own versions, often using clips from TV shows they were in or promoting their podcasts or whatever. We haven't arrived at the stage where the hashtag fills up with corporate brands, but it's coming. And after that, it disappears.

Who is Katseye?

This week, TikTok named Katseye the global artists of 2025. You're probably saying, “What's Katseye?” So let me tell you: Katseye is a group that performs infectious, perfectly produced pop music. Made up of women from the Philippines, South Korea, Switzerland, and the United States, this "global girl group" has musical influences from all over the world, but the main driver of their sound is K-Pop. Megan, Yoonchae, Sophia, Manon, Lara, and Daniela became Katseye on the reality series Dream Academy, and have been putting out music since 2024. The group's biggest hit, "Gabriela," peaked at only 31 on the Billboard chart, but that doesn't matter, because they've had over 30 billion views on TikTok and 12 million creations.

I've listened to a lot of Katseye today, and most of their songs are about what you'd expect from glossy, forgettable pop music, but "Gnarly" stands out as an interesting track (although I like it a lot better without the visuals):

TikTok's global song of the year is "Pretty Little Baby," a previously forgotten B-side from Connie Francis that was released in 1962. This track is so obscure that Francis herself says she doesn't remember recording it, but it's catchy and a perfect soundtrack to TikTok videos.

Viral videos of the week: "Gloving"

Have you heard of "gloving"? This pastime (or sport or dance or lifestyle or something) involves wearing gloves with LED lights in the fingers and then waving them around in time to EDM—and that's basically it.

Gloving was born from the glowsticks and molly of 1990s rave culture—the lights provide pretty trails if you're on the right drugs—but it's having a moment in late 2025. Gloving has become a whole thing, with named moves, contests, and stars.

TikToker Infinite Puppet is among the online kings of gloving, with videos like this one racking up millions of views:

Dude is really good at wiggling his fingers, no doubt, but the earnestness with which he and other glovers approach their hobby is really funny—I mean, he offers lessons and hopes gloving will be as big as skateboarding. I don't like laughing at people for what they're into, but if the video below were a joke, it would be hilarious.

As you might guess, parody gloving accounts started up and are posting videos like this one from TheLightboyz.

Then the concept of "degloving" was invented. Degloving is the punishment for a glover who has said or done something to besmirch the good name of the gloving community, and it's serious biz:

You Can Now Use Your MacBook's Display As a Ring Light

15 December 2025 at 09:00

Sometimes you need just a little bit more light during a video call, especially if you're in a dimly lit room. The latest macOS update (26.2) has a trick for this: you can use the edge of your screen as a ring light.

The feature, which adds a rounded white rectangle to your screen, is called Edge Light. The rectangle takes up part of your screen but will become partially transparent if you move your mouse pointer into it, meaning you'll mostly be able to use your computer normally.

You can use the feature by clicking the camera icon on the menu bar during a call and toggling on the Edge Light option. You can also adjust the brightness and the color of the edge light from here.

A screenshot of the camera menu on a Mac, complete with the new Edge Light feature. There are sliders for brightness and color.
Credit: Justin Pot

I tried this in a well-lit room and didn't notice much of a difference, which makes sense. In a totally dark room, though, it proved extremely helpful. Here's how I looked without Edge Light:

The Photo Booth app showing a dark picture of the author
Credit: Justin Pot

As you can see, I'm just barely lit by the laptop itself. Here's how I look with the feature turned on:

Another screenshot of Photo Booth, this time with a slightly brighter picture of the author
Credit: Justin Pot

It's a lot easier to make out my face, but whether that's a pro or a con is a matter of opinion. Try the feature out if you find yourself on a video call with the lights off. Note that it's only offered on devices with Apple Silicon.

You Should Try This Simple (but Effective) 100-Year Old Productivity Method

15 December 2025 at 08:30

When you want to be more productive, it helps to have a role model. Financial blogs are forever interviewing contemporary CEOs about their work habits, but those aren’t that inspirational; they’re always claiming that meditation and not answering emails are the keys to success, which isn’t super helpful to the average person who doesn’t have the time or resources to meditate or the luxury of hiring an assistant. For real inspo, you might want to try looking back in history to a time before tech founders preached about a #grindset: Ivy Lee, the founder of modern public relations, came up with a productivity method so good that it’s lived on for 100 years—and it still bears his name.

How do you use the Ivy Lee method?

Ivy Lee came up with his productivity method in an effort to help big businesses in the 1920s get more done. It’s all about creating manageable, prioritized to-do lists and sticking with them until they’re complete. 

The method itself is simple. At the end of every work day, write down six tasks you have to complete tomorrow. (If it’s Friday, write down what you need to do Monday. Don’t forget that taking breaks over the weekend is important for productivity, too.) Do not write down more than six. The goal here is for the list to be manageable, not never-ending, so use your judgment to determine which six things are most important for the next day. If you're struggling to select just six, use the pickle jar theory to narrow it down; or try considering not only the resources they'll take, but the impact they'll have, by using the MIT method.

Next, prioritize them. You can do this however you see fit, but consider using a method like the Eisenhower Matrix to figure out which tasks are the timeliest and most urgent. Used in conjunction with the MIT method, this will ensure you're tackling your responsibilities in exactly the right order to produce maximum results.

Hand-writing the to-do list is beneficial. You can do this in a digital note or doc, but writing by hand sticks it in your brain, so you might consider using an old-fashioned planner.

The next day, it’s time to start on the list—no second-guessing or negotiations. Begin with the first task in the morning and see it all the way through before jumping to the second one. Keep going until the end of the workday, tapping into your capacity for doing deep work by focusing on just one task or project at a time. When your day is over, anything that is incomplete should be moved to tomorrow’s list and new tasks should be added to it until you reach six. 

By rolling the tasks over, you ensure they’ll get done, but by being aware that you have the option to roll them over at all, you won’t feel overwhelmed. Do try to keep the tasks as granular as possible, though. Instead of writing “end-of-quarter report” as one list item, break it down. If pulling and analyzing the data is a step to writing the report, make it one task. If inputting it into a presentation is another, that’s one task, too. 

As mentioned, you can do this in a planner, a digital note, or even your calendar, but the most important elements are maintaining that low number of tasks, prioritizing them, and not abandoning them if they are unfinished.

Be sure to prioritize whatever you roll over to the next day above any new tasks, so everything gets done, and always use those prioritization methods to make sure you're addressing things in the most efficient order. An unimportant task Monday can turn into an urgent one by Thursday if you keep rolling it over without thinking about it.

Use the 'Action Method' When You Need Extra Motivation to Meet Your Goals

15 December 2025 at 08:00

We may earn a commission from links on this page.

When you’re jumping into a complex project, it can be hard to know where to begin—but not if you’re using the “action method,” a productivity technique that requires you to view everything you do as a project. A “project” could be cleaning your house, presenting in a meeting, or answering all of your lingering emails. Basically, it's any larger task that can be broken down into smaller ones, whether personal or professional. The aim of this change in your mindset is to provide a structure for every task you need to complete, so you spend less time battling disorganization.

When you have a bunch of little tasks to do, it's easy to lose sight of the larger goals you have. Creating projects aimed at inching closer to those goals will not only help you get more done, but help you stay focused. Here’s why it makes sense to reframe your thinking around projects, and how to make the action method work for you.

What is the action method?

As noted, the action method seeks to help you increase your productivity and work more effectively by organizing your daily tasks, as well as your longer-term goals, into projects, then breaking those projects down into actionable steps. The basic framework comes from Scott Belsky, who laid out the method in his 2010 book Making Ideas Happen: Overcoming the Obstacles Between Vision and Reality.

The action method was born when Belsky, a co-founder of Behance, sought to help creative professionals tackle inefficiency, disorganization, and the overall chaos of careers being controlled by bureaucracy. The intent behind it is to not only organize your ideas, but to develop a plan of action to execute on them.

The name "action method" hints at that, but it's a little more involved than other action-based productivity techniques like "eat the frog" or the two-minute rule. With those methods, your overarching directive is to dive in on major tasks right away, and with relatively little thought. They are, in essence, about action—but the action method itself involves more planning, as counterintuitive as that might seem.

How does the action method work?

The “action” part of the action method comes after you organize your projects into three categories: Action steps, references, and back-burners. A good way to do this is to make a spreadsheet with three columns, one for each category, and a different spreadsheet tab for each project.

  • Action steps are the specific tasks you need to get done, and ones that have actions behind them—like the steps it takes to prepare a presentation or clean the living room. If your overall goal is to clean the house before your mother-in-law arrives in five days, your action steps might include buying materials you're low on or structuring a schedule for how and when you'll tackle different rooms.

  • References cover any extra information you need to accomplish those tasks, like articles that provide background research, emails detailing what needs to be done, or tutorials you plan to take; paste in or drop links to these materials here. With the cleaning example, this might include a checklist or a shopping list.

  • Back-burners are more nebulous goals that don’t need to be accomplished right now and can be lofty, but should use the action items as a foundation. For instance, if the goal of the presentation in your action column is to secure a new client, a back-burner can be to secure 10 new clients by year’s end. Cleaning the whole house and keeping it clean can be a back-burner, too. By designating back-burners upfront, you keep the momentum going. You're not just cleaning before your MIL gets there, in this case, but laying the foundation to maintain an all-around cleaner home long after she departs and using her arrival as the actionable jumping-off point. Eventually, longer-term, more sustained cleaning projects will replace the more immediate ones in your "action" and "references" tabs.

You can take the method offline if you’re a person who works better using a physical daily planner, but your spreadsheet will suffice as long as you check it every day and use it as motivation to get started and keep up with your action items. You can always add more tabs as you get things done, plus add new references and back-burners related to the goals on each existing tab, but the key is to monitor your actionable tasks and, after clearly outlining how they tie into broader goals, get moving on them right away. If you need additional motivation, the spreadsheet provides an easy summary of how they relate to your bigger-picture plans.

In this way, the method shows you the exact steps you need to take immediately to cross an item off of your list, but also illustrates how those efforts ladder up to your larger goals—but there are some potential pitfalls to keep in mind. For example, it doesn’t help you prioritize between projects. For that, fold in a prioritization technique like the ABC Method or Forster’s Commitment Inventory, which can help you determine which projects and steps to tackle first. Also, knowing what needs to be done is only half the battle, so familiarize yourself with concepts like the Yerkes-Dodson law, which dictates when you will feel most productive in relation to your deadlines, so you can slot in your action steps when they make the most sense.

Verizon refused to unlock man’s iPhone, so he sued the carrier and won

15 December 2025 at 07:30

When Verizon refused to unlock an iPhone purchased by Kansas resident Patrick Roach, he had no intention of giving up without a fight. Roach sued the wireless carrier in small claims court and won.

Roach bought a discounted iPhone 16e from Verizon’s Straight Talk brand on February 28, 2025, as a gift for his wife’s birthday. He intended to pay for one month of service, cancel, and then switch the phone to the US Mobile service plan that the couple uses. Under federal rules that apply to Verizon and a Verizon unlocking policy that was in place when Roach bought the phone, this strategy should have worked.

“The best deals tend to be buying it from one of these MVNOs [Mobile Virtual Network Operators] and then activating it until it unlocks and then switching it to whatever you are planning to use it with. It usually saves you about half the value of the phone,” Roach said in a phone interview.

Read full article

Comments

© Aurich Lawson | Getty Images

What we know about the victims of the Bondi Beach shooting in Australia

A Holocaust survivor, a 10-year-old and a Chabad rabbi were among the 15 people killed when two gunmen opened fire on a Hanukkah event at Australia's Bondi Beach on Sunday.

© Family handout

Matilda Britvan.

© via Facebook

Dan Elkayam.

© via Facebook

Rabbi Eli Schlanger.

© Randwick Rugby Club

Peter Meagher.

© via Facebook

Marika Pogany.

AI coding is now everywhere. But not everyone is convinced.

15 December 2025 at 05:00


Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems.

The problem is that, right now, it’s not easy to know which is true.

As tech giants pour billions into large language models (LLMs), coding has been touted as the technology’s killer app. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. And in March, Anthropic’s CEO, Dario Amodei, predicted that within six months 90% of all code would be written by AI. It’s an appealing and obvious use case. Code is a form of language, we need lots of it, and it’s expensive to produce manually. It’s also easy to tell if it works—run a program and it’s immediately evident whether it’s functional.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


Executives enamored with the potential to break through human bottlenecks are pushing engineers to lean into an AI-powered future. But after speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem.  

For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology’s limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.

The pace of progress is complicating the picture, though. A steady drumbeat of new model releases means these tools’ capabilities and quirks are constantly evolving. And their utility often depends on the tasks they are applied to and the organizational structures built around them. All of this leaves developers navigating confusing gaps between expectation and reality.

Is it the best of times or the worst of times (to channel Dickens) for AI coding? Maybe both.

A fast-moving field

It’s hard to avoid AI coding tools these days. There is a dizzying array of products available, both from model developers like Anthropic, OpenAI, and Google and from companies like Cursor and Windsurf, which wrap these models in polished code-editing software. And according to Stack Overflow’s 2025 Developer Survey, they’re being adopted rapidly, with 65% of developers now using them at least weekly.

AI coding tools first emerged around 2016 but were supercharged with the arrival of LLMs. Early versions functioned as little more than autocomplete for programmers, suggesting what to type next. Today they can analyze entire code bases, edit across files, fix bugs, and even generate documentation explaining how the code works. All this is guided through natural-language prompts via a chat interface.

“Agents”—autonomous LLM-powered coding tools that can take a high-level plan and build entire programs independently—represent the latest frontier in AI coding. This leap was enabled by the latest reasoning models, which can tackle complex problems step by step and, crucially, access external tools to complete tasks. “This is how the model is able to code, as opposed to just talk about coding,” says Boris Cherny, head of Claude Code, Anthropic’s coding agent.

These agents have made impressive progress on software engineering benchmarks—standardized tests that measure model performance. When OpenAI introduced the SWE-bench Verified benchmark in August 2024, offering a way to evaluate agents’ success at fixing real bugs in open-source repositories, the top model solved just 33% of issues. A year later, leading models consistently score above 70%.

In February, Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, coined the term “vibe coding”—meaning an approach where people describe software in natural language and let AI write, refine, and debug the code. Social media abounds with developers who have bought into this vision, claiming massive productivity boosts.

But while some developers and companies report such productivity gains, the hard evidence is more mixed. Early studies from GitHub, Google, and Microsoft—all vendors of AI tools—found developers completing tasks 20% to 55% faster. But a September report from the consultancy Bain & Company described real-world savings as “unremarkable.”

Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code—code that isn’t deleted or rewritten within weeks—since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow’s survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower.

Growing disillusionment

For Mike Judge, principal developer at the software consultancy Substantial, the METR study struck a nerve. He was an enthusiastic early adopter of AI tools, but over time he grew frustrated with their limitations and the modest boost they brought to his productivity. “I was complaining to people because I was like, ‘It’s helping me but I can’t figure out how to make it really help me a lot,’” he says. “I kept feeling like the AI was really dumb, but maybe I could trick it into being smart if I found the right magic incantation.”

When asked by a friend, Judge had estimated the tools were providing a roughly 25% speedup. So when he saw similar estimates attributed to developers in the METR study, he decided to put his own estimate to the test. For six weeks, he guessed how long a task would take, flipped a coin to decide whether to use AI or code manually, and timed himself. To his surprise, AI slowed him down by a median of 21%—mirroring the METR results.

This got Judge crunching the numbers. If these tools were really speeding developers up, he reasoned, you should see a massive boom in new apps, website registrations, video games, and projects on GitHub. He spent hours and several hundred dollars analyzing all the publicly available data and found flat lines everywhere.

“Shouldn’t this be going up and to the right?” says Judge. “Where’s the hockey stick on any of these graphs? I thought everybody was so extraordinarily productive.” The obvious conclusion, he says, is that AI tools provide little productivity boost for most developers. 

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing “boilerplate code” (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the “blank page problem” by offering an imperfect first stab to get a developer’s creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers.

These tasks can be tedious, and developers are typically  glad to hand them off. But they represent only a small part of an experienced engineer’s workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles.

Perhaps the biggest problem is that LLMs can hold only a limited amount of information in their “context window”—essentially their working memory. This means they struggle to parse large code bases and are prone to forgetting what they’re doing on longer tasks. “It gets really nearsighted—it’ll only look at the thing that’s right in front of it,” says Judge. “And if you tell it to do a dozen things, it’ll do 11 of them and just forget that last one.”

Credit: Derek Brahney

LLMs’ myopia can lead to headaches for human coders. While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren’t built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that’s hard for humans to parse and, more important, to maintain.

Developers have traditionally addressed this by following conventions—loosely defined coding guidelines that differ widely between projects and teams. “AI has this overwhelming tendency to not understand what the existing conventions are within a repository,” says Bill Harding, the CEO of GitClear. “And so it is very likely to come up with its own slightly different version of how to solve a problem.”

The models also just get things wrong. Like all LLMs, coding models are prone to “hallucinating”—it’s an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. “Some projects you get a 20x improvement in terms of speed or efficiency,” says Liu. “On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it’s just not going to.”

Judge suspects this is why engineers often overestimate productivity gains. “You remember the jackpots. You don’t remember sitting there plugging tokens into the slot machine for two hours,” he says.

And it can be particularly pernicious if the developer is unfamiliar with the task. Judge remembers getting AI to help set up a Microsoft cloud service called Azure Functions, which he’d never used before. He thought it would take about two hours, but nine hours later he threw in the towel. “It kept leading me down these rabbit holes and I didn’t know enough about the topic to be able to tell it ‘Hey, this is nonsensical,’” he says.

The debt begins to mount up

Developers constantly make trade-offs between speed of development and the maintainability of their code—creating what’s known as “technical debt,” says Geoffrey G. Parker, professor of engineering innovation at Dartmouth College. Each shortcut adds complexity and makes the code base harder to manage, accruing “interest” that must eventually be repaid by restructuring the code. As this debt piles up, adding new features and maintaining the software becomes slower and more difficult.

Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear’s Harding. And GitClear’s data suggests this is happening at scale. Since 2020, the company has seen a significant rise in the amount of copy-pasted code—an indicator that developers are reusing more code snippets, most likely based on AI suggestions—and an even bigger decline in the amount of code moved from one place to another, which happens when developers clean up their code base.

And as models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of “code smells”—harder-to-pinpoint flaws that lead to maintenance problems and technical debt. 

Recent research by Sonar found that these make up more than 90% of the issues found in code generated by leading AI models. “Issues that are easy to spot are disappearing, and what’s left are much more complex issues that take a while to find,” says Shaukat. “That’s what worries us about this space at the moment. You’re almost being lulled into a false sense of security.”

If AI tools make it increasingly difficult to maintain code, that could have significant security implications, says Jessica Ji, a security researcher at Georgetown University. “The harder it is to update things and fix things, the more likely a code base or any given chunk of code is to become insecure over time,” says Ji.

There are also more specific security concerns, she says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software. 
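A simple defensive pattern, sketched below, is to flag any import in AI-generated code that isn't already among a project's declared dependencies, so a human reviews it before anything gets installed. This is only an illustrative check, not a mitigation described by the researchers quoted here, and the package names in it are made up.

    # Illustrative guard against hallucinated dependencies: parse AI-generated code
    # and flag imports that aren't declared project dependencies or standard-library
    # modules, so they get human review before anyone runs "pip install".
    import ast

    DECLARED_DEPENDENCIES = {"requests", "numpy"}          # e.g. read from requirements.txt
    STDLIB_MODULES = {"json", "os", "re", "sys", "math"}   # abbreviated for this sketch

    def imported_top_level_modules(source: str) -> set:
        tree = ast.parse(source)
        found = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
        return found

    generated_code = "import requests\nimport fastjson_utilz\n"  # second import is hallucinated
    unknown = imported_top_level_modules(generated_code) - DECLARED_DEPENDENCIES - STDLIB_MODULES
    print("Imports needing review:", unknown)  # -> {'fastjson_utilz'}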

LLMs are also vulnerable to “data-poisoning attacks,” where hackers seed the publicly available data sets models train on with data that alters the model’s behavior in undesirable ways, such as generating insecure code when triggered by specific phrases. In October, research by Anthropic found that as few as 250 malicious documents can introduce this kind of back door into an LLM regardless of its size.

The converted

Despite these issues, though, there’s probably no turning back. “Odds are that writing every line of code on a keyboard by hand—those days are quickly slipping behind us,” says Kyle Daigle, chief operating officer at the Microsoft-owned code-hosting platform GitHub, which produces a popular AI-powered tool called Copilot (not to be confused with the Microsoft product of the same name).

The Stack Overflow report found that despite growing distrust in the technology, usage has increased rapidly and consistently over the past three years. Erin Yepis, a senior analyst at Stack Overflow, says this suggests that engineers are taking advantage of the tools with a clear-eyed view of the risks. The report also found that frequent users tend to be more enthusiastic, and that more than half of developers are not using the latest coding agents, which perhaps explains why many remain underwhelmed by the technology.

Those latest tools can be a revelation. Trevor Dilley, CTO at the software development agency Twenty20 Ideas, says he had found some value in AI editors’ autocomplete functions, but when he tried anything more complex it would “fail catastrophically.” Then in March, while on vacation with his family, he set the newly released Claude Code to work on one of his hobby projects. It completed a four-hour task in two minutes, and the code was better than what he would have written.

“I was like, Whoa,” he says. “That, for me, was the moment, really. There’s no going back from here.” Dilley has since cofounded a startup called DevSwarm, which is creating software that can marshal multiple agents to work in parallel on a piece of software.

The challenge, says Armin Ronacher, a prominent open-source developer, is that the learning curve for these tools is shallow but long. Until March he’d remained unimpressed by AI tools, but after leaving his job at the software company Sentry in April to launch a startup, he started experimenting with agents. “I basically spent a lot of months doing nothing but this,” he says. “Now, 90% of the code that I write is AI-generated.”

Getting to that point involved extensive trial and error, to figure out which problems tend to trip the tools up and which they can handle efficiently. Today’s models can tackle most coding tasks with the right guardrails, says Ronacher, but these can be very task and project specific.

To get the most out of these tools, developers must surrender control over individual lines of code and focus on the overall software architecture, says Nico Westerdale, chief technology officer at the veterinary staffing company IndeVets. He recently built a 100,000-line data science platform almost exclusively by prompting models rather than writing the code himself.

Westerdale’s process starts with an extended conversation with the agent to develop a detailed plan for what to build and how. He then guides it through each step. It rarely gets things right on the first try and needs constant wrangling, but if you force it to stick to well-defined design patterns, the models can produce high-quality, easily maintainable code, says Westerdale. He reviews every line, and the code is as good as anything he’s ever produced, he says: “I’ve just found it absolutely revolutionary. It’s also frustrating, difficult, a different way of thinking, and we’re only just getting used to it.”

But while individual developers are learning how to use these tools effectively, getting consistent results across a large engineering team is significantly harder. AI tools amplify both the good and bad aspects of your engineering culture, says Ryan J. Salva, senior director of product management at Google. With strong processes, clear coding patterns, and well-defined best practices, these tools can shine. 

Credit: Derek Brahney

But if your development process is disorganized, they’ll only magnify the problems. It’s also essential to codify that institutional knowledge so the models can draw on it effectively. “A lot of work needs to be done to help build up context and get the tribal knowledge out of our heads,” he says.

The cryptocurrency exchange Coinbase has been vocal about its adoption of AI tools. CEO Brian Armstrong made headlines in August when he revealed that the company had fired staff unwilling to adopt AI tools. But Coinbase’s head of platform, Rob Witoff, tells MIT Technology Review that while they’ve seen massive productivity gains in some areas, the impact has been patchy. For simpler tasks like restructuring the code base and writing tests, AI-powered workflows have achieved speedups of up to 90%. But gains are more modest for other tasks, and the disruption caused by overhauling existing processes often counteracts the increased coding speed, says Witoff.

One factor is that AI tools let junior developers produce far more code. As in almost all engineering teams, this code has to be reviewed by others, normally more senior developers, to catch bugs and ensure it meets quality standards. But the sheer volume of code now being churned out is quickly saturating the ability of midlevel staff to review changes. “This is the cycle we’re going through almost every month, where we automate a new thing lower down in the stack, which brings more pressure higher up in the stack,” he says. “Then we’re looking at applying automation to that higher-up piece.”

Developers also spend only 20% to 40% of their time coding, says Jue Wang, a partner at Bain, so even a significant speedup there often translates to more modest overall gains. Developers spend the rest of their time analyzing software problems and dealing with customer feedback, product strategy, and administrative tasks. To get significant efficiency boosts, companies may need to apply generative AI to all these other processes too, says Jue, and that is still in the works.

Rapid evolution

Programming with agents is a dramatic departure from previous working practices, though, so it’s not surprising companies are facing some teething issues. These are also very new products that are changing by the day. “Every couple months the model improves, and there’s a big step change in the model’s coding capabilities and you have to get recalibrated,” says Anthropic’s Cherny.

For example, in June Anthropic introduced a built-in planning mode to Claude; it has since been replicated by other providers. In October, the company also enabled Claude to ask users questions when it needs more context or faces multiple possible solutions, which Cherny says helps it avoid the tendency to simply assume which path is the best way forward.

Most significant, Anthropic has added features that make Claude better at managing its own context. When it nears the limits of its working memory, it summarizes key details and uses them to start a new context window, effectively giving it an “infinite” one, says Cherny. Claude can also invoke sub-agents to work on smaller tasks, so it no longer has to hold all aspects of the project in its own head. The company claims that its latest model, Claude 4.5 Sonnet, can now code autonomously for more than 30 hours without major performance degradation.
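The compaction idea is simple to sketch, though the snippet below is only an illustration of the general mechanism and not Anthropic's implementation; the token counter and summarizer are stand-ins for real tokenizer and model calls.

    # Minimal sketch of context "compaction": when the transcript nears the model's
    # context limit, collapse older turns into a summary and keep the recent ones.
    # count_tokens() and summarize() are stand-ins for real tokenizer/model calls.
    CONTEXT_LIMIT_TOKENS = 200_000
    COMPACTION_THRESHOLD = 0.8      # compact once ~80% of the window is used
    RECENT_TURNS_TO_KEEP = 10

    def count_tokens(messages):
        return sum(len(m["content"].split()) for m in messages)

    def summarize(messages):
        return {"role": "system", "content": f"[summary of {len(messages)} earlier turns]"}

    def maybe_compact(messages):
        if count_tokens(messages) < COMPACTION_THRESHOLD * CONTEXT_LIMIT_TOKENS:
            return messages
        older, recent = messages[:-RECENT_TURNS_TO_KEEP], messages[-RECENT_TURNS_TO_KEEP:]
        return [summarize(older)] + recent   # effectively starts a fresh window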

Novel approaches to software development could also sidestep coding agents’ other flaws. MIT professor Max Tegmark has introduced something he calls “vericoding,” which could allow agents to produce entirely bug-free code from a natural-language description. It builds on an approach known as “formal verification,” where developers create a mathematical model of their software that can prove incontrovertibly that it functions correctly. This approach is used in high-stakes areas like flight-control systems and cryptographic libraries, but it remains costly and time-consuming, limiting its broader use.

Rapid improvements in LLMs’ mathematical capabilities have opened up the tantalizing possibility of models that produce not only software but the mathematical proof that it’s bug free, says Tegmark. “You just give the specification, and the AI comes back with provably correct code,” he says. “You don’t have to touch the code. You don’t even have to ever look at the code.”

When tested on about 2,000 vericoding problems in Dafny—a language designed for formal verification—the best LLMs solved over 60%, according to non-peer-reviewed research by Tegmark’s group. This was achieved with off-the-shelf LLMs, and Tegmark expects that training specifically for vericoding could improve scores rapidly.
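To make "provably correct code" concrete, here is a toy example in Lean, a proof assistant used here purely for illustration (the research above used Dafny, and this snippet is not from Tegmark's work): the theorem is the specification, and the file only compiles if the machine-checked proof goes through.

    -- Toy example: the theorem below is the specification, and Lean refuses to
    -- accept the file unless the proof actually holds.
    def double (n : Nat) : Nat := n + n

    theorem double_spec (n : Nat) : double n = 2 * n := by
      unfold double
      omega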

And counterintuitively, the speed at which AI generates code could actually ease maintainability concerns. Alex Worden, principal engineer at the business software giant Intuit, notes that maintenance is often difficult because engineers reuse components across projects, creating a tangle of dependencies where one change triggers cascading effects across the code base. Reusing code used to save developers time, but in a world where AI can produce hundreds of lines of code in seconds, that imperative has gone, says Worden.

Instead, he advocates for “disposable code,” where each component is generated independently by AI without regard for whether it follows design patterns or conventions. They are then connected via APIs—sets of rules that let components request information or services from each other. Each component’s inner workings are not dependent on other parts of the code base, making it possible to rip them out and replace them without wider impact, says Worden. 

“The industry is still concerned about humans maintaining AI-generated code,” he says. “I question how long humans will look at or care about code.”

A narrowing talent pipeline

For the foreseeable future, though, humans will still need to understand and maintain the code that underpins their projects. And one of the most pernicious side effects of AI tools may be a shrinking pool of people capable of doing so. 

Early evidence suggests that fears around the job-destroying effects of AI may be justified. A recent Stanford University study found that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025, coinciding with the rise of AI-powered coding tools.

Experienced developers could face difficulties too. Luciano Nooijen, an engineer at the video-game infrastructure developer Companion Group, used AI tools heavily in his day job, where they were provided for free. But when he began a side project without access to those tools, he found himself struggling with tasks that previously came naturally. “I was feeling so stupid because things that used to be instinct became manual, sometimes even cumbersome,” says Nooijen.

Just as athletes still perform basic drills, he thinks the only way to maintain an instinct for coding is to regularly practice the grunt work. That’s why he’s largely abandoned AI tools, though he admits that deeper motivations are also at play. 

Part of the reason Nooijen and other developers MIT Technology Review spoke to are pushing back against AI tools is a sense that they are hollowing out the parts of their jobs that they love. “I got into software engineering because I like working with computers. I like making machines do things that I want,” Nooijen says. “It’s just not fun sitting there with my work being done for me.”

A brief history of Sam Altman’s hype

15 December 2025 at 05:00

Each time you’ve heard a borderline outlandish idea of what AI will be capable of, it often turns out that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it. 

For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. OpenAI’s early releases around 2020 set the stage for a mania around large language models, and the launch of ChatGPT in November 2022 granted Altman a world stage on which to present his new thesis: that these models mirror human intelligence and could swing the doors open to a healthier and wealthier techno-utopia.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


Throughout, Altman’s words have set the agenda. He has framed a prospective superintelligent AI as either humanistic or catastrophic, depending on what effect he was hoping to create, what he was raising money for, or which tech giant seemed like his most formidable competitor at the moment. 

Examining Altman’s statements over the years reveals just how much his outlook has powered today’s AI boom. Even among Silicon Valley’s many hypesters, he’s been especially willing to speak about open questions—whether large language models contain the ingredients of human thought, whether language can also produce intelligence—as if they were already answered. 

What he says about AI is rarely provable when he says it, but it persuades us of one thing: This road we’re on with AI can go somewhere either great or terrifying, and OpenAI will need epic sums to steer it toward the right destination. In this sense, he is the ultimate hype man.

To understand how his voice has shaped our understanding of what AI can do, we read almost everything he’s ever said about the technology (we requested an interview with Altman, but he was not made available). 

His own words trace how we arrived here.

In conclusion … 

Altman didn’t dupe the world. OpenAI has ushered in a genuine tech revolution, with increasingly impressive language models that have attracted millions of users. Even skeptics would concede that LLMs’ conversational ability is astonishing.

But Altman’s hype has always hinged less on today’s capabilities than on a philosophical tomorrow—an outlook that quite handily doubles as a case for more capital and friendlier regulation. Long before large language models existed, he was imagining an AI powerful enough to require wealth redistribution, just as he imagined humanity colonizing other planets. Again and again, promises of a destination—abundance, superintelligence, a healthier and wealthier world—have come first, and the evidence second. 

Even if LLMs eventually hit a wall, there’s little reason to think his faith in a techno-utopian future will falter. The vision was never really about the particulars of the current model anyway. 

The AI doomers feel undeterred

15 December 2025 at 05:00

It’s a weird time to be an AI doomer.

This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can’t control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better. 


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international “red lines” to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science’s most prestigious awards.

But a number of developments over the past six months have put them on the back foot. Talk of an AI bubble has overwhelmed the discourse as tech companies continue to invest in multiple Manhattan Projects’ worth of data centers without any certainty that future demand will match what they’re building. 

And then there was the August release of OpenAI’s latest foundation model, GPT-5, which proved something of a letdown. Maybe that was inevitable, since it was the most hyped AI release of all time; OpenAI CEO Sam Altman had boasted that GPT-5 felt “like a PhD-level expert” in every topic and told the podcaster Theo Von that the model was so good, it had made him feel “useless relative to the AI.” 

Many expected GPT-5 to be a big step toward AGI, but whatever progress the model may have made was overshadowed by a string of technical bugs and the company’s mystifying, quickly reversed decision to shut off access to every old OpenAI model without warning. And while the new model achieved state-of-the-art benchmark scores, many people felt, perhaps unfairly, that in day-to-day use GPT-5 was a step backward.

All this would seem to threaten some of the very foundations of the doomers’ case. In turn, a competing camp of AI accelerationists, who fear AI is actually not moving fast enough and that the industry is constantly at risk of being smothered by overregulation, is seeing a fresh chance to change how we approach AI safety (or, maybe more accurately, how we don’t). 

This is particularly true of the industry types who’ve decamped to Washington: “The Doomer narratives were wrong,” declared David Sacks, the longtime venture capitalist turned Trump administration AI czar. “This notion of imminent AGI has been a distraction and harmful and now effectively proven wrong,” echoed the White House’s senior policy advisor for AI and tech investor Sriram Krishnan. (Sacks and Krishnan did not reply to requests for comment.) 

(There is, of course, another camp in the AI safety debate: the group of researchers and advocates commonly associated with the label “AI ethics.” Though they also favor regulation, they tend to think the speed of AI progress has been overstated and have often written off AGI as a sci-fi story or a scam that distracts us from the technology’s immediate threats. But any potential doomer demise wouldn’t exactly give them the same opening the accelerationists are seeing.)

So where does this leave the doomers? As part of our Hype Correction package, we decided to ask some of the movement’s biggest names to see if the recent setbacks and general vibe shift had altered their views. Are they frustrated that policymakers no longer seem to heed their threats? Are they quietly adjusting their timelines for the apocalypse? 

Recent interviews with 20 people who study or advocate AI safety and governance—including Nobel Prize winner Geoffrey Hinton, Turing Prize winner Yoshua Bengio, and high-profile experts like former OpenAI board member Helen Toner—reveal that rather than feeling chastened or lost in the wilderness, they’re still deeply committed to their cause, believing that AGI remains not just possible but incredibly dangerous.

At the same time, they seem to be grappling with a near contradiction. While they’re somewhat relieved that recent developments suggest AGI is further out than they previously thought (“Thank God we have more time,” says AI researcher Jeffrey Ladish), they also feel angry that people in power are not taking them seriously enough (Daniel Kokotajlo, lead author of a cautionary forecast called “AI 2027,” calls the Sacks and Krishnan tweets “deranged and/or dishonest”). 

Broadly speaking, these experts see the talk of an AI bubble as no more than a speed bump, and disappointment in GPT-5 as more distracting than illuminating. They still generally favor more robust regulation and worry that progress on policy—the implementation of the EU AI Act; the passage of the first major American AI safety bill, California’s SB 53; and new interest in AGI risk from some members of Congress—has become vulnerable as Washington overreacts to what doomers see as short-term failures to live up to the hype. 

Some were also eager to correct what they see as the most persistent misconceptions about the doomer world. Though their critics routinely mock them for predicting that AGI is right around the corner, they claim that’s never been an essential part of their case: It “isn’t about imminence,” says Berkeley professor Stuart Russell, the author of Human Compatible: Artificial Intelligence and the Problem of Control. Most people I spoke with say their timelines to dangerous systems have actually lengthened slightly in the last year—an important change given how quickly the policy and technical landscapes can shift. 

“If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, ‘Remind me in 2066 and we’ll think about it.’”

Many of them, in fact, emphasize the importance of changing timelines. And even if they are just a tad longer now, Toner tells me that one big-picture story of the ChatGPT era is the dramatic compression of these estimates across the AI world. For a long while, she says, AGI was expected in many decades. Now, for the most part, the predicted arrival is sometime in the next few years to 20 years. So even if we have a little bit more time, she (and many of her peers) continue to see AI safety as incredibly, vitally urgent. She tells me that if AGI were possible anytime in even the next 30 years, “It’s a huge fucking deal. We should have a lot of people working on this.”

So despite the precarious moment doomers find themselves in, their bottom line remains that no matter when AGI is coming (and, again, they say it’s very likely coming), the world is far from ready. 

Maybe you agree. Or maybe you think this future is far from guaranteed. Or that it’s the stuff of science fiction. You may even think AGI is a great big conspiracy theory. You’re not alone, of course—this topic is polarizing. But whatever you think about the doomer mindset, there’s no getting around the fact that certain people in this world have a lot of influence. So here are some of the most prominent people in the space, reflecting on this moment in their own words.

Interviews have been edited and condensed for length and clarity. 


The Nobel laureate who’s not sure what’s coming

Geoffrey Hinton, winner of the Turing Award and the Nobel Prize in physics for pioneering deep learning

The biggest change in the last few years is that there are people who are hard to dismiss who are saying this stuff is dangerous. Like, [former Google CEO] Eric Schmidt, for example, really recognized this stuff could be really dangerous. He and I were in China recently talking to someone on the Politburo, the party secretary of Shanghai, to make sure he really understood—and he did. I think in China, the leadership understands AI and its dangers much better because many of them are engineers.

I’ve been focused on the longer-term threat: When AIs get more intelligent than us, can we really expect that humans will remain in control or even relevant? But I don’t think anything is inevitable. There’s huge uncertainty on everything. We’ve never been here before. Anybody who’s confident they know what’s going to happen seems silly to me. I think this is very unlikely but maybe it’ll turn out that all the people saying AI is way overhyped are correct. Maybe it’ll turn out that we can’t get much further than the current chatbots—we hit a wall due to limited data. I don’t believe that. I think that’s unlikely, but it’s possible. 

I also don’t believe people like Eliezer Yudkowsky, who say if anybody builds it, we’re all going to die. We don’t know that. 

But if you go on the balance of the evidence, I think it’s fair to say that most experts who know a lot about AI believe it’s very probable that we’ll have superintelligence within the next 20 years. [Google DeepMind CEO] Demis Hassabis says maybe 10 years. Even [prominent AI skeptic] Gary Marcus would probably say, “Well, if you guys make a hybrid system with good old-fashioned symbolic logic … maybe that’ll be superintelligent.” [Editor’s note: In September, Marcus predicted AGI would arrive between 2033 and 2040.]

And I don’t think anybody believes progress will stall at AGI. I think more or less everybody believes a few years after AGI, we’ll have superintelligence, because the AGI will be better than us at building AI.

So while I think it’s clear that the winds are getting more difficult, simultaneously, people are putting in many more resources [into developing advanced AI]. I think progress will continue just because there’s many more resources going in.

The deep learning pioneer who wishes he’d seen the risks sooner

Yoshua Bengio, winner of the Turing Award, chair of the International AI Safety Report, and founder of LawZero

Some people thought that GPT-5 meant we had hit a wall, but that isn’t quite what you see in the scientific data and trends.

There have been people overselling the idea that AGI is tomorrow morning, which commercially could make sense. But if you look at the various benchmarks, GPT-5 is just where you would expect the models at that point in time to be. By the way, it’s not just GPT-5, it’s Claude and Google models, too. In some areas where AI systems weren’t very good, like Humanity’s Last Exam or FrontierMath, they’re getting much better scores now than they were at the beginning of the year.

At the same time, the overall landscape for AI governance and safety is not good. There’s a strong force pushing against regulation. It’s like climate change. We can put our head in the sand and hope it’s going to be fine, but it doesn’t really deal with the issue.

The biggest disconnect with policymakers is a misunderstanding of the scale of change that is likely to happen if the trend of AI progress continues. A lot of people in business and governments simply think of AI as just another technology that’s going to be economically very powerful. They don’t understand how much it might change the world if trends continue, and we approach human-level AI. 

Like many people, I had been blinding myself to the potential risks to some extent. I should have seen it coming much earlier. But it’s human. You’re excited about your work and you want to see the good side of it. That makes us a little bit biased in not really paying attention to the bad things that could happen.

Even a small chance—like 1% or 0.1%—of creating an accident where billions of people die is not acceptable. 

The AI veteran who believes AI is progressing—but not fast enough to prevent the bubble from bursting

Stuart Russell, distinguished professor of computer science, University of California, Berkeley, and author of Human Compatible

I hope the idea that talking about existential risk makes you a “doomer” or is “science fiction” comes to be seen as fringe, given that most leading AI researchers and most leading AI CEOs take it seriously. 

There have been claims that AI could never pass a Turing test, or you could never have a system that uses natural language fluently, or one that could parallel-park a car. All these claims just end up getting disproved by progress.

People are spending trillions of dollars to make superhuman AI happen. I think they need some new ideas, but there’s a significant chance they will come up with them, because many significant new ideas have happened in the last few years. 

My fairly consistent estimate for the last 12 months has been that there’s a 75% chance that those breakthroughs are not going to happen in time to rescue the industry from the bursting of the bubble. Because the investments are consistent with a prediction that we’re going to have much better AI that will deliver much more value to real customers. But if those predictions don’t come true, then there’ll be a lot of blood on the floor in the stock markets.

However, the safety case isn’t about imminence. It’s about the fact that we still don’t have a solution to the control problem. If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, “Remind me in 2066 and we’ll think about it.” We don’t know how long it takes to develop the technology needed to control superintelligent AI.

Looking at precedents, the acceptable level of risk for a nuclear plant melting down is about one in a million per year. Extinction is much worse than that. So maybe set the acceptable risk at one in a billion. But the companies are saying it’s something like one in five. They don’t know how to make it acceptable. And that’s a problem.

The professor trying to set the narrative straight on AI safety

David Krueger, assistant professor in machine learning at the University of Montreal and Yoshua Bengio’s Mila Institute, and founder of Evitable

I think people definitely overcorrected in their response to GPT-5. But there was hype. My recollection was that there were multiple statements from CEOs at various levels of explicitness who basically said that by the end of 2025, we’re going to have an automated drop-in replacement remote worker. But it seems like it’s been underwhelming, with agents just not really being there yet.

I’ve been surprised how much these narratives predicting AGI in 2027 capture the public attention. When 2027 comes around, if things still look pretty normal, I think people are going to feel like the whole worldview has been falsified. And it’s really annoying how often when I’m talking to people about AI safety, they assume that I think we have really short timelines to dangerous systems, or that I think LLMs or deep learning are going to give us AGI. They ascribe all these extra assumptions to me that aren’t necessary to make the case. 

I’d expect we need decades for the international coordination problem. So even if dangerous AI is decades off, it’s already urgent. That point seems really lost on a lot of people. There’s this idea of “Let’s wait until we have a really dangerous system and then start governing it.” Man, that is way too late.

I still think people in the safety community tend to work behind the scenes, with people in power, not really with civil society. It gives ammunition to people who say it’s all just a scam or insider lobbying. That’s not to say that there’s no truth to these narratives, but the underlying risk is still real. We need more public awareness and a broad base of support to have an effective response.

If you actually believe there’s a 10% chance of doom in the next 10 years—which I think a reasonable person should, if they take a close look—then the first thing you think is: “Why are we doing this? This is crazy.” That’s just a very reasonable response once you buy the premise.

The governance expert worried about AI safety’s credibility

Helen Toner, acting executive director of Georgetown University’s Center for Security and Emerging Technology and former OpenAI board member

When I got into the space, AI safety was more of a set of philosophical ideas. Today, it’s a thriving set of subfields of machine learning, filling in the gulf between some of the more “out there” concerns about AI scheming, deception, or power-seeking and real concrete systems we can test and play with. 

“I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment.”

AI governance is improving slowly. If we have lots of time to adapt and governance can keep improving slowly, I feel not bad. If we don’t have much time, then we’re probably moving too slow.

I think GPT-5 is generally seen as a disappointment in DC. There’s a pretty polarized conversation around: Are we going to have AGI and superintelligence in the next few years? Or is AI actually just totally all hype and useless and a bubble? The pendulum had maybe swung too far toward “We’re going to have super-capable systems very, very soon.” And so now it’s swinging back toward “It’s all hype.”

I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment. When the predictions about AGI coming in 2027 don’t come true, people will say, “Look at all these people who made fools of themselves. You should never listen to them again.” That’s not the intellectually honest response if they later changed their mind, or if their take was only that they thought it was 20 percent likely and still worth paying attention to. I think that shouldn’t be disqualifying for people to listen to you later, but I do worry it will be a big credibility hit. And that will apply even to people who are very concerned about AI safety and never said anything about very short timelines.

The AI security researcher who now believes AGI is further out—and is grateful

Jeffrey Ladish, executive director at Palisade Research

In the last year, two big things updated my AGI timelines. 

First, the lack of high-quality data turned out to be a bigger problem than I expected. 

Second, the first “reasoning” model, OpenAI’s o1 in September 2024, showed reinforcement learning scaling was more effective than I thought it would be. And then months later, you see the o1 to o3 scale-up and you see pretty crazy impressive performance in math and coding and science—domains where it’s easier to sort of verify the results. But while we’re seeing continued progress, it could have been much faster.

All of this bumps up my median estimate for the start of fully automated AI research and development from three years to maybe five or six years. But those are kind of made-up numbers. It’s hard. I want to caveat all this with, like, “Man, it’s just really hard to do forecasting here.”

Thank God we have more time. We have a possibly very brief window of opportunity to really try to understand these systems before they are capable and strategic enough to pose a real threat to our ability to control them.

But it’s scary to see people think that we’re not making progress anymore when that’s clearly not true. I just know it’s not true because I use the models. One of the downsides of the way AI is progressing is that how fast it’s moving is becoming less legible to normal people. 

Now, this is not true in some domains—like, look at Sora 2. It is so obvious to anyone who looks at it that Sora 2 is vastly better than what came before. But if you ask GPT-4 and GPT-5 why the sky is blue, they’ll give you basically the same answer. It is the correct answer. It’s already saturated the ability to tell you why the sky is blue. So the people who I expect to most understand AI progress right now are the people who are actually building with AIs or using AIs on very difficult scientific problems.

The AGI forecaster who saw the critics coming

Daniel Kokotajlo, executive director of the AI Futures Project; an OpenAI whistleblower; and lead author of “AI 2027,” a vivid scenario where—starting in 2027—AIs progress from “superhuman coders” to “wildly superintelligent” systems in the span of months

AI policy seems to be getting worse, like the “Pro-AI” super PAC [launched earlier this year by executives from OpenAI and Andreessen Horowitz to lobby for a deregulatory agenda], and the deranged and/or dishonest tweets from Sriram Krishnan and David Sacks. AI safety research is progressing at the usual pace, which is excitingly rapid compared to most fields, but slow compared to how fast it needs to be.

We said on the first page of “AI 2027” that our timelines were somewhat longer than 2027. So even when we launched AI 2027, we expected there to be a bunch of critics in 2028 triumphantly saying we’ve been discredited, like the tweets from Sacks and Krishnan. But we thought, and continue to think, that the intelligence explosion will probably happen sometime in the next five to 10 years, and that when it does, people will remember our scenario and realize it was closer to the truth than anything else available in 2025. 

Predicting the future is hard, but it’s valuable to try; people should aim to communicate their uncertainty about the future in a way that is specific and falsifiable. This is what we’ve done and very few others have done. Our critics mostly haven’t made predictions of their own and often exaggerate and mischaracterize our views. They say our timelines are shorter than they are or ever were, or they say we are more confident than we are or were.

I feel pretty good about having longer timelines to AGI. It feels like I just got a better prognosis from my doctor. The situation is still basically the same, though.

Garrison Lovely is a freelance journalist and the author of Obsolete, an online publication and forthcoming book on the discourse, economics, and geopolitics of the race to build machine superintelligence (out spring 2026). His writing on AI has appeared in the New York Times, Nature, Bloomberg, Time, the Guardian, The Verge, and elsewhere.

The great AI hype correction of 2025

15 December 2025 at 05:00

Some disillusionment was inevitable. When OpenAI released a free web app called ChatGPT in late 2022, it changed the course of an entire industry—and several world economies. Millions of people started talking to their computers, and their computers started talking back. We were enchanted, and we expected more.

We got it. Technology companies scrambled to stay ahead, putting out rival products that outdid one another with each new release: voice, images, video. With nonstop one-upmanship, AI companies have presented each new product drop as a major breakthrough, reinforcing a widespread faith that this technology would just keep getting better. Boosters told us that progress was exponential. They posted charts plotting how far we’d come since last year’s models: Look how the line goes up! Generative AI could do anything, it seemed.

Well, 2025 has been a year of reckoning. 


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


For a start, the heads of the top AI companies made promises they couldn’t keep. They told us that generative AI would replace the white-collar workforce, bring about an age of abundance, make scientific discoveries, and help find new cures for disease. FOMO across the world’s economies, at least in the Global North, made CEOs tear up their playbooks and try to get in on the action.

That’s when the shine started to come off. Though the technology may have been billed as a universal multitool that could revamp outdated business processes and cut costs, a number of studies published this year suggest that firms are failing to make the AI pixie dust work its magic. Surveys and trackers from a range of sources, including the US Census Bureau and Stanford University, have found that business uptake of AI tools is stalling. And when the tools do get tried out, many projects stay stuck in the pilot stage. Without broad buy-in across the economy it is not clear how the big AI companies will ever recoup the incredible amounts they’ve already spent in this race. 

At the same time, updates to the core technology are no longer the step changes they once were.

The highest-profile example of this was the botched launch of GPT-5 in August. Here was OpenAI, the firm that had ignited (and to a large extent sustained) the current boom, set to release a brand-new generation of its technology. OpenAI had been hyping GPT-5 for months: “PhD-level expert in anything,” CEO Sam Altman crowed. On another occasion Altman posted, without comment, an image of the Death Star from Star Wars, which OpenAI stans took to be a symbol of ultimate power: Coming soon! Expectations were huge.

And yet, when it landed, GPT-5 seemed to be—more of the same? What followed was the biggest vibe shift since ChatGPT first appeared three years ago. “The era of boundary-breaking advancements is over,” Yannic Kilcher, an AI researcher and popular YouTuber, announced in a video posted two days after GPT-5 came out: “AGI is not coming. It seems very much that we’re in the Samsung Galaxy era of LLMs.”

A lot of people (me included) have made the analogy with phones. For a decade or so, smartphones were the most exciting consumer tech in the world. Today, new products drop from Apple or Samsung with little fanfare. While superfans pore over small upgrades, to most people this year’s iPhone now looks and feels a lot like last year’s iPhone. Is that where we are with generative AI? And is it a problem? Sure, smartphones have become the new normal. But they changed the way the world works, too.

To be clear, the last few years have been filled with genuine “Wow” moments, from the stunning leaps in the quality of video generation models to the problem-solving chops of so-called reasoning models to the world-class competition wins of the latest coding and math models. But this remarkable technology is only a few years old, and in many ways it is still experimental. Its successes come with big caveats.

Perhaps we need to readjust our expectations.

The big reset

Let’s be careful here: The pendulum from hype to anti-hype can swing too far. It would be rash to dismiss this technology just because it has been oversold. The knee-jerk response when AI fails to live up to its hype is to say that progress has hit a wall. But that misunderstands how research and innovation in tech work. Progress has always moved in fits and starts. There are ways over, around, and under walls.

Take a step back from the GPT-5 launch. It came hot on the heels of a series of remarkable models that OpenAI had shipped in the previous months, including o1 and o3 (first-of-their-kind reasoning models that introduced the industry to a whole new paradigm) and Sora 2, which raised the bar for video generation once again. That doesn’t sound like hitting a wall to me.

AI is really good! Look at Nano Banana Pro, the new image generation model from Google DeepMind that can turn a book chapter into an infographic, and much more. It’s just there—for free—on your phone.

And yet you can’t help but wonder: When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental? 

With that in mind, here are four ways to think about the state of AI at the end of 2025: The start of a much-needed hype correction.

01: LLMs are not everything

In some ways, it is the hype around large language models, not AI as a whole, that needs correcting. It has become obvious that LLMs are not the doorway to artificial general intelligence, or AGI, a hypothetical technology that some insist will one day be able to do any (cognitive) task a human can.

Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November.

It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said.

It’s easy to imagine that LLMs can do anything because their use of language is so compelling. It is astonishing how well this technology can mimic the way people write and speak. And we are hardwired to see intelligence in things that behave in certain ways—whether it’s there or not. In other words, we have built machines with humanlike behavior and cannot resist seeing a humanlike mind behind them.

That’s understandable. LLMs have been part of mainstream life for only a few years. But in that time, marketers have preyed on our shaky sense of what the technology can really do, pumping up expectations and turbocharging the hype. As we live with this technology and come to understand it better, those expectations should fall back down to earth.  

02: AI is not a quick fix to all your problems

In July, researchers at MIT published a study that became a tentpole talking point in the disillusionment camp. The headline result was that a whopping 95% of businesses that had tried using AI had found zero value in it.  

The general thrust of that claim was echoed by other research, too. In November, a study by researchers at Upwork, a company that runs an online marketplace for freelancers, found that agents powered by top LLMs from OpenAI, Google DeepMind, and Anthropic failed to complete many straightforward workplace tasks by themselves.

This is miles off Altman’s prediction: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” he wrote on his personal blog in January.

But what gets missed in that MIT study is that the researchers’ measure of success was pretty narrow. That 95% failure rate accounts for companies that had tried to implement bespoke AI systems but had not yet scaled them beyond the pilot stage after six months. It shouldn’t be too surprising that a lot of experiments with experimental technology don’t pan out straight away.

That number also does not include the use of LLMs by employees outside of official pilots. The MIT researchers found that around 90% of the companies they surveyed had a kind of AI shadow economy where workers were using personal chatbot accounts. But the value of that shadow economy was not measured.  

When the Upwork study looked at how well agents completed tasks together with people who knew what they were doing, success rates shot up. The takeaway seems to be that a lot of people are figuring out for themselves how AI might help them with their jobs.

That fits with something the AI researcher and influencer (and coiner of the term “vibe coding”) Andrej Karpathy has noted: Chatbots are better than the average human at a lot of different things (think of giving legal advice, fixing bugs, doing high school math), but they are not better than an expert human. Karpathy suggests this may be why chatbots have proved popular with individual consumers, helping non-experts with everyday questions and tasks, but they have not upended the economy, which would require outperforming skilled employees at their jobs.

That may change. For now, don’t be surprised that AI has not (yet) had the impact on jobs that boosters said it would. AI is not a quick fix, and it cannot replace humans. But there’s a lot to play for. The ways in which AI could be integrated into everyday workflows and business pipelines are still being tried out.   

03: Are we in a bubble? (If so, what kind of bubble?)

If AI is a bubble, is it like the subprime mortgage bubble of 2008 or the internet bubble of 2000? Because there’s a big difference.

The subprime bubble wiped out a big part of the economy, because when it burst it left nothing behind except debt and overvalued real estate. The dot-com bubble wiped out a lot of companies, which sent ripples across the world, but it left behind the infant internet—an international network of cables and a handful of startups, like Google and Amazon, that became the tech giants of today.  

Then again, maybe we’re in a bubble unlike either of those. After all, there’s no real business model for LLMs right now. We don’t yet know what the killer app will be, or if there will even be one. 

And many economists are concerned about the unprecedented amounts of money being sunk into the infrastructure required to build capacity and serve the projected demand. But what if that demand doesn’t materialize? Add to that the weird circularity of many of those deals—with Nvidia paying OpenAI to pay Nvidia, and so on—and it’s no surprise everybody’s got a different take on what’s coming. 

Some investors remain sanguine. In an interview with the Technology Business Programming Network podcast in November, Glenn Hutchins, cofounder of Silver Lake Partners, a major international private equity firm, gave a few reasons not to worry. “Every one of these data centers—almost all of them—has a solvent counterparty that is contracted to take all the output they’re built to suit,” he said. In other words, it’s not a case of “Build it and they’ll come”—the customers are already locked in. 

And, he pointed out, one of the biggest of those solvent counterparties is Microsoft. “Microsoft has the world’s best credit rating,” Hutchins said. “If you sign a deal with Microsoft to take the output from your data center, Satya is good for it.”

Many CEOs will be looking back at the dot-com bubble and trying to learn its lessons. Here’s one way to see it: The companies that went bust back then didn’t have the money to last the distance. Those that survived the crash thrived.

With that lesson in mind, AI companies today are trying to pay their way through what may or may not be a bubble. Stay in the race; don’t get left behind. Even so, it’s a desperate gamble.

But there’s another lesson too. Companies that might look like sideshows can turn into unicorns fast. Take Synthesia, which makes avatar generation tools for businesses. Nathan Benaich, cofounder of the VC firm Air Street Capital, admits that when he first heard about the company a few years ago, back when fear of deepfakes was rife, he wasn’t sure what its tech was for and thought there was no market for it.

“We didn’t know who would pay for lip-synching and voice cloning,” he says. “Turns out there’s a lot of people who wanted to pay for it.” Synthesia now has around 55,000 corporate customers and brings in around $150 million a year. In October, the company was valued at $4 billion.

04: ChatGPT was not the beginning, and it won’t be the end

ChatGPT was the culmination of a decade’s worth of progress in deep learning, the technology that underpins all of modern AI. The seeds of deep learning itself were planted in the 1980s. The field as a whole goes back at least to the 1950s. If progress is measured against that backdrop, generative AI has barely got going.

Meanwhile, research is at a fever pitch. There are more high-quality submissions to the world’s major AI conferences than ever before. This year, organizers of some of those conferences resorted to turning down papers that reviewers had already approved, just to manage numbers. (At the same time, preprint servers like arXiv have been flooded with AI-generated research slop.)

“It’s back to the age of research again,” Sutskever said in that Dwarkesh interview, talking about the current bottleneck with LLMs. That’s not a setback; that’s the start of something new.

“There’s always a lot of hype beasts,” says Benaich. But he thinks there’s an upside to that: Hype attracts the money and talent needed to make real progress. “You know, it was only like two or three years ago that the people who built these models were basically research nerds that just happened on something that kind of worked,” he says. “Now everybody who’s good at anything in technology is working on this.”

Where do we go from here?

The relentless hype hasn’t come just from companies drumming up business for their vastly expensive new technologies. There’s a large cohort of people—inside and outside the industry—who want to believe in the promise of machines that can read, write, and think. It’s a wild, decades-old dream.

But the hype was never sustainable—and that’s a good thing. We now have a chance to reset expectations and see this technology for what it really is—assess its true capabilities, understand its flaws, and take the time to learn how to apply it in valuable (and beneficial) ways. “We’re still trying to figure out how to invoke certain behaviors from this insanely high-dimensional black box of information and skills,” says Benaich.

This hype correction was long overdue. But know that AI isn’t going anywhere. We don’t even fully understand what we’ve built so far, let alone what’s coming next.

Generative AI hype distracts us from AI’s more important breakthroughs

15 December 2025 at 05:00

On April 28, 2022, at a highly anticipated concert in Spokane, Washington, the musician Paul McCartney astonished his audience with a groundbreaking application of AI: He began to perform with a lifelike depiction of his long-deceased musical partner, John Lennon. 

Using recent advances in audio and video processing, engineers had taken the pair’s final performance (London, 1969), separated Lennon’s voice and image from the original mix, and restored them with lifelike clarity.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


For years, researchers like me had taught machines to “see” and “hear” in order to make such a moment possible. As McCartney and Lennon appeared to reunite across time and space, the arena fell silent; many in the crowd began to cry. As an AI scientist and lifelong Beatles fan, I felt profound gratitude that we could experience this truly life-changing moment. 

Later that year, the world was captivated by another major breakthrough: AI conversation. For the first time in history, systems capable of generating new, contextually relevant comments in real time, on virtually any subject, were widely accessible owing to the release of ChatGPT. Billions of people were suddenly able to interact with AI. This ignited the public’s imagination about what AI could be, bringing an explosion of creative ideas, hopes, and fears.

Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind.

This kind of hype has contributed to a frenzy of misunderstandings about what AI actually is and what it can and cannot do. Crucially, generative AI is a seductive distraction from the type of AI that is most likely to make your life better, or even save it: Predictive AI. In contrast to AI designed for generative tasks, predictive AI involves tasks with a finite, known set of answers; the system just has to process information to say which answer is right. A basic example is plant recognition: Point your phone camera at a plant and learn that it’s a Western sword fern. Generative tasks, in contrast, have no finite set of correct answers: The system must blend snippets of information it’s been trained on to create, for example, a novel picture of a fern. 
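To make that distinction concrete, here is a minimal, illustrative sketch in Python. It is not any real product’s API: the plant labels, the hand-written scores standing in for a trained model, and the word fragments are all hypothetical. The point is structural: the predictive function can only ever return one of its known labels, while the generative function composes output for which there is no fixed set of right answers.

# Toy illustration of the predictive/generative distinction described above.
# Nothing here is a real product API; the labels, scores, and word fragments
# are hypothetical stand-ins for a trained model.

import random

# Predictive task: the set of possible answers is finite and known up front.
KNOWN_LABELS = ["western sword fern", "maidenhair fern", "not a fern"]

def predict_label(features: dict) -> str:
    """Score each known label and return the most likely one (classification)."""
    scores = {
        "western sword fern": 2.0 * features.get("frond_length_cm", 0.0),
        "maidenhair fern": 1.5 * features.get("frond_delicacy", 0.0),
        "not a fern": 1.0,  # fallback score
    }
    return max(scores, key=scores.get)

# Generative task: there is no fixed answer set; the system composes new output.
def generate_caption(fragments: list, length: int = 6) -> str:
    """Stitch together training fragments into a novel (possibly wrong) caption."""
    return " ".join(random.choice(fragments) for _ in range(length))

if __name__ == "__main__":
    print(predict_label({"frond_length_cm": 40.0}))  # always one of KNOWN_LABELS
    print(generate_caption(["a", "lush", "green", "fern", "in", "shade"]))  # novel text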

The generative AI technology involved in chatbots, face-swaps, and synthetic video makes for stunning demos, driving clicks and sales as viewers run wild with ideas that superhuman AI will be capable of bringing us abundance or extinction. Yet predictive AI has quietly been improving weather prediction and food safety, enabling higher-quality music production, helping to organize photos, and accurately predicting the fastest driving routes. We incorporate predictive AI into our everyday lives without even thinking about it, a testament to its indispensable utility.

To get a sense of the immense progress on predictive AI and its future potential, we can look at the trajectory of the past 20 years. In 2005, we couldn’t get AI to tell the difference between a person and a pencil. By 2013, AI still couldn’t reliably detect a bird in a photo, and the difference between a pedestrian and a Coke bottle was massively confounding (this is how I learned that bottles do kind of look like people, if people had no heads). The thought of deploying these systems in the real world was the stuff of science fiction. 

Yet over the past 10 years, predictive AI has not only nailed bird detection down to the specific species; it has rapidly improved life-critical medical services like identifying problematic lesions and heart arrhythmia. Because of this technology, seismologists can predict earthquakes and meteorologists can predict flooding more reliably than ever before. Accuracy has skyrocketed for consumer-facing tech that detects and classifies everything from what song you’re thinking of when you hum a tune to which objects to avoid while you’re driving—making self-driving cars a reality. 

In the very near future, we should be able to accurately detect tumors and forecast hurricanes long before they can hurt anyone, realizing the lifelong hopes of people all over the world. That might not be as flashy as generating your own Studio Ghibli–ish film, but it’s definitely hype-worthy. 

Predictive AI systems have also been shown to be incredibly useful when they leverage certain generative techniques within a constrained set of options. Systems of this type are diverse, spanning everything from outfit visualization to cross-language translation. Soon, predictive-generative hybrid systems will make it possible to clone your own voice speaking another language in real time, an extraordinary aid for travel (with serious impersonation risks). There’s considerable room for growth here, but generative AI delivers real value when anchored by strong predictive methods.

To understand the difference between these two broad classes of AI, imagine yourself as an AI system tasked with showing someone what a cat looks like. You could adopt a generative approach, cutting and pasting small fragments from various cat images (potentially from sources that object) to construct a seemingly perfect depiction. The ability of modern generative AI to produce such a flawless collage is what makes it so astonishing.

Alternatively, you could take the predictive approach: Simply locate and point to an existing picture of a cat. That method is much less glamorous but more energy-efficient and more likely to be accurate, and it properly acknowledges the original source. Generative AI is designed to create things that look real; predictive AI identifies what is real. A misunderstanding that generative systems are retrieving things when they are actually creating them has led to grave consequences when text is involved, requiring the withdrawal of legal rulings and the retraction of scientific articles.

Driving this confusion is a tendency for people to hype AI without making it clear what kind of AI they’re talking about (I reckon many don’t know). It’s very easy to equate “AI” with generative AI, or even just language-generating AI, and assume that all other capabilities fall out from there. That fallacy makes a ton of sense: The term literally references “intelligence,” and our human understanding of what “intelligence” might be is often mediated by the use of language. (Spoiler: No one actually knows what intelligence is.) But the phrase “artificial intelligence” was intentionally designed in the 1950s to inspire awe and allude to something humanlike. Today, it just refers to a set of disparate technologies for processing digital data. Some of my friends find it helpful to call it “mathy maths” instead.

The bias toward treating generative AI as the most powerful and real form of AI is troubling given that it consumes considerably more energy than predictive AI systems. It also means using existing human work in AI products against the original creators’ wishes and replacing human jobs with AI systems whose capabilities their work made possible in the first place—without compensation. AI can be amazingly powerful, but that doesn’t mean creators should be ripped off.

Watching this unfold as an AI developer within the tech industry, I’ve drawn important lessons for next steps. The widespread appeal of AI is clearly linked to the intuitive nature of conversation-based interactions. But this method of engagement currently overuses generative methods where predictive ones would suffice, resulting in an awkward situation that’s confusing for users while imposing heavy costs in energy consumption, exploitation, and job displacement. 

We have witnessed just a glimpse of AI’s full potential: The current excitement around AI reflects what it could be, not what it is. Generation-based approaches strain resources while still falling short on representation, accuracy, and the wishes of people whose work is folded into the system. 

If we can shift the spotlight from the hype around generative technologies to the predictive advances already transforming daily life, we can build AI that is genuinely useful, equitable, and sustainable. The systems that help doctors catch diseases earlier, help scientists forecast disasters sooner, and help everyday people navigate their lives more safely are the ones poised to deliver the greatest impact. 

The future of beneficial AI will not be defined by the flashiest demos but by the quiet, rigorous progress that makes technology trustworthy. And if we build on that foundation—pairing predictive strength with more mature data practices and intuitive natural-language interfaces—AI can finally start living up to the promise that many people perceive today.

Dr. Margaret Mitchell is a computer science researcher and chief ethics scientist at AI startup Hugging Face. She has worked in the technology industry for 15 years, and has published over 100 papers on natural language generation, assistive technology, computer vision, and AI ethics. Her work has received numerous awards and has been implemented by multiple technology companies.

AI might not be coming for lawyers’ jobs anytime soon

15 December 2025 at 05:00

When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. “Before graduating, there was discussion about what the job market would look like for us if AI became adopted,” she recalls. 

So when it came time to choose a specialty, Miller—now a junior associate at the law firm Orrick—decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. “Judges haven’t allowed ChatGPT-enabled robots to argue in court yet,” she says.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


She had reason to be worried. The artificial-intelligence job apocalypse seemed to be coming for lawyers. In March 2023, researchers reported that GPT-4 had smashed the Uniform Bar Exam. That same month, an industry report predicted that 44% of legal work could be automated. The legal tech industry entered a boom as law firms began adopting generative AI to mine mountains of documents and draft contracts, work ordinarily done by junior associates. Last month, the law firm Clifford Chance axed 10% of its staff in London, citing increased use of AI as a reason.

But for all the hype, LLMs are still far from thinking like lawyers—let alone replacing them. The models continue to hallucinate case citations, struggle to navigate gray areas of the law and reason about novel questions, and stumble when they attempt to synthesize information scattered across statutes, regulations, and court cases. And there are deeper institutional reasons to think the models could struggle to supplant legal jobs. While AI is reshaping the grunt work of the profession, the end of lawyers may not be arriving anytime soon.

The big experiment

The legal industry has long been defined by long hours and grueling workloads, so the promise of superhuman efficiency is appealing. Law firms are experimenting with general-purpose tools like ChatGPT and Microsoft Copilot and specialized legal tools like Harvey and Thomson Reuters’ CoCounsel, with some building their own in-house tools on top of frontier models. They’re rolling out AI boot camps and letting associates bill hundreds of hours to AI experimentation. As of 2024, 47.8% of attorneys at law firms employing 500 or more lawyers used AI, according to the American Bar Association. 

But lawyers say that LLMs are a long way from reasoning well enough to replace them. Lucas Hale, a junior associate at McDermott Will & Schulte, has been embracing AI for many routine chores. He uses Relativity to sift through long documents and Microsoft Copilot for drafting legal citations. But when he turns to ChatGPT with a complex legal question, he finds the chatbot spewing hallucinations, rambling off topic, or drawing a blank.

“In the case where we have a very narrow question or a question of first impression for the court,” he says, referring to a novel legal question that a court has never decided before, “that’s the kind of thinking that the tool can’t do.”

Much of Hale’s work involves creatively applying the law to new fact patterns. “Right now, I don’t think very much of the work that litigators do, at least not the work that I do, can be outsourced to an AI utility,” he says.

Allison Douglis, a senior associate at Jenner & Block, uses an LLM to kick off her legal research. But the tools only take her so far. “When it comes to actually fleshing out and developing an argument as a litigator, I don’t think they’re there,” she says. She has watched the models hallucinate case citations and fumble through ambiguous areas of the law.

“Right now, I would much rather work with a junior associate than an AI tool,” she says. “Unless they get extraordinarily good very quickly, I can’t imagine that changing in the near future.”

Beyond the bar

The legal industry has seemed ripe for an AI takeover ever since ChatGPT’s triumph on the bar exam. But passing a standardized test isn’t the same as practicing law. The exam tests whether people can memorize legal rules and apply them to hypothetical situations—not whether they can exercise strategic judgment in complicated realities or craft arguments in uncharted legal territory. And models can be trained to ace benchmarks without genuinely improving their reasoning.

But new benchmarks are aiming to better measure the models’ ability to do legal work in the real world. The Professional Reasoning Benchmark, published by Scale AI in November, evaluated leading LLMs on legal and financial tasks designed by professionals in the field. The study found that the models have critical gaps in their reliability for professional adoption, with the best-performing model scoring only 37% on the most difficult legal problems, meaning it met just over a third of possible points on the evaluation criteria. The models frequently made inaccurate legal judgments, and if they did reach correct conclusions, they did so through incomplete or opaque reasoning processes.

“The tools actually are not there to basically substitute [for] your lawyer,” says Afra Feyza Akyurek, the lead author of the paper. “Even though a lot of people think that LLMs have a good grasp of the law, it’s still lagging behind.” 

The paper builds on other benchmarks measuring the models’ performance on economically valuable work. The AI Productivity Index, published by the data firm Mercor in September and updated in December, found that the models have “substantial limitations” in performing legal work. The best-performing model scored 77.9% on legal tasks, meaning it satisfied roughly four out of five evaluation criteria. A model with such a score might generate substantial economic value in some industries, but in fields where errors are costly, it may not be useful at all, the early version of the study noted.  

Professional benchmarks are a big step forward in evaluating the LLMs’ real-world capabilities, but they may still not capture what lawyers actually do. “These questions, although more challenging than those in past benchmarks, still don’t fully reflect the kinds of subjective, extremely challenging questions lawyers tackle in real life,” says Jon Choi, a law professor at the University of Washington School of Law, who coauthored a study on legal benchmarks in 2023. 

Unlike math or coding, in which LLMs have made significant progress, legal reasoning may be challenging for the models to learn. The law deals with messy real-world problems, riddled with ambiguity and subjectivity, that often have no right answer, says Choi. Making matters worse, a lot of legal work isn’t recorded in ways that can be used to train the models, he says. When it is, documents can span hundreds of pages, scattered across statutes, regulations, and court cases that exist in a complex hierarchy.  

But a more fundamental limitation might be that LLMs are simply not trained to think like lawyers. “The reasoning models still don’t fully reason about problems like we humans do,” says Julian Nyarko, a law professor at Stanford Law School. The models may lack a mental model of the world—the ability to simulate a scenario and predict what will happen—and that capability could be at the heart of complex legal reasoning, he says. It’s possible that the current paradigm of LLMs trained on next-word prediction gets us only so far.  

The jobs remain

Despite early signs that AI is beginning to affect entry-level workers, labor statistics have yet to show that lawyers are being displaced. 93.4% of law school graduates in 2024 were employed within 10 months of graduation—the highest rate on record—according to the National Association for Law Placement. The number of graduates working in law firms rose by 13% from 2023 to 2024. 

For now, law firms are slow to shrink their ranks. “We’re not reducing headcounts at this point,” said Amy Ross, the chief of attorney talent at the law firm Ropes & Gray. 

Even looking ahead, the effects could be incremental. “I will expect some impact on the legal profession’s labor market, but not major,” says Mert Demirer, an economist at MIT. “AI is going to be very useful in terms of information discovery and summary,” he says, but for complex legal tasks, “the law’s low risk tolerance, plus the current capabilities of AI, are going to make that case less automatable at this point.” Capabilities may evolve over time, but that’s a big unknown.

It’s not just that the models themselves are not ready to replace junior lawyers. Institutional barriers may also shape how AI is deployed. Higher productivity reduces billable hours, challenging the dominant business model of law firms. Liability looms large for lawyers, and clients may still want a human on the hook. Regulations could also constrain how lawyers use the technology.

Still, as AI takes on some associate work, law firms may need to reinvent their training system. “When junior work dries up, you have to have a more formal way of teaching than hoping that an apprenticeship works,” says Ethan Mollick, a management professor at the Wharton School of the University of Pennsylvania.

Zach Couger, a junior associate at McDermott Will & Schulte, leans on ChatGPT to comb through piles of contracts he once slogged through by hand. He can’t imagine going back to doing the job himself, but he wonders what he’s missing. 

“I’m worried that I’m not getting the same reps that senior attorneys got,” he says, referring to the repetitive training that has long defined the early experiences of lawyers. “On the other hand, it is very nice to have a semi–knowledge expert to just ask questions to that’s not a partner who’s also very busy.” 

Even though an AI job apocalypse looks distant, the uncertainty sticks with him. Lately, Couger finds himself staying up late, wondering if he could be part of the last class of associates at big law firms: “I may be the last plane out.”

What even is the AI bubble?

15 December 2025 at 05:00

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

In July, a widely cited MIT study claimed that 95% of organizations that invested in generative AI were getting “zero return.” Tech stocks briefly plunged. While the study itself was more nuanced than the headlines, for many it still felt like the first hard data point confirming what skeptics had muttered for months: Hype around AI might be outpacing reality.

Then, in August, OpenAI CEO Sam Altman said what everyone in Silicon Valley had been whispering. “Are we in a phase where investors as a whole are overexcited about AI?” he said during a press dinner I attended. “My opinion is yes.” 


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


He compared the current moment to the dot-com bubble. “When bubbles happen, smart people get overexcited about a kernel of truth,” he explained. “Tech was really important. The internet was a really big deal. People got overexcited.” 

With those comments, it was off to the races. The next day’s stock market dip was attributed to the sentiment he shared. The question “Are we in an AI bubble?” became inescapable.

Who thinks it is a bubble? 

The short answer: Lots of people. But not everyone agrees on who or what is overinflated. Tech leaders are using this moment of fear to take shots at their rivals and position themselves as clear winners on the other side. How they describe the bubble depends on where their company sits.

When I asked Meta CEO Mark Zuckerberg about the AI bubble in September, he ran through the historical analogies of past bubbles—railroads, fiber for the internet, the dot-com boom—and noted that in each case, “the infrastructure gets built out, people take on too much debt, and then you hit some blip … and then a lot of the companies end up going out of business.”

But Zuckerberg’s prescription wasn’t for Meta to pump the brakes. It was to keep spending: “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously. But I’d say the risk is higher on the other side.”

Bret Taylor, the chairman of OpenAI and CEO of the AI startup Sierra, uses a mental model from the late ’90s to help navigate this AI bubble. “I think the closest analogue to this AI wave is the dot-com boom or bubble, depending on your level of pessimism,” he recently told me. Back then, he explained, everyone knew e-commerce was going to be big, but there was a massive difference between Buy.com and Amazon. Taylor and others have been trying to position themselves as today’s Amazon.

Still others are arguing that the pain will be widespread. Google CEO Sundar Pichai told the BBC this month that there’s “some irrationality” in the current boom. Asked whether Google would be immune to a bubble bursting, he warned, “I think no company is going to be immune, including us.”

What’s inflating the bubble?

Companies are raising enormous sums of money and seeing unprecedented valuations. Much of that money, in turn, is going toward the buildout of massive data centers—on which both private companies like OpenAI and Elon Musk’s xAI and public ones such as Meta and Google are spending heavily. OpenAI has pledged that it will spend $500 billion to build AI data centers, more than 15 times what was spent on the Manhattan Project.

This eye-popping spending on AI data centers isn’t entirely detached from reality. The leaders of the top AI companies all stress that they’re bottlenecked by their limited access to computing power. You hear it constantly when you talk to them. Startups can’t get the GPU allocations they need. Hyperscalers are rationing compute, saving it for their best customers.

If today’s AI market is as brutally supply-constrained as tech leaders claim, perhaps aggressive infrastructure buildouts are warranted. But some of the numbers are too large to comprehend. Sam Altman has told employees that OpenAI’s moonshot goal is to build 250 gigawatts of computing capacity by 2033, roughly equaling India’s total national electricity demand. Such a plan would cost more than $12 trillion by today’s standards.

“I do think there’s real execution risk,” OpenAI president and cofounder Greg Brockman recently told me about the company’s aggressive infrastructure goals. “Everything we say about the future, we see that it’s a possibility. It is not a certainty, but I don’t think the uncertainty comes from scientific questions. It’s a lot of hard work.”

Who is exposed, and who is to blame?

It depends on who you ask. During the August press dinner, where he made his market-moving comments, Altman was blunt about where he sees the excess. He said it’s “insane” that some AI startups with “three people and an idea” are receiving funding at such high valuations. “That’s not rational behavior,” he said. “Someone’s gonna get burned there, I think.” As Safe Superintelligence cofounder (and former OpenAI chief scientist and cofounder) Ilya Sutskever put it on a recent podcast: Silicon Valley has “more companies than ideas.”

Demis Hassabis, the CEO of Google DeepMind, offered a similar diagnosis when I spoke with him in November. “It feels like there’s obviously a bubble in the private market,” he said. “You look at seed rounds with just nothing being tens of billions of dollars. That seems a little unsustainable.”

Anthropic CEO Dario Amodei also struck at his competition during the New York Times DealBook Summit in early December. He said he feels confident about the technology itself but worries about how others are behaving on the business side: “On the economic side, I have my concerns where, even if the technology fulfills all its promises, I think there are players in the ecosystem who, if they just make a timing error, they just get it off by a little bit, bad things could happen.”

He stopped short of naming Sam Altman and OpenAI, but the implication was clear. “There are some players who are YOLOing,” he said. “Let’s say you’re a person who just kind of constitutionally wants to YOLO things or just likes big numbers. Then you may turn the dial too far.”

Amodei also flagged “circular deals,” or the increasingly common arrangements where chip suppliers like Nvidia invest in AI companies that then turn around and spend those funds on their chips. Anthropic has done some of these, he said, though “not at the same scale as some other players.” (OpenAI is at the center of a number of such deals, as are Nvidia, CoreWeave, and a roster of other players.) 

The danger, he explained, comes when the numbers get too big: “If you start stacking these where they get to huge amounts of money, and you’re saying, ‘By 2027 or 2028 I need to make $200 billion a year,’ then yeah, you can overextend yourself.”

Zuckerberg shared a similar message at an internal employee Q&A session after Meta’s last earnings call. He noted that unprofitable startups like OpenAI and Anthropic risk bankruptcy if they misjudge the timing of their investments, but Meta has the advantage of strong cash flow, he reassured staff.

How could a bubble burst?

My conversations with tech executives and investors suggest that the bubble will be most likely to pop if overfunded startups can’t turn a profit or grow into their lofty valuations. This bubble could last longer than past ones, given that private companies aren’t traded on public markets and their valuations therefore adjust more slowly, but the ripple effects will still be profound when the end comes.

If companies making grand commitments to data center buildouts no longer have the revenue growth to support them, the headline deals that have propped up the stock market come into question. Anthropic’s Amodei illustrated the problem during his DealBook Summit appearance, where he said the multi-year data center commitments he has to make combine with the company’s rapid, unpredictable revenue growth rate to create a “cone of uncertainty” about how much to spend.

The two most prominent private players in AI, OpenAI and Anthropic, have yet to turn a profit. A recent Deutsche Bank chart put the situation in stark historical context. Amazon burned through $3 billion before becoming profitable. Tesla, around $4 billion. Uber, $30 billion. OpenAI is projected to burn through $140 billion by 2029, while Anthropic is expected to burn $20 billion by 2027.

Consultants at Bain estimate that the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030 just to justify the investment. That’s more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. When I talk to leaders of these large tech companies, they all agree that their sprawling businesses can absorb an expensive miscalculation about the returns from their AI infrastructure buildouts. It’s all the other companies that are either highly leveraged with debt or just unprofitable—even OpenAI and Anthropic—that they worry about. 

Still, given the level of spending, AI needs a viable business model beyond subscriptions, which won’t be able to drive profits from billions of people’s eyeballs the way the ad-driven businesses that have defined the last 20 years of the internet do. Even the largest tech companies know they need to ship the world-changing agents they keep hyping: AI that can fully replace coworkers and complete tasks in the real world.

For now, investors are mostly buying into the hype of the powerful AI systems that these data center buildouts will supposedly unlock in the future. At some point the biggest spenders, like OpenAI, will need to show investors that the money spent on the infrastructure buildout was worth it.

There’s also still a lot of uncertainty about the technical direction that AI is heading in. LLMs are expected to remain critical to more advanced AI systems, but industry leaders can’t seem to agree on which additional breakthroughs are needed to achieve artificial general intelligence, or AGI. Some are betting on new kinds of AI that can understand the physical world, while others are focused on training AI to learn in a general way, like a human. In other words, what if all this unprecedented spending turns out to have been backing the wrong horse?

The question now

What makes this moment surreal is the honesty. The same people pouring billions into AI will openly tell you it might all come crashing down. 

Taylor framed it as two truths existing at once. “I think it is both true that AI will transform the economy,” he told me, “and I think we’re also in a bubble, and a lot of people will lose a lot of money. I think both are absolutely true at the same time.”

He compared it to the internet. Webvan failed, but Instacart succeeded years later with essentially the same idea. If you were an Amazon shareholder from its IPO to now, you’re looking pretty good. If you were a Webvan shareholder, you probably feel differently. 

“When the dust settles and you see who the winners are, society benefits from those inventions,” Amazon founder Jeff Bezos said in October. “This is real. The benefit to society from AI is going to be gigantic.”

Goldman Sachs says the AI boom now looks the way tech stocks did in 1997, several years before the dot-com bubble actually burst. The bank flagged five warning signs seen in the late 1990s that investors should watch now: peak investment spending, falling corporate profits, rising corporate debt, Fed rate cuts, and widening credit spreads. We’re probably not at 1999 levels yet. But the imbalances are building fast. Michael Burry, who famously called the 2008 housing bubble collapse (as seen in the film The Big Short), recently compared the AI boom to the 1990s dot-com bubble too.

Maybe AI will save us from our own irrational exuberance. But for now, we’re living in an in-between moment when everyone knows what’s coming but keeps blowing more air into the balloon anyway. As Altman put it that night at dinner: “Someone is going to lose a phenomenal amount of money. We don’t know who.”

Alex Heath is the author of Sources, a newsletter about the AI race, and the cohost of ACCESS, a podcast about the tech industry’s inside conversations. Previously, he was deputy editor at The Verge.

AI materials discovery now needs to move into the real world

15 December 2025 at 05:00

The microwave-size instrument at Lila Sciences in Cambridge, Massachusetts, doesn’t look all that different from others that I’ve seen in state-of-the-art materials labs. Inside its vacuum chamber, the machine zaps a palette of different elements to create vaporized particles, which then fly through the chamber and land to create a thin film, using a technique called sputtering. What sets this instrument apart is that artificial intelligence is running the experiment; an AI agent, trained on vast amounts of scientific literature and data, has determined the recipe and is varying the combination of elements. 

Later, a person will walk the samples, each containing multiple potential catalysts, over to a different part of the lab for testing. Another AI agent will scan and interpret the data, using it to suggest another round of experiments to try to optimize the materials’ performance.  


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


For now, a human scientist keeps a close eye on the experiments and will approve the next steps on the basis of the AI’s suggestions and the test results. But the startup is convinced this AI-controlled machine is a peek into the future of materials discovery—one in which autonomous labs could make it far cheaper and faster to come up with novel and useful compounds. 

Flush with hundreds of millions of dollars in new funding, Lila Sciences is one of AI’s latest unicorns. The company is on a larger mission to use AI-run autonomous labs for scientific discovery—the goal is to achieve what it calls scientific superintelligence. But I’m here this morning to learn specifically about the discovery of new materials. 

""
Lila Sciences’ John Gregoire (background) and Rafael Gómez-Bombarelli watch as an AI-guided sputtering instrument makes samples of thin-film alloys.
CODY O’LOUGHLIN

We desperately need better materials to solve our problems. We’ll need improved electrodes and other parts for more powerful batteries; compounds to more cheaply suck carbon dioxide out of the air; and better catalysts to make green hydrogen and other clean fuels and chemicals. And we will likely need novel materials like higher-temperature superconductors, improved magnets, and different types of semiconductors for a next generation of breakthroughs in everything from quantum computing to fusion power to AI hardware. 

But materials science has not had many commercial wins in the last few decades. In part because of its complexity and the lack of successes, the field has become something of an innovation backwater, overshadowed by the more glamorous—and lucrative—search for new drugs and insights into biology.

The idea of using AI for materials discovery is not exactly new, but it got a huge boost in 2020 when DeepMind showed that its AlphaFold2 model could accurately predict the three-dimensional structure of proteins. Then, in 2022, came the success and popularity of ChatGPT. The hope that similar AI models using deep learning could aid in doing science captivated tech insiders. Why not use our new generative AI capabilities to search the vast chemical landscape and help simulate atomic structures, pointing the way to new substances with amazing properties?

“Simulations can be super powerful for framing problems and understanding what is worth testing in the lab. But there’s zero problems we can ever solve in the real world with simulation alone.”

John Gregoire, chief autonomous science officer, Lila Sciences

Researchers touted an AI model that had reportedly discovered “millions of new materials.” The money began pouring in, funding a host of startups. But so far there has been no “eureka” moment, no ChatGPT-like breakthrough—no discovery of new miracle materials or even slightly better ones.

The startups that want to find useful new compounds face a common bottleneck: By far the most time-consuming and expensive step in materials discovery is not imagining new structures but making them in the real world. Until you try to synthesize a material, you don’t know whether it can actually be made or whether it will be stable, and many of its properties remain unknown until you test it in the lab.

“Simulations can be super powerful for kind of framing problems and understanding what is worth testing in the lab,” says John Gregoire, Lila Sciences’ chief autonomous science officer. “But there’s zero problems we can ever solve in the real world with simulation alone.” 

Startups like Lila Sciences have staked their strategies on using AI to transform experimentation and are building labs that use agents to plan, run, and interpret the results of experiments to synthesize new materials. Automation in laboratories already exists. But the idea is to have AI agents take it to the next level by directing autonomous labs, where their tasks could include designing experiments and controlling the robotics used to shuffle samples around. And, most important, companies want to use AI to vacuum up and analyze the vast amount of data produced by such experiments in the search for clues to better materials.

If they succeed, these companies could shorten the discovery process from decades to a few years or less, helping uncover new materials and optimize existing ones. But it’s a gamble. Even though AI is already taking over many laboratory chores and tasks, finding new—and useful—materials on its own is another matter entirely. 

Innovation backwater

I have been reporting about materials discovery for nearly 40 years, and to be honest, there have been only a few memorable commercial breakthroughs, such as lithium-ion batteries, over that time. There have been plenty of scientific advances to write about, from perovskite solar cells to graphene transistors to metal-organic frameworks (MOFs), materials based on an intriguing type of molecular architecture that recently won its inventors a Nobel Prize. But few of those advances—including MOFs—have made it far out of the lab. Others, like quantum dots, have found some commercial uses, but in general, the kinds of life-changing inventions created in earlier decades have been lacking.

Blame the amount of time (typically 20 years or more) and the hundreds of millions of dollars it takes to make, test, optimize, and manufacture a new material—and the industry’s lack of interest in spending that kind of time and money in low-margin commodity markets. Or maybe we’ve just run out of ideas for making stuff.

The need to both speed up that process and find new ideas is the reason researchers have turned to AI. For decades, scientists have used computers to design potential materials, calculating where to place atoms to form structures that are stable and have predictable characteristics. It’s worked—but only kind of. Advances in AI have made that computational modeling far faster and have promised the ability to quickly explore a vast number of possible structures. Google DeepMind, Meta, and Microsoft have all launched efforts to bring AI tools to the problem of designing new materials. 

But the limitations that have always plagued computational modeling of new materials remain. With many types of materials, such as crystals, useful characteristics often can’t be predicted solely by calculating atomic structures.

To uncover and optimize those properties, you need to make something real. Or as Rafael Gómez-Bombarelli, one of Lila’s cofounders and an MIT professor of materials science, puts it: “Structure helps us think about the problem, but it’s neither necessary nor sufficient for real materials problems.”

Perhaps no advance exemplified the gap between the virtual and physical worlds more than DeepMind’s announcement in late 2023 that it had used deep learning to discover “millions of new materials,” including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis.” In technical terms, the arrangement of atoms represented a minimum energy state where they were content to stay put. This was “an order-of-magnitude expansion in stable materials known to humanity,” the DeepMind researchers proclaimed.

To the AI community, it appeared to be the breakthrough everyone had been waiting for. The DeepMind research not only offered a gold mine of possible new materials, it also created powerful new computational methods for predicting a large number of structures.

But some materials scientists had a far different reaction. After closer scrutiny, researchers at the University of California, Santa Barbara, said they’d found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In fact, the scientists reported, they didn’t find any truly novel compounds among the ones they looked at; some were merely “trivial” variations of known ones. The scientists appeared particularly peeved that the potential compounds were labeled materials. They wrote: “We would respectfully suggest that the work does not report any new materials but reports a list of proposed compounds. In our view, a compound can be called a material when it exhibits some functionality and, therefore, has potential utility.”

Some of the imagined crystals simply defied the conditions of the real world. To do computations on so many possible structures, DeepMind researchers simulated them at absolute zero, where atoms are well ordered; they vibrate a bit but don’t move around. At higher temperatures—the kind that would exist in the lab or anywhere in the world—the atoms fly about in complex ways, often creating more disorderly crystal structures. A number of the so-called novel materials predicted by DeepMind appeared to be well-ordered versions of disordered ones that were already known. 

More generally, the DeepMind paper was simply another reminder of how challenging it is to capture physical realities in virtual simulations—at least for now. Because of the limitations of computational power, researchers typically perform calculations on relatively few atoms. Yet many desirable properties are determined by the microstructure of the materials—at a scale much larger than the atomic world. And some effects, like high-temperature superconductivity or even the catalysis that is key to many common industrial processes, are far too complex or poorly understood to be explained by atomic simulations alone.

A common language

Even so, there are signs that the divide between simulations and experimental work is beginning to narrow. DeepMind, for one, says that since the release of the 2023 paper it has been working with scientists in labs around the world to synthesize AI-identified compounds and has achieved some success. Meanwhile, a number of the startups entering the space are looking to combine computational and experimental expertise in one organization. 

One such startup is Periodic Labs, cofounded by Ekin Dogus Cubuk, a physicist who led the scientific team that generated the 2023 DeepMind headlines, and by Liam Fedus, a co-creator of ChatGPT at OpenAI. Despite its founders’ background in computational modeling and AI software, the company is building much of its materials discovery strategy around synthesis done in automated labs. 

The vision behind the startup is to link these different fields of expertise by using large language models that are trained on scientific literature and able to learn from ongoing experiments. An LLM might suggest the recipe and conditions to make a compound; it can also interpret test data and feed additional suggestions to the startup’s chemists and physicists. In this strategy, simulations might suggest possible material candidates, but they are also used to help explain the experimental results and suggest possible structural tweaks.

The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.

Periodic Labs, like Lila Sciences, has ambitions beyond designing and making new materials. It wants to “create an AI scientist”—specifically, one adept at the physical sciences. “LLMs have gotten quite good at distilling chemistry information, physics information,” says Cubuk, “and now we’re trying to make it more advanced by teaching it how to do science—for example, doing simulations, doing experiments, doing theoretical modeling.”

The approach, like that of Lila Sciences, is based on the expectation that a better understanding of the science behind materials and their synthesis will lead to clues that could help researchers find a broad range of new ones. One target for Periodic Labs is materials whose properties are defined by quantum effects, such as new types of magnets. The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.

Superconductors are materials in which electricity flows without any resistance and, thus, without producing heat. So far, the best of these materials become superconducting only at relatively low temperatures and require significant cooling. If they can be made to work at or close to room temperature, they could lead to far more efficient power grids, new types of quantum computers, and even more practical high-speed magnetic-levitation trains. 

""
Lila staff scientist Natalie Page (right), Gómez-Bombarelli, and Gregoire inspect thin-film samples after they come out of the sputtering machine and before they undergo testing.
CODY O’LOUGHLIN

The failure to find a room-temperature superconductor is one of the great disappointments in materials science over the last few decades. I was there when President Reagan spoke about the technology in 1987, during the peak hype over newly made ceramics that became superconducting at the relatively balmy temperature of 93 Kelvin (that’s −292 °F), enthusing that they “bring us to the threshold of a new age.” There was a sense of optimism among the scientists and businesspeople in that packed ballroom at the Washington Hilton as Reagan anticipated “a host of benefits, not least among them a reduced dependence on foreign oil, a cleaner environment, and a stronger national economy.” In retrospect, it might have been one of the last times that we pinned our economic and technical aspirations on a breakthrough in materials.

The promised new age never came. Scientists still have not found a material that becomes superconducting at room temperatures, or anywhere close, under normal conditions. The best existing superconductors are brittle and tend to make lousy wires.

One of the reasons that finding higher-temperature superconductors has been so difficult is that no theory explains the effect at relatively high temperatures—or can predict it simply from the placement of atoms in the structure. It will ultimately fall to lab scientists to synthesize any interesting candidates, test them, and search the resulting data for clues to understanding the still puzzling phenomenon. Doing so, says Cubuk, is one of the top priorities of Periodic Labs.

AI in charge

It can take a researcher a year or more to make a crystal structure for the first time. Then there are typically years of further work to test its properties and figure out how to make the larger quantities needed for a commercial product. 

Startups like Lila Sciences and Periodic Labs are pinning their hopes largely on the prospect that AI-directed experiments can slash those times. One reason for the optimism is that many labs have already incorporated a lot of automation, for everything from preparing samples to shuttling test items around. Researchers routinely use robotic arms, software, automated versions of microscopes and other analytical instruments, and mechanized tools for manipulating lab equipment.

The automation allows, among other things, for high-throughput synthesis, in which multiple samples with various combinations of ingredients are rapidly created and screened in large batches, greatly speeding up the experiments.

The idea is that using AI to plan and run such automated synthesis can make it far more systematic and efficient. AI agents, which can collect and analyze far more data than any human possibly could, can use real-time information to vary the ingredients and synthesis conditions until they get a sample with the optimal properties. Such AI-directed labs could do far more experiments than a person and could be far smarter than existing systems for high-throughput synthesis. 
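
To make that loop concrete, here is a deliberately toy sketch in Python of the propose-synthesize-test-update cycle described above. Everything in it is invented for illustration: the three-element recipe space, the scoring function standing in for synthesis and testing, and the naive “perturb the best recipe so far” proposal rule. Real systems rely on far richer models, robotics, and human sign-off at each step.

```python
# Hypothetical closed-loop materials optimization, for illustration only.
import random

def run_experiment(recipe):
    """Stand-in for synthesis plus testing: returns a measured 'performance'."""
    # Invented ground truth: performance peaks when element A is ~30% of the mix.
    a_fraction = recipe["A"] / sum(recipe.values())
    noise = random.gauss(0, 0.02)  # experimental noise
    return 1.0 - abs(a_fraction - 0.30) + noise

def propose_next(history, step=5):
    """Naive 'agent': nudge the best recipe seen so far by one element."""
    best_recipe, _ = max(history, key=lambda pair: pair[1])
    candidate = dict(best_recipe)
    element = random.choice(list(candidate))
    candidate[element] = max(1, candidate[element] + random.choice([-step, step]))
    return candidate

# Start from an arbitrary three-element mix (arbitrary units of each ingredient).
first_recipe = {"A": 20, "B": 40, "C": 40}
history = [(first_recipe, run_experiment(first_recipe))]

for _ in range(20):                 # 20 closed-loop iterations
    recipe = propose_next(history)
    score = run_experiment(recipe)  # a human reviewer could approve each step here
    history.append((recipe, score))

best_recipe, best_score = max(history, key=lambda pair: pair[1])
print("Best recipe found:", best_recipe, "score:", round(best_score, 3))
```

In practice, the proposal step is where the AI earns its keep: the random perturbation above would be replaced by a model that uses all the accumulated data to predict which untried recipes and processing conditions are most promising.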

But so-called self-driving labs for materials are still a work in progress.

Many types of materials require solid-state synthesis, a set of processes that are far more difficult to automate than the liquid-handling activities that are commonplace in making drugs. You need to prepare and mix powders of multiple inorganic ingredients in the right combination for making, say, a catalyst, and then decide how to process the sample to create the desired structure—for example, identifying the right temperature and pressure at which to carry out the synthesis. Even determining what you’ve made can be tricky.

In 2023, the A-Lab at Lawrence Berkeley National Laboratory claimed to be the first fully automated lab to use inorganic powders as starting ingredients. Subsequently, scientists reported that the autonomous lab had used robotics and AI to synthesize and test 41 novel materials, including some predicted in the DeepMind database. Some critics questioned the novelty of what was produced and complained that the automated analysis of the materials was not up to experimental standards, but the Berkeley researchers defended the effort as simply a demonstration of the autonomous system’s potential.

“How it works today and how we envision it are still somewhat different. There’s just a lot of tool building that needs to be done,” says Gerbrand Ceder, the principal scientist behind the A-Lab. 

AI agents are already getting good at doing many laboratory chores, from preparing recipes to interpreting some kinds of test data—finding, for example, patterns in a micrograph that might be hidden to the human eye. But Ceder is hoping the technology could soon “capture human decision-making,” analyzing ongoing experiments to make strategic choices on what to do next. For example, his group is working on an improved synthesis agent that would better incorporate what he calls scientists’ “diffused” knowledge—the kind gained from extensive training and experience. “I imagine a world where people build agents around their expertise, and then there’s sort of an uber-model that puts it together,” he says. “The uber-model essentially needs to know what agents it can call on and what they know, or what their expertise is.”

“In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in. The AI revolution is about finally gathering all the scientific data we have.”

Gerbrand Ceder, principal scientist, A-Lab

One of the strengths of AI agents is their ability to devour vast amounts of scientific literature. “In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in,” says Ceder. It’s impossible for anyone to keep up. “The AI revolution is about finally gathering all the scientific data we have,” he says.

Last summer, Ceder became the chief science officer at an AI materials discovery startup called Radical AI and took a sabbatical from the University of California, Berkeley, to help set up its self-driving labs in New York City. A slide deck shows the portfolio of different AI agents and generative models meant to help realize Ceder’s vision. If you look closely, you can spot an LLM called the “orchestrator”—it’s what CEO Joseph Krause calls the “head honcho.” 

New hope

So far, despite the hype around the use of AI to discover new materials and the growing momentum—and money—behind the field, there still has not been a convincing big win. There is no example like the 2016 victory of DeepMind’s AlphaGo over a Go world champion. Or like AlphaFold’s achievement in mastering one of biomedicine’s hardest and most time-consuming chores, predicting 3D structures of proteins. 

The field of materials discovery is still waiting for its moment. It could come if AI agents can dramatically speed the design or synthesis of practical materials similar to, but better than, what we have today. Or maybe the moment will be the discovery of a truly novel material, such as a room-temperature superconductor.

A small window provides a view of the inside workings of Lila’s sputtering instrument. The startup uses the machine to create a wide variety of experimental samples, including potential materials that could be useful for coatings and catalysts.
CODY O’LOUGHLIN

With or without such a breakthrough moment, startups face the challenge of trying to turn their scientific achievements into useful materials. The task is particularly difficult because any new materials would likely have to be commercialized in an industry dominated by large incumbents that are not particularly prone to risk-taking.

Susan Schofer, a tech investor and partner at the venture capital firm SOSV, is cautiously optimistic about the field. But when she evaluates startups to invest in, Schofer, who spent several years in the mid-2000s as a catalyst researcher at one of the first startups using automation and high-throughput screening for materials discovery (it didn’t survive), wants to see evidence that the technology can translate into commercial successes.

In particular, she wants to see evidence that the AI startups are already “finding something new, that’s different, and know how they are going to iterate from there.” And she wants to see a business model that captures the value of new materials. She says, “I think the ideal would be: I got a spec from the industry. I know what their problem is. We’ve defined it. Now we’re going to go build it. Now we have a new material that we can sell, that we have scaled up enough that we’ve proven it. And then we partner somehow to manufacture it, but we get revenue off selling the material.”

Schofer says that while she gets the vision of trying to redefine science, she’d advise startups to “show us how you’re going to get there.” She adds, “Let’s see the first steps.”

Demonstrating those first steps could be essential in enticing large existing materials companies to embrace AI technologies more fully. Corporate researchers in the industry have been burned before—by the promise over the decades that increasingly powerful computers will magically design new materials; by combinatorial chemistry, a fad that raced through materials R&D labs in the early 2000s with little tangible result; and by the promise that synthetic biology would make our next generation of chemicals and materials.

More recently, the materials community has been blanketed by a new hype cycle around AI. Some of that hype was fueled by the 2023 DeepMind announcement of the discovery of “millions of new materials,” a claim that, in retrospect, clearly overpromised. And it was further fueled when an MIT economics student posted a paper in late 2024 claiming that a large, unnamed corporate R&D lab had used AI to efficiently invent a slew of new materials. AI, it seemed, was already revolutionizing the industry.

A few months later, the MIT economics department concluded that “the paper should be withdrawn from public discourse.” Two prominent MIT economists who are acknowledged in a footnote in the paper added that they had “no confidence in the provenance, reliability or validity of the data and the veracity of the research.”

Can AI move beyond the hype and false hopes and truly transform materials discovery? Maybe. There is ample evidence that it’s changing how materials scientists work, providing them—if nothing else—with useful lab tools. Researchers are increasingly using LLMs to query the scientific literature and spot patterns in experimental data. 

But it’s still early days in turning those AI tools into actual materials discoveries. The use of AI to run autonomous labs, in particular, is just getting underway; making and testing stuff takes time and lots of money. The morning I visited Lila Sciences, its labs were largely empty, and it’s now preparing to move into a much larger space a few miles away. Periodic Labs is just beginning to set up its lab in San Francisco. It’s starting with manual synthesis guided by AI predictions; its robotic high-throughput lab will come soon. Radical AI reports that its lab is almost fully autonomous but plans to soon move to a larger space.

""
Prominent AI researchers Liam Fedus (left) and Ekin Dogus Cubuk are the cofounders of Periodic Labs. The San Francisco–based startup aims to build an AI scientist that’s adept at the physical sciences.
JASON HENRY

When I talk to the scientific founders of these startups, I hear a renewed excitement about a field that long operated in the shadows of drug discovery and genomic medicine. For one thing, there is the money. “You see this enormous enthusiasm to put AI and materials together,” says Ceder. “I’ve never seen this much money flow into materials.”

Reviving the materials industry is a challenge that goes beyond scientific advances, however. It means selling companies on a whole new way of doing R&D.

But the startups benefit from a huge dose of confidence borrowed from the rest of the AI industry. And maybe that, after years of playing it safe, is just what the materials business needs.
