
Six Ways You Can Use an Old Chromecast (Beyond Streaming Movies and Shows)

The Chromecast was one of the most useful little gadgets Google ever made, so of course the company decided to ditch the product line. The Google Cast functionality lives on in the Google TV Streamer and in Google TV devices and televisions, but sadly we won't see another Chromecast go on sale.

If you've got an older Chromecast hanging around, it'll still work fine for now. However, you might soon be moving on to a newer streaming device—or perhaps you already have—and that's left you wondering what to do with your older hardware. In fact, these small dongles are more versatile than you might have realized.

While streaming content from the likes of Netflix and Apple TV is going to be the primary use for these devices for most people, you can do plenty more with them—thanks to the casting support that Google and other developers have built into their apps.

Keep an eye on your property

If you've got a Chromecast-compatible security camera (including Google's Nest Cams), you can see a live feed on your Chromecast, making it easy to set up a mini security monitoring center if you have a smaller monitor or television somewhere to spare.

Getting the feed up on screen is as easy as saying "Hey Google, show my..." followed by the camera name (as listed in the Google Home app). On the Chromecast with Google TV, you can also open the Google Home widget that appears on the main Settings pane.

Set up a second screen wirelessly

Google Chrome
You can cast anything from a Chrome tab. Credit: Lifehacker

Something else you can throw to a Chromecast in seconds: any tab you happen to have open in Google Chrome on your laptop or desktop. Just click the three dots in the top right corner of the browser window, then choose Cast, Save and Share > Cast.

This means you can use the monitor or TV that your Chromecast is hooked up to as a second screen, with no cables required—just a wifi network.

Stream music, podcasts, and audiobooks

When it comes to slinging content to your TV screen, you're going to think about movies and shows first and foremost, but the Google Cast standard works with audio apps as well—including the likes of Spotify, Pocket Casts, and Audible.

This is especially worth looking into if you've got a soundbar or a high-end speaker system connected to your television, because it means you can enjoy your audio streams at a much higher volume and a much higher level of quality, compared to your phone.

Play some simple games

This one needs a Chromecast that supports locally installed apps, so I'm primarily talking about the Chromecast with Google TV. That device lets you set up games to play with the remote or a connected Bluetooth controller.

See what you can find by browsing the Google Play Store, but Super Macro 64 showcases 25 different titles you can play easily, while the folks at XDA Developers have put together a full guide to creating a retro game emulator with the help of RetroArch.

Display photos and wallpapers

Google Home
Your Chromecast can display photos and even artwork. Credit: Lifehacker

Chromecasts work great as a way to add some ambience to a room when you're not actually watching something on a TV or monitor. You can show your own personal pictures, or a selection of nature shots, or pretty much anything you want.

Either cast via Google Photos (open an album, tap the three dots in the top right corner, then Cast), or set up a screensaver through the Google Home app. Select your Chromecast, tap the gear icon (top right), then choose Ambient mode.

Keep in touch

Trying to hold video calls—whether with family over the holidays or colleagues during a meeting—isn't always easy on a phone screen or even a laptop screen, so why not take advantage of a larger monitor or TV with a Chromecast plugged into it?

For this to work you need to be using Google Meet in a web browser on a computer. You can either choose the "cast this meeting" option before it starts, or click the three dots during the meeting (Google has full instructions online).


‘An unhealthy and creepy obsession’: Ilhan Omar on Trump’s attacks

The Zen-like US representative from Minnesota has had the highest level of death threats of any congressperson because of the president’s attacks

“That’s Teddy,” said Tim Mynett, husband of the US representative Ilhan Omar, as their five-year-old labrador retriever capered around her office on Capitol Hill. “If you make too much eye contact, he’ll lose it. He’s my best friend – and he’s our security detail these days.”

The couple were sitting on black leather furniture around a coffee table. Apart from a sneezing fit that took her husband by surprise, Omar had an unusual Zen-like calm for someone who receives frequent death threats and is the subject of a vendetta from the most powerful man in the world.

© Photograph: Caroline Gutman/The Guardian


Behind the scenes at the Royal Opera’s spectacular Turandot – photo essay

Puccini’s opera returns to Covent Garden in a vivid staging that, although 40 years old, still feels fresh and fun. David Levene had exclusive access to rehearsals to witness the severed heads, the sumptuous costumes – and the executioner going green

Andrei Șerban’s staging, with dazzling designs by Sally Jacobs, made its debut in 1984 and is the Royal Opera’s longest-running production. This is its 19th revival: the performance on 18 December will be its 295th at Covent Garden. Turandot tackles grand emotions and even grander themes: love, fear, devotion, power, loyalty, life and death in a fantastical, fairytale version of imperial China. And, of course, there’s surely opera’s most famous moment, the showstopper aria Nessun Dorma.

“If the opera has depths, Șerban is content to ignore them, but for once it doesn’t seem to matter. The three-storey Chinese pagoda set, army of extras and troupe of masked dancers make his cartoon-coloured creation the nearest the company has to a West End spectacular,” wrote the Guardian’s Erica Jeal reviewing a 2005 revival.

Puccini’s libretto states that the emperor appears among “clouds of incense … among the clouds like a god”. In this production he does indeed appear as if from the heavens, his magnificent throne lowered slowly to the ground.

© Photograph: David Levene/The Guardian


David Squires on … World Cup supply-and-demand ticket ultras, plus an Anfield truce

Our cartoonist on exorbitant World Cup ticket prices and peace breaking out on Merseyside

© Illustration: David Squires/The Guardian


Michael Douglas on One Flew Over the Cuckoo’s Nest: ‘My half of the producing fee I gave to Dad’

The actor looks back on his first foray as producer as the Oscar-winning drama reaches its 50th anniversary

One Flew Over the Cuckoo’s Nest at 50: the spirit of rebellion lives on

His early career was defined by the Vietnam war, with roles in political films such as Hail, Hero! and Summertree. So it felt natural for Michael Douglas, just 31, to make his first foray into producing with One Flew Over the Cuckoo's Nest, a tale of one man raging against the system.

Fifty years since its release, Douglas is struck by how Cuckoo's Nest resonates anew in today's landscape. "It's about as classic a story as we'll ever have and it seems timeless now, with what's going on in our country politically, about man versus the machine and individuality versus the corporate world," the 81-year-old says via Zoom from Santa Barbara, California.

© Photograph: Snap/Shutterstock


iFixit's New AI Assistant Can Help You Fix Almost Anything

Generative AI has advanced to the stage where you can ask bots such as ChatGPT or Gemini questions about almost anything, and get reasonable-sounding responses—and now renowned gadget repair site iFixit has joined the party with an AI assistant of its own, ready and willing to solve any of your hardware problems.

While you can already ask general-purpose chatbots for advice on how to repair a phone screen or diagnose a problem with a car engine, there's always the question of how accurate the AI replies will be. With FixBot, iFixit is trying to minimize mistakes by drawing on its vast library of verified repair guides, written by experts and users.

That's certainly reassuring: I don't want to waste time and money replacing a broken phone screen with a new display that's the wrong size or shape. And using a conversational AI bot to fix gadget problems is often going to feel like a more natural and intuitive experience than a Google search. As iFixit puts it, the bot "does what a good expert does" in guiding you to the right solutions.

How FixBot improves accuracy

The iFixit website has been around since 2003—practically ancient times, considering the rapid evolution of modern technology. The iFixit team has always prided itself on detailed, thorough, tested guides to repairing devices, and all of that information can now be tapped into by the FixBot tool.

iFixit says the bot is trained on more than 125,000 repair guides written by humans who have worked through the steps involved, as well as the question-and-answer forums attached to the site, and the "huge cache" of PDF manuals that iFixit has accumulated over the years it's been in business.

iFixit FixBot
FixBot uses an intuitive chatbot interface. Credit: Lifehacker

That gives me a lot more confidence that FixBot will get its answers right, compared to whatever ChatGPT or Gemini might tell me. iFixit hasn't said which AI models power the bot—only that they've been "hand-picked"—and a custom-built search engine selects data sources from the repair archives on the site.

"Every answer starts with a search for guides, parts, and repairs that worked," according to the iFixit team, and that conversational approach you'll recognize from other AI bots is here too: If you need clarification on something, then you can ask a follow-up question. In the same way, if the AI bot needs more information or specifics, it will ask you.
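iFixit hasn't published FixBot's internals, but the "search first, then answer" pattern it describes is easy to sketch. The toy retriever below scores a handful of invented guide snippets by keyword overlap with the question and returns the best match to ground an answer. Every guide title, snippet, and function name here is hypothetical and for illustration only; it is not iFixit's actual code.

```python
# Toy sketch of a search-first answering pipeline: before generating anything,
# find the repair guide whose text best matches the user's question.
# The guides and the scoring scheme are invented for illustration.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of whitespace-delimited tokens."""
    return set(text.lower().split())

def best_guide(question: str, guides: dict[str, str]) -> str:
    """Return the title of the guide whose text shares the most tokens with the question."""
    q = tokenize(question)
    return max(guides, key=lambda title: len(q & tokenize(guides[title])))

GUIDES = {
    "iPhone screen replacement": "replace a cracked iphone screen display adhesive",
    "MacBook battery swap": "swap a swollen macbook battery service mode",
    "Switch Joy-Con drift fix": "fix joy-con analog stick drift cleaning",
}

print(best_guide("My iPhone display is cracked, how do I replace it?", GUIDES))
```

A real system would use far better retrieval (stemming, embeddings, ranking over 125,000 guides), but the shape is the same: retrieve a trusted source first, then let the model answer from it.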

It's designed to be fast—responses should be returned in seconds—and the iFixit team also talks about an "evaluation harness" that tests the FixBot responses against thousands of real repair questions posed and answered by humans. That extra level of fact-checking should reduce the number of false answers you get.

However, it's not perfect, as iFixit admits: "FixBot is an AI, and AI sometimes gets things wrong." Whether or not those mistakes will be easy to spot remains to be seen, but users of the chatbot are being encouraged to upload their own documents and repair solutions to fix gaps in the knowledge that FixBot is drawing on.

Using FixBot to diagnose problems

iFixit says FixBot will be free for everyone to use for a limited time. At some point, there will be a free version with limitations and paid tiers with the full set of features—including support for voice input and document uploads. You can try it for yourself now on the iFixit website.

I was reluctant to deliberately break one of my devices just so FixBot could help me repair it, but I did test it with a few issues I've had (and sorted out) in the past. One was a completely dead SSD stopping my Windows PC from booting: I started off with a vague description of the computer not starting up properly, and the bot did a good job of narrowing down what the problem was and suggesting fixes.

iFixit FixBot
FixBot will refer back to articles and forum posts. Credit: Lifehacker

It went through everything I had already tried when the problem happened, including running System Repair and troubleshooting the issue via the Command Prompt. Eventually, via a few links to repair guides on the iFixit website, it concluded that my SSD had been corrupted by a power cut—which was indeed what had happened.

I also tested the bot with a more general question about a phone restarting at random times—something one of my old handsets used to do. Again, the responses were accurate, and the troubleshooting steps I was asked to try made a lot of sense. I was also directed to the iFixit guide for the phone model.

iFixit FixBot
FixBot's answers are generally accurate and intelligent. Credit: Lifehacker

The bot is as enthusiastic as a lot of the others available now (I was regularly praised for the "excellent information" I was providing), and does appear to know what it's talking about. This is one of the scenarios where generative AI shows its worth, in distilling a large amount of information based on natural language prompts.

There's definitely potential here: Compare this approach to having to sift through dozens of forum posts, web articles, and documents manually. However, there's always that nagging sense that AI makes mistakes, as the on-screen FixBot disclaimer says. I'd recommend checking other sources before doing anything drastic with your hardware troubleshooting.


Security Researcher Found Critical Kindle Vulnerabilities That Allowed Hijacking Amazon Accounts

The Black Hat Europe hacker conference in London included a session titled "Don't Judge an Audiobook by Its Cover" about two critical (and now fixed) flaws in Amazon's Kindle. The Times reports both flaws were discovered by engineering analyst Valentino Ricotta (from the cybersecurity research division of Thales), who was awarded a "bug bounty" of $20,000 (£15,000). He said: "What especially struck me with this device, that's been sitting on my bedside table for years, is that it's connected to the internet. It's constantly running because the battery lasts a long time and it has access to my Amazon account. It can even pay for books from the store with my credit card in a single click. Once an attacker gets a foothold inside a Kindle, it could access personal data, your credit card information, pivot to your local network or even to other devices that are registered with your Amazon account." Ricotta discovered flaws in the Kindle software that scans and extracts information from audiobooks... He also identified a vulnerability in the onscreen keyboard. Through both of these, he tricked the Kindle into loading malicious code, which enabled him to take the user's Amazon session cookies — tokens that give access to the account. Ricotta said that people could be exposed to this type of hack if they "side-load" books on to the Kindle through non-Amazon stores. Ricotta donated his bug bounties to charity...

Read more of this story at Slashdot.


‘A lot of stories but very few facts’: sceptics push back on buzzy UFO documentary

The Age of Disclosure was granted a Capitol Hill screening and has broken digital rental records but does it really offer proof of alien life?

It has been hailed as a game changer in public attitudes towards UFOs, ending a culture of silence around claims once dismissed as the preserve of conspiracy theorists and crackpots.

The Age of Disclosure has been boosted in its effort to shift the conversation about extraterrestrials from the fringe to the mainstream with a Capitol Hill screening and considerable commercial success. It broke the record for highest-grossing documentary on Amazon’s Prime Video within 48 hours of its release, Deadline reported this week.

© Photograph: 'Age of Disclosure'


Pulse by Cynan Jones review – short stories that show the vitality of the form

The Welsh author vividly captures the solitude, hard labour, dramas and dangers of rural life

In these six stories of human frailty and responsibility, Welsh writer Cynan Jones explores the imperatives of love and the labour of making and sustaining lives. Each is told with a compelling immediacy and intensity, and with the quality of returning to a memory.

In the story Reindeer a man is seeking a bear, which has been woken by hunger from hibernation and is now raiding livestock from the farms of a small isolated community. “There was no true sunshine. There was no gleam in the snow, but the lateness of the left daylight put a cold faint blue through the slopes.” The story’s world is one in which skill, endurance, even stubbornness might be insufficient to succeed, but are just enough to persist.

© Photograph: Mark Newman/Getty Images


AI materials discovery now needs to move into the real world

The microwave-size instrument at Lila Sciences in Cambridge, Massachusetts, doesn’t look all that different from others that I’ve seen in state-of-the-art materials labs. Inside its vacuum chamber, the machine zaps a palette of different elements to create vaporized particles, which then fly through the chamber and land to create a thin film, using a technique called sputtering. What sets this instrument apart is that artificial intelligence is running the experiment; an AI agent, trained on vast amounts of scientific literature and data, has determined the recipe and is varying the combination of elements. 

Later, a person will walk the samples, each containing multiple potential catalysts, over to a different part of the lab for testing. Another AI agent will scan and interpret the data, using it to suggest another round of experiments to try to optimize the materials’ performance.  


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


For now, a human scientist keeps a close eye on the experiments and will approve the next steps on the basis of the AI’s suggestions and the test results. But the startup is convinced this AI-controlled machine is a peek into the future of materials discovery—one in which autonomous labs could make it far cheaper and faster to come up with novel and useful compounds. 

Flush with hundreds of millions of dollars in new funding, Lila Sciences is one of AI’s latest unicorns. The company is on a larger mission to use AI-run autonomous labs for scientific discovery—the goal is to achieve what it calls scientific superintelligence. But I’m here this morning to learn specifically about the discovery of new materials. 

Lila Sciences’ John Gregoire (background) and Rafael Gómez-Bombarelli watch as an AI-guided sputtering instrument makes samples of thin-film alloys.
CODY O’LOUGHLIN

We desperately need better materials to solve our problems. We’ll need improved electrodes and other parts for more powerful batteries; compounds to more cheaply suck carbon dioxide out of the air; and better catalysts to make green hydrogen and other clean fuels and chemicals. And we will likely need novel materials like higher-temperature superconductors, improved magnets, and different types of semiconductors for a next generation of breakthroughs in everything from quantum computing to fusion power to AI hardware. 

But materials science has not had many commercial wins in the last few decades. In part because of its complexity and the lack of successes, the field has become something of an innovation backwater, overshadowed by the more glamorous—and lucrative—search for new drugs and insights into biology.

The idea of using AI for materials discovery is not exactly new, but it got a huge boost in 2020 when DeepMind showed that its AlphaFold2 model could accurately predict the three-dimensional structure of proteins. Then, in 2022, came the success and popularity of ChatGPT. The hope that similar AI models using deep learning could aid in doing science captivated tech insiders. Why not use our new generative AI capabilities to search the vast chemical landscape and help simulate atomic structures, pointing the way to new substances with amazing properties?


Researchers touted an AI model that had reportedly discovered “millions of new materials.” The money began pouring in, funding a host of startups. But so far there has been no “eureka” moment, no ChatGPT-like breakthrough—no discovery of new miracle materials or even slightly better ones.

The startups that want to find useful new compounds face a common bottleneck: By far the most time-consuming and expensive step in materials discovery is not imagining new structures but making them in the real world. Before trying to synthesize a material, you don’t know if, in fact, it can be made and is stable, and many of its properties remain unknown until you test it in the lab.

“Simulations can be super powerful for kind of framing problems and understanding what is worth testing in the lab,” says John Gregoire, Lila Sciences’ chief autonomous science officer. “But there’s zero problems we can ever solve in the real world with simulation alone.” 

Startups like Lila Sciences have staked their strategies on using AI to transform experimentation and are building labs that use agents to plan, run, and interpret the results of experiments to synthesize new materials. Automation in laboratories already exists. But the idea is to have AI agents take it to the next level by directing autonomous labs, where their tasks could include designing experiments and controlling the robotics used to shuffle samples around. And, most important, companies want to use AI to vacuum up and analyze the vast amount of data produced by such experiments in the search for clues to better materials.

If they succeed, these companies could shorten the discovery process from decades to a few years or less, helping uncover new materials and optimize existing ones. But it’s a gamble. Even though AI is already taking over many laboratory chores and tasks, finding new—and useful—materials on its own is another matter entirely. 

Innovation backwater

I have been reporting about materials discovery for nearly 40 years, and to be honest, there have been only a few memorable commercial breakthroughs, such as lithium-ion batteries, over that time. There have been plenty of scientific advances to write about, from perovskite solar cells to graphene transistors to metal-organic frameworks (MOFs), materials based on an intriguing type of molecular architecture that recently won its inventors a Nobel Prize. But few of those advances—including MOFs—have made it far out of the lab. Others, like quantum dots, have found some commercial uses, but in general, the kinds of life-changing inventions created in earlier decades have been lacking.

Blame the amount of time (typically 20 years or more) and the hundreds of millions of dollars it takes to make, test, optimize, and manufacture a new material—and the industry’s lack of interest in spending that kind of time and money in low-margin commodity markets. Or maybe we’ve just run out of ideas for making stuff.

The need to both speed up that process and find new ideas is the reason researchers have turned to AI. For decades, scientists have used computers to design potential materials, calculating where to place atoms to form structures that are stable and have predictable characteristics. It’s worked—but only kind of. Advances in AI have made that computational modeling far faster and have promised the ability to quickly explore a vast number of possible structures. Google DeepMind, Meta, and Microsoft have all launched efforts to bring AI tools to the problem of designing new materials. 

But the limitations that have always plagued computational modeling of new materials remain. With many types of materials, such as crystals, useful characteristics often can’t be predicted solely by calculating atomic structures.

To uncover and optimize those properties, you need to make something real. Or as Rafael Gómez-Bombarelli, one of Lila’s cofounders and an MIT professor of materials science, puts it: “Structure helps us think about the problem, but it’s neither necessary nor sufficient for real materials problems.”

Perhaps no advance exemplified the gap between the virtual and physical worlds more than DeepMind’s announcement in late 2023 that it had used deep learning to discover “millions of new materials,” including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis.” In technical terms, the arrangement of atoms represented a minimum energy state where they were content to stay put. This was “an order-of-magnitude expansion in stable materials known to humanity,” the DeepMind researchers proclaimed.
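The "minimum energy state" idea can be made concrete with a toy calculation. The sketch below is purely illustrative (it has nothing to do with DeepMind's actual pipeline): it scans the classic Lennard-Jones pair potential for two atoms and finds the separation at which the energy bottoms out, the configuration where the atoms are content to stay put.

```python
# Toy illustration of "stability as an energy minimum": two atoms interacting
# via a Lennard-Jones potential settle at the separation that minimizes their
# energy. Real stability searches compute analogous minima over whole crystals.

def lennard_jones(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Pair potential energy at separation r, in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Brute-force scan: evaluate the energy on a fine grid and keep the minimum.
grid = [0.8 + 0.0001 * i for i in range(10000)]  # separations from 0.8 to ~1.8
r_min = min(grid, key=lennard_jones)

# The analytic minimum sits at r = 2**(1/6) * sigma, roughly 1.122 sigma.
print(f"minimum near r = {r_min:.3f} (analytic: {2 ** (1 / 6):.3f})")
```

The catch the UC Santa Barbara researchers point to is that a minimum computed at absolute zero says nothing about whether the structure survives real-world temperatures, let alone whether it does anything useful.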

To the AI community, it appeared to be the breakthrough everyone had been waiting for. The DeepMind research not only offered a gold mine of possible new materials, it also created powerful new computational methods for predicting a large number of structures.

But some materials scientists had a far different reaction. After closer scrutiny, researchers at the University of California, Santa Barbara, said they’d found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In fact, the scientists reported, they didn’t find any truly novel compounds among the ones they looked at; some were merely “trivial” variations of known ones. The scientists appeared particularly peeved that the potential compounds were labeled materials. They wrote: “We would respectfully suggest that the work does not report any new materials but reports a list of proposed compounds. In our view, a compound can be called a material when it exhibits some functionality and, therefore, has potential utility.”

Some of the imagined crystals simply defied the conditions of the real world. To do computations on so many possible structures, DeepMind researchers simulated them at absolute zero, where atoms are well ordered; they vibrate a bit but don’t move around. At higher temperatures—the kind that would exist in the lab or anywhere in the world—the atoms fly about in complex ways, often creating more disorderly crystal structures. A number of the so-called novel materials predicted by DeepMind appeared to be well-ordered versions of disordered ones that were already known. 

More generally, the DeepMind paper was simply another reminder of how challenging it is to capture physical realities in virtual simulations—at least for now. Because of the limitations of computational power, researchers typically perform calculations on relatively few atoms. Yet many desirable properties are determined by the microstructure of the materials—at a scale much larger than the atomic world. And some effects, like high-temperature superconductivity or even the catalysis that is key to many common industrial processes, are far too complex or poorly understood to be explained by atomic simulations alone.

A common language

Even so, there are signs that the divide between simulations and experimental work is beginning to narrow. DeepMind, for one, says that since the release of the 2023 paper it has been working with scientists in labs around the world to synthesize AI-identified compounds and has achieved some success. Meanwhile, a number of the startups entering the space are looking to combine computational and experimental expertise in one organization. 

One such startup is Periodic Labs, cofounded by Ekin Dogus Cubuk, a physicist who led the scientific team that generated the 2023 DeepMind headlines, and by Liam Fedus, a co-creator of ChatGPT at OpenAI. Despite its founders’ background in computational modeling and AI software, the company is building much of its materials discovery strategy around synthesis done in automated labs. 

The vision behind the startup is to link these different fields of expertise by using large language models that are trained on scientific literature and able to learn from ongoing experiments. An LLM might suggest the recipe and conditions to make a compound; it can also interpret test data and feed additional suggestions to the startup’s chemists and physicists. In this strategy, simulations might suggest possible material candidates, but they are also used to help explain the experimental results and suggest possible structural tweaks.


Periodic Labs, like Lila Sciences, has ambitions beyond designing and making new materials. It wants to “create an AI scientist”—specifically, one adept at the physical sciences. “LLMs have gotten quite good at distilling chemistry information, physics information,” says Cubuk, “and now we’re trying to make it more advanced by teaching it how to do science—for example, doing simulations, doing experiments, doing theoretical modeling.”

The approach, like that of Lila Sciences, is based on the expectation that a better understanding of the science behind materials and their synthesis will lead to clues that could help researchers find a broad range of new ones. One target for Periodic Labs is materials whose properties are defined by quantum effects, such as new types of magnets. The grand prize would be a room-temperature superconductor, a material that could transform computing and electricity but that has eluded scientists for decades.

Superconductors are materials in which electricity flows without any resistance and, thus, without producing heat. So far, the best of these materials become superconducting only at relatively low temperatures and require significant cooling. If they can be made to work at or close to room temperature, they could lead to far more efficient power grids, new types of quantum computers, and even more practical high-speed magnetic-levitation trains. 

Lila staff scientist Natalie Page (right), Gómez-Bombarelli, and Gregoire inspect thin-film samples after they come out of the sputtering machine and before they undergo testing.
CODY O’LOUGHLIN

The failure to find a room-temperature superconductor is one of the great disappointments in materials science over the last few decades. I was there when President Reagan spoke about the technology in 1987, during the peak hype over newly made ceramics that became superconducting at the relatively balmy temperature of 93 Kelvin (that's −292 °F), enthusing that they "bring us to the threshold of a new age." There was a sense of optimism among the scientists and businesspeople in that packed ballroom at the Washington Hilton as Reagan anticipated "a host of benefits, not least among them a reduced dependence on foreign oil, a cleaner environment, and a stronger national economy." In retrospect, it might have been one of the last times that we pinned our economic and technical aspirations on a breakthrough in materials.

The promised new age never came. Scientists still have not found a material that becomes superconducting at room temperatures, or anywhere close, under normal conditions. The best existing superconductors are brittle and tend to make lousy wires.

One of the reasons that finding higher-temperature superconductors has been so difficult is that no theory explains the effect at relatively high temperatures—or can predict it simply from the placement of atoms in the structure. It will ultimately fall to lab scientists to synthesize any interesting candidates, test them, and search the resulting data for clues to understanding the still puzzling phenomenon. Doing so, says Cubuk, is one of the top priorities of Periodic Labs.

AI in charge

It can take a researcher a year or more to make a crystal structure for the first time. Then there are typically years of further work to test its properties and figure out how to make the larger quantities needed for a commercial product. 

Startups like Lila Sciences and Periodic Labs are pinning their hopes largely on the prospect that AI-directed experiments can slash those times. One reason for the optimism is that many labs have already incorporated a lot of automation, for everything from preparing samples to shuttling test items around. Researchers routinely use robotic arms, software, automated versions of microscopes and other analytical instruments, and mechanized tools for manipulating lab equipment.

The automation allows, among other things, for high-throughput synthesis, in which multiple samples with various combinations of ingredients are rapidly created and screened in large batches, greatly speeding up the experiments.

The idea is that using AI to plan and run such automated synthesis can make it far more systematic and efficient. AI agents, which can collect and analyze far more data than any human possibly could, can use real-time information to vary the ingredients and synthesis conditions until they get a sample with the optimal properties. Such AI-directed labs could do far more experiments than a person and could be far smarter than existing systems for high-throughput synthesis. 
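The plan-synthesize-measure-update loop described above can be sketched in a few lines of Python. This is a toy illustration built on invented assumptions, not any lab's actual software: `propose_batch` stands in for the AI planner, `measure` for an automated instrument, and the synthesis conditions (a temperature and a mixing ratio) and all numbers are made up for the example.

```python
import random

def propose_batch(history, n=4):
    """Toy stand-in for an AI planner: exploit near the best condition
    seen so far, with some random exploration of new conditions."""
    def random_condition():
        return (random.uniform(600, 1200), random.uniform(0.1, 0.9))
    if not history:
        return [random_condition() for _ in range(n)]
    best_t, best_r = max(history, key=history.get)
    batch = []
    for _ in range(n):
        if random.random() < 0.5:
            batch.append(random_condition())  # explore new conditions
        else:
            # exploit: perturb the best condition found so far
            batch.append((best_t + random.uniform(-50, 50),
                          best_r + random.uniform(-0.05, 0.05)))
    return batch

def measure(condition):
    """Simulated instrument: property peaks near 900 K and a 0.5 mix ratio."""
    t, r = condition
    return -((t - 900) / 300) ** 2 - ((r - 0.5) / 0.4) ** 2

# Closed loop: plan a batch, "synthesize and test" it, feed results back.
history = {}
for _ in range(10):
    for cond in propose_batch(history):
        history[cond] = measure(cond)

best = max(history, key=history.get)
```

In a real self-driving lab the planner would be a learned model and `measure` a robotic synthesis-and-characterization pipeline, but the feedback structure is the same: each batch of results informs the next batch of conditions.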

But so-called self-driving labs for materials are still a work in progress.

Many types of materials require solid-state synthesis, a set of processes that are far more difficult to automate than the liquid-handling activities that are commonplace in making drugs. You need to prepare and mix powders of multiple inorganic ingredients in the right combination for making, say, a catalyst and then decide how to process the sample to create the desired structure—for example, identifying the right temperature and pressure at which to carry out the synthesis. Even determining what you’ve made can be tricky.

In 2023, the A-Lab at Lawrence Berkeley National Laboratory claimed to be the first fully automated lab to use inorganic powders as starting ingredients. Subsequently, scientists reported that the autonomous lab had used robotics and AI to synthesize and test 41 novel materials, including some predicted in the DeepMind database. Some critics questioned the novelty of what was produced and complained that the automated analysis of the materials was not up to experimental standards, but the Berkeley researchers defended the effort as simply a demonstration of the autonomous system’s potential.

“How it works today and how we envision it are still somewhat different. There’s just a lot of tool building that needs to be done,” says Gerbrand Ceder, the principal scientist behind the A-Lab. 

AI agents are already getting good at doing many laboratory chores, from preparing recipes to interpreting some kinds of test data—finding, for example, patterns in a micrograph that might be hidden to the human eye. But Ceder is hoping the technology could soon “capture human decision-making,” analyzing ongoing experiments to make strategic choices on what to do next. For example, his group is working on an improved synthesis agent that would better incorporate what he calls scientists’ “diffused” knowledge—the kind gained from extensive training and experience. “I imagine a world where people build agents around their expertise, and then there’s sort of an uber-model that puts it together,” he says. “The uber-model essentially needs to know what agents it can call on and what they know, or what their expertise is.”

“In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in. The AI revolution is about finally gathering all the scientific data we have.”

Gerbrand Ceder, principal scientist, A-Lab

One of the strengths of AI agents is their ability to devour vast amounts of scientific literature. “In one field that I work in, solid-state batteries, there are 50 papers published every day. And that is just one field that I work in,” says Ceder. It’s impossible for anyone to keep up. “The AI revolution is about finally gathering all the scientific data we have,” he says.

Last summer, Ceder became the chief science officer at an AI materials discovery startup called Radical AI and took a sabbatical from the University of California, Berkeley, to help set up its self-driving labs in New York City. A slide deck shows the portfolio of different AI agents and generative models meant to help realize Ceder’s vision. If you look closely, you can spot an LLM called the “orchestrator”—it’s what CEO Joseph Krause calls the “head honcho.” 

New hope

So far, despite the hype around the use of AI to discover new materials and the growing momentum—and money—behind the field, there still has not been a convincing big win. There is no example like the 2016 victory of DeepMind’s AlphaGo over a Go world champion. Or like AlphaFold’s achievement in mastering one of biomedicine’s hardest and most time-consuming chores, predicting 3D structures of proteins. 

The field of materials discovery is still waiting for its moment. It could come if AI agents can dramatically speed the design or synthesis of practical materials, similar to but better than what we have today. Or maybe the moment will be the discovery of a truly novel one, such as a room-temperature superconductor.

A small window provides a view of the inner workings of Lila’s sputtering instrument. The startup uses the machine to create a wide variety of experimental samples, including potential materials that could be useful for coatings and catalysts.
CODY O’LOUGHLIN

With or without such a breakthrough moment, startups face the challenge of trying to turn their scientific achievements into useful materials. The task is particularly difficult because any new materials would likely have to be commercialized in an industry dominated by large incumbents that are not particularly prone to risk-taking.

Susan Schofer, a tech investor and partner at the venture capital firm SOSV, is cautiously optimistic about the field. But Schofer, who spent several years in the mid-2000s as a catalyst researcher at one of the first startups using automation and high-throughput screening for materials discovery (it didn’t survive), wants to see some evidence that the technology can translate into commercial successes when she evaluates startups to invest in.  

In particular, she wants to see evidence that the AI startups are already “finding something new, that’s different, and know how they are going to iterate from there.” And she wants to see a business model that captures the value of new materials. She says, “I think the ideal would be: I got a spec from the industry. I know what their problem is. We’ve defined it. Now we’re going to go build it. Now we have a new material that we can sell, that we have scaled up enough that we’ve proven it. And then we partner somehow to manufacture it, but we get revenue off selling the material.”

Schofer says that while she gets the vision of trying to redefine science, she’d advise startups to “show us how you’re going to get there.” She adds, “Let’s see the first steps.”

Demonstrating those first steps could be essential in enticing large existing materials companies to embrace AI technologies more fully. Corporate researchers in the industry have been burned before—by the promise over the decades that increasingly powerful computers will magically design new materials; by combinatorial chemistry, a fad that raced through materials R&D labs in the early 2000s with little tangible result; and by the promise that synthetic biology would make our next generation of chemicals and materials.

More recently, the materials community has been blanketed by a new hype cycle around AI. Some of that hype was fueled by the 2023 DeepMind announcement of the discovery of “millions of new materials,” a claim that, in retrospect, clearly overpromised. And it was further fueled when an MIT economics student posted a paper in late 2024 claiming that a large, unnamed corporate R&D lab had used AI to efficiently invent a slew of new materials. AI, it seemed, was already revolutionizing the industry.

A few months later, the MIT economics department concluded that “the paper should be withdrawn from public discourse.” Two prominent MIT economists who are acknowledged in a footnote in the paper added that they had “no confidence in the provenance, reliability or validity of the data and the veracity of the research.”

Can AI move beyond the hype and false hopes and truly transform materials discovery? Maybe. There is ample evidence that it’s changing how materials scientists work, providing them—if nothing else—with useful lab tools. Researchers are increasingly using LLMs to query the scientific literature and spot patterns in experimental data. 

But it’s still early days in turning those AI tools into actual materials discoveries. The use of AI to run autonomous labs, in particular, is just getting underway; making and testing stuff takes time and lots of money. The morning I visited Lila Sciences, its labs were largely empty, and it’s now preparing to move into a much larger space a few miles away. Periodic Labs is just beginning to set up its lab in San Francisco. It’s starting with manual synthesis guided by AI predictions; its robotic high-throughput lab will come soon. Radical AI reports that its lab is almost fully autonomous but plans to soon move to a larger space.

Prominent AI researchers Liam Fedus (left) and Ekin Dogus Cubuk are the cofounders of Periodic Labs. The San Francisco–based startup aims to build an AI scientist that’s adept at the physical sciences.
JASON HENRY

When I talk to the scientific founders of these startups, I hear a renewed excitement about a field that long operated in the shadows of drug discovery and genomic medicine. For one thing, there is the money. “You see this enormous enthusiasm to put AI and materials together,” says Ceder. “I’ve never seen this much money flow into materials.”

Reviving the materials industry is a challenge that goes beyond scientific advances, however. It means selling companies on a whole new way of doing R&D.

But the startups benefit from a huge dose of confidence borrowed from the rest of the AI industry. And maybe that, after years of playing it safe, is just what the materials business needs.

  •  

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power?

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities from University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..." "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") "The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions... We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures.
The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory. Some key points: "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..." "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..." "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is..." "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..." "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..." "The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..." "The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..."
[He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..." "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..." He's ultimately warning us about "politics masked as predictions..." "The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation." "It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."

Read more of this story at Slashdot.

  •  

Jimmy Lai: conviction of Hong Kong pro-democracy figure decried as attack on press freedom

Rights groups dismiss ‘sham conviction’ of media tycoon on national security offences in city’s most closely watched rulings in decades

Jimmy Lai, the Hong Kong pro-democracy media tycoon, is facing life in prison after being found guilty of national security and sedition offences, in one of the most closely watched rulings since the city’s return to Chinese rule in 1997.

Soon after the ruling was delivered, rights and press groups decried the verdict as a “sham conviction” and an attack on press freedom.

Continue reading...

© Photograph: Leung Man Hei/AFP/Getty Images


  •  

The rise and fall of Jimmy Lai, whose trajectory mirrored that of Hong Kong itself

Progressing from child labourer to billionaire, Lai used his power and wealth to promote democracy, which ultimately pitted him against authorities in Beijing

On Monday, a Hong Kong court convicted Jimmy Lai of national security offences, the end to a landmark trial for the city and its hobbled protest movement.

The verdict was expected. Long a thorn in the side of Beijing, Lai, a 78-year-old media tycoon and activist, was a primary target of the most recent and definitive crackdown on Hong Kong’s pro-democracy movement. Authorities cast him as a traitor and a criminal.

Continue reading...

© Photograph: Athit Perawongmetha/Reuters


  •  

SpaceX Alleges a Chinese-Deployed Satellite Risked Colliding with Starlink

"A SpaceX executive says a satellite deployed from a Chinese rocket risked colliding with a Starlink satellite," reports PC Magazine: On Friday, company VP for Starlink engineering, Michael Nicolls, tweeted about the incident and blamed a lack of coordination from the Chinese launch provider CAS Space. "When satellite operators do not share ephemeris for their satellites, dangerously close approaches can occur in space," he wrote, referring to the publication of predicted orbital positions for such satellites... [I]t looks like one of the satellites veered relatively close to a Starlink sat that's been in service for over two years. "As far as we know, no coordination or deconfliction with existing satellites operating in space was performed, resulting in a 200 meter (656 feet) close approach between one of the deployed satellites and STARLINK-6079 (56120) at 560 km altitude," Nicolls wrote... "Most of the risk of operating in space comes from the lack of coordination between satellite operators — this needs to change," he added. Chinese launch provider CAS Space told PCMag that "As a launch service provider, our responsibility ends once the satellites are deployed, meaning we do not have control over the satellites' maneuvers." And the article also cites astronomer/satellite tracking expert Jonathan McDowell, who had tweeted that CAS Space's response "seems reasonable." (In an email to PC Magazine, he'd said "Two days after launch is beyond the window usually used for predicting launch related risks.") But "The coordination that Nicolls cited is becoming more and more important," notes Space.com, since "Earth orbit is getting more and more crowded." In 2020, for example, fewer than 3,400 functional satellites were whizzing around our planet. Just five years later, that number has soared to about 13,000, and more spacecraft are going up all the time. Most of them belong to SpaceX.
The company currently operates nearly 9,300 Starlink satellites, more than 3,000 of which have launched this year alone. Starlink satellites avoid potential collisions autonomously, maneuvering themselves away from conjunctions predicted by available tracking data. And this sort of evasive action is quite common: Starlink spacecraft performed about 145,000 avoidance maneuvers in the first six months of 2025, which works out to around four maneuvers per satellite per month. That's an impressive record. But many other spacecraft aren't quite so capable, and even Starlink satellites can be blindsided by spacecraft whose operators don't share their trajectory data, as Nicolls noted. And even a single collision — between two satellites, or involving pieces of space junk, which are plentiful in Earth orbit as well — could spawn a huge cloud of debris, which could cause further collisions. Indeed, the nightmare scenario, known as the Kessler syndrome, is a debris cascade that makes it difficult or impossible to operate satellites in parts of the final frontier.
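The quoted maneuver rate is easy to sanity-check with back-of-envelope arithmetic. The average-fleet figure below is derived from the article's two numbers, not stated in it:

```python
maneuvers = 145_000          # reported for the first six months of 2025
months = 6
rate_per_sat_per_month = 4   # "around four maneuvers per satellite per month"

# Average maneuvering fleet implied by those two figures. It comes out
# well below today's ~9,300 satellites, consistent with a fleet that was
# smaller on average over the period and has been growing all year.
implied_fleet = maneuvers / (rate_per_sat_per_month * months)
print(round(implied_fleet))  # about 6,000 satellites
```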

Read more of this story at Slashdot.

  •  

Roomba Maker 'iRobot' Files for Bankruptcy After 35 Years

Roomba manufacturer iRobot filed for bankruptcy today, reports Bloomberg. After 35 years, iRobot reached a "restructuring support agreement that will hand control of the consumer robot maker to Shenzhen PICEA Robotics Co, its main supplier and lender, and Santrum Hong Kong Company." Under the restructuring, vacuum cleaner maker Shenzhen PICEA will receive the entire equity stake in the reorganised company... The plan will allow the debtor to remain as a going concern and continue to meet its commitments to employees and make timely payments in full to vendors and other creditors for amounts owed throughout the court-supervised process, according to an iRobot statement... The company warned of potential bankruptcy in December after years of declining earnings. iRobot says it's sold over 50 million robots, the article points out, but earnings "began to decline since 2021 due to supply chain headwinds and increased competition." "A hoped-for acquisition by Amazon.com in 2023 collapsed over regulatory concerns."

Read more of this story at Slashdot.

  •  

Like Australia, Denmark Plans to Severely Restrict Social Media Use for Teenagers

"As Australia began enforcing a world-first social media ban for children under 16 years old this week, Denmark is planning to follow its lead," reports the Associated Press, "and severely restrict social media access for young people." The Danish government announced last month that it had secured an agreement by three governing coalition and two opposition parties in parliament to ban access to social media for anyone under the age of 15. Such a measure would be the most sweeping step yet by a European Union nation to limit use of social media among teens and children. The Danish government's plans could become law as soon as mid-2026. The proposed measure would give some parents the right to let their children access social media from age 13, local media reported, but the ministry has not yet fully shared the plans... [A] new "digital evidence" app, announced by the Digital Affairs Ministry last month and expected to launch next spring, will likely form the backbone of the Danish plans. The app will display an age certificate to ensure users comply with social media age limits, the ministry said. The article also notes Malaysia "is expected to ban social media accounts for people under the age of 16 starting at the beginning of next year," and Norway is also taking steps to restrict social media access for children and teens. "China — which manufactures many of the world's digital devices — has set limits on online gaming time and smartphone time for kids."

Read more of this story at Slashdot.

  •  

CEOs Plan to Spend More on AI in 2026 - Despite Spotty Returns

The Wall Street Journal reports that 68% of CEOs "plan to spend even more on AI in 2026, according to an annual survey of more than 350 public-company CEOs from advisory firm Teneo." And yet "less than half of current AI projects had generated more in returns than they had cost, respondents said." They reported the most success using AI in marketing and customer service and challenges using it in higher-risk areas such as security, legal and human resources. Teneo also surveyed about 400 institutional investors, of which 53% expect that AI initiatives would begin to deliver returns on investments within six months. That compares to the 84% of CEOs of large companies — those with revenue of $10 billion or more — who believe it will take more than six months. Surprisingly, 67% of CEOs believe AI will increase their entry-level head count, while 58% believe AI will increase senior leadership head count. All the surveyed CEOs were from public companies with revenue over $1 billion...

Read more of this story at Slashdot.

  •  

'Investors in Limbo'. Will the TikTok Deal's Deadline Be Extended Again?

An anonymous reader shared this report from the BBC: A billionaire investor keen on buying TikTok's US operations has told the BBC he has been left in limbo as the latest deadline for the app's sale looms. The US has repeatedly delayed the date by which the platform's Chinese owner, ByteDance, must sell or be blocked for American users. US President Donald Trump appears poised to extend the deadline for a fifth time on Tuesday. "We're just standing by and waiting to see what happens," investor Frank McCourt told BBC News... The president...said "sophisticated" US investors would acquire the app, including two of his allies: Oracle chairman Larry Ellison and Dell Technologies' Michael Dell. Members of the Trump administration had indicated the deal would be formalised in a meeting between Trump and Xi in October — however it concluded without an agreement being reached. Neither TikTok's Chinese owner ByteDance nor Beijing have since announced approval of a sale, despite Trump's claims. This time there are no such claims a deal is imminent, leading most analysts to conclude another extension is inevitable. Other investors besides McCourt include Reddit co-founder Alexis Ohanian and Shark Tank entrepreneur Kevin O'Leary.

Read more of this story at Slashdot.

  •  

Podcast Industry Under Siege as AI Bots Flood Airwaves with Thousands of Programs

An anonymous reader shared this report from the Los Angeles Times: Popular podcast host Steven Bartlett has used an AI clone to launch a new kind of content aimed at the 13 million followers of his podcast "Diary of a CEO." On YouTube, his clone narrates "100 CEOs With Steven Bartlett," which adds AI-generated animation to Bartlett's cloned voice to tell the life stories of entrepreneurs such as Steve Jobs and Richard Branson. Erica Mandy, the Redondo Beach-based host of the daily news podcast called "The Newsworthy," let an AI voice fill in for her earlier this year after she lost her voice from laryngitis and her backup host bailed out... In podcasting, many listeners feel strong bonds to hosts they listen to regularly. The slow encroachment of AI voices for one-off episodes, canned ad reads, sentence replacement in postproduction or translation into multiple languages has sparked anger as well as curiosity from both creators and consumers of the content. Augmenting or replacing host reads with AI is perceived by many as a breach of trust and as trivializing the human connection listeners have with hosts, said Megan Lazovick, vice president of Edison Research, a podcast research company... Still, platforms such as YouTube and Spotify have introduced features for creators to clone their voice and translate their content into multiple languages to increase reach and revenue. A new generation of voice cloning companies, many with operations in California, offers better emotion, tone, pacing and overall voice quality... Some are using the tech to carpet-bomb the market with content. Los Angeles podcasting studio Inception Point AI has produced 200,000 podcast episodes, in some weeks accounting for 1% of all podcasts published that week on the internet, according to CEO Jeanine Wright. The podcasts are so cheap to make that they can focus on tiny topics, like local weather, small sports teams, gardening and other niche subjects.
Instead of a studio searching for a specific "hit" podcast idea, it takes just $1 to produce an episode so that they can be profitable with just 25 people listening... One of its popular synthetic hosts is Vivian Steele, an AI celebrity gossip columnist with a sassy voice and a sharp tongue... Inception Point has built a roster of more than 100 AI personalities whose characteristics, voices and likenesses are crafted for podcast audiences. Its AI hosts include Clare Delish, a cooking guidance expert, and garden enthusiast Nigel Thistledown... Across Apple and Spotify, Inception Point podcasts have now garnered 400,000 subscribers.
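The "$1 per episode, profitable at 25 listeners" claim implies a floor on revenue per listen. The four-cent figure below is a back-of-envelope derivation from those two numbers, not something the article states:

```python
cost_per_episode = 1.00    # production cost cited in the article
breakeven_listeners = 25   # audience at which an episode turns a profit

# Minimum revenue each listen must generate (ads, sponsorship, etc.)
# for the episode to break even at that audience size.
revenue_per_listen = cost_per_episode / breakeven_listeners
print(f"${revenue_per_listen:.2f} per listen")  # $0.04 per listen
```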

Read more of this story at Slashdot.

  •  

Entry-Level Tech Workers Confront an AI-Fueled Jobpocalypse

AI "has gutted entry-level roles in the tech industry," reports Rest of World. One student at a high-ranking engineering college in India tells them that among his 400 classmates, "fewer than 25% have secured job offers... there's a sense of panic on the campus." Students at engineering colleges in India, China, Dubai, and Kenya are facing a "jobpocalypse" as artificial intelligence replaces humans in entry-level roles. Tasks once assigned to fresh graduates, such as debugging, testing, and routine software maintenance, are now increasingly automated. Over the last three years, the number of fresh graduates hired by big tech companies globally has declined by more than 50%, according to a report published by SignalFire, a San Francisco-based venture capital firm. Even though hiring rebounded slightly in 2024, only 7% of new hires were recent graduates. As many as 37% of managers said they'd rather use AI than hire a Gen Z employee... Indian IT services companies have reduced entry-level roles by 20%-25% thanks to automation and AI, consulting firm EY said in a report last month. Job platforms like LinkedIn, Indeed, and Eures noted a 35% decline in junior tech positions across major EU countries during 2024... "Five years ago, there was a real war for [coders and developers]. There was bidding to hire," and 90% of the hires were for off-the-shelf technical roles, or positions that utilize ready-made technology products rather than requiring in-house development, said Vahid Haghzare, director at IT hiring firm Silicon Valley Associates Recruitment in Dubai. Since the rise of AI, "it has dropped dramatically," he said. "I don't even think it's touching 5%. It's almost completely vanished." The company headhunts workers from multiple countries including China, Singapore, and the U.K... The current system, where a student commits three to five years to learn computer science and then looks for a job, is "not sustainable," Haghzare said. 
Students are "falling down a hole, and they don't know how to get out of it."

Read more of this story at Slashdot.

  •  

Polar Bears are Rewiring Their Own Genetics to Survive a Warming Climate

"Polar bears are still sadly expected to go extinct this century, with two-thirds of the population gone by 2050," says the lead researcher on a new study from the University of East Anglia in Britain. But their research also suggests polar bears "are rapidly rewiring their own genetics in a bid to survive," reports NBC News, in "the first documented case of rising temperatures driving genetic change in a mammal." "I believe our work really does offer a glimmer of hope — a window of opportunity for us to reduce our carbon emissions to slow down the rate of climate change and to give these bears more time to adapt to these stark changes in their habitats," [the lead author of the study told NBC News]. Building on earlier University of Washington research, [lead researcher] Godden's team analyzed blood samples from polar bears in northeastern and southeastern Greenland. In the slightly warmer south, they found that genes linked to heat stress, aging and metabolism behaved differently from those in northern bears. "Essentially this means that different groups of bears are having different sections of their DNA changed at different rates, and this activity seems linked to their specific environment and climate," Godden said in a university press release. She said this shows, for the first time, that a unique group of one species has been forced to "rewrite their own DNA," adding that this process can be considered "a desperate survival mechanism against melting sea ice...." Researchers say warming ocean temperatures have reduced vital sea ice platforms that the bears use to hunt seals, leading to isolation and food scarcity. This led to genetic changes as the animals' digestive system adapts to a diet of plants and low fats in the absence of prey, Godden told NBC News.

Read more of this story at Slashdot.

  •  

America Adds 11.7 GW of New Solar Capacity in Q3 - Third Largest Quarter on Record

America's solar industry "just delivered another huge quarter," reports Electrek, "installing 11.7 gigawatts (GW) of new capacity in Q3 2025. That makes it the third-largest quarter on record and pushes total solar additions this year past 30 GW..." According to the new "US Solar Market Insight Q4 2025" report from Solar Energy Industries Association (SEIA) and Wood Mackenzie, 85% of all new power added to the grid during the first nine months of the Trump administration came from solar and storage. And here's the twist: Most of that growth — 73% — happened in red [Republican-leaning] states. Eight of the top 10 states for new installations fall into that category, including Texas, Indiana, Florida, Arizona, Ohio, Utah, Kentucky, and Arkansas... Two new solar module factories opened this year in Louisiana and South Carolina, adding a combined 4.7 GW of capacity. That brings the total new U.S. module manufacturing capacity added in 2025 to 17.7 GW. With a new wafer facility coming online in Michigan in Q3, the U.S. can now produce every major component of the solar module supply chain... SEIA also noted that, following an analysis of EIA data, it found that more than 73 GW of solar projects across the U.S. are stuck in permitting limbo and at risk of politically motivated delays or cancellations.

Read more of this story at Slashdot.

  •  

Purdue University Approves New AI Requirement For All Undergrads

Nonprofit Code.org released its 2025 State of AI & Computer Science Education report this week with a state-by-state analysis of school policies complaining that "0 out of 50 states require AI+CS for graduation." But meanwhile, at the college level, "Purdue University will begin requiring that all of its undergraduate students demonstrate basic competency in AI," writes former college president Michael Nietzel, "starting with freshmen who enter the university in 2026." The new "AI working competency" graduation requirement was approved by the university's Board of Trustees at its meeting on December 12... The requirement will be embedded into every undergraduate program at Purdue, but it won't be done in a "one-size-fits-all" manner. Instead, the Board is delegating authority to the provost, who will work with the deans of all the academic colleges to develop discipline-specific criteria and proficiency standards for the new campus-wide requirement. [Purdue president] Chiang said students will have to demonstrate a working competence through projects that are tailored to the goals of individual programs. The intent is to not require students to take more credit hours, but to integrate the new AI expectation into existing academic requirements... While the news release claimed that Purdue may be the first school to establish such a requirement, at least one other university has introduced its own institution-wide expectation that all its graduates acquire basic AI skills. Earlier this year, The Ohio State University launched an AI Fluency initiative, infusing basic AI education into core undergraduate requirements and majors, with the goal of helping students understand and use AI tools — no matter their major. 
Purdue wants its new initiative to help graduates:
— Understand and use the latest AI tools effectively in their chosen fields, including being able to identify the key strengths and limits of AI technologies;
— Recognize and communicate clearly about AI, including developing and defending decisions informed by AI, as well as recognizing the influence and consequences of AI in decision-making;
— Adapt to and work with future AI developments effectively.

Read more of this story at Slashdot.

  •  

Repeal Section 230 and Its Platform Protections, Urges New Bipartisan US Bill

U.S. Senator Sheldon Whitehouse said Friday he was moving to file a bipartisan bill to repeal Section 230 of America's Communications Decency Act. "The law prevents most civil suits against users or services that are based on what others say," explains an EFF blog post. "Experts argue that a repeal of Section 230 could kill free speech on the internet," writes LiveMint — though America's last two presidents both supported a repeal: During his first presidency, U.S. President Donald Trump called to repeal the law and signed an executive order attempting to curb some of its protections, though it was challenged in court. Subsequently, former President Joe Biden also voiced his opinion against the law. An EFF blog post explains the case for Section 230: Congress passed this bipartisan legislation because it recognized that promoting more user speech online outweighed potential harms. When harmful speech takes place, it's the speaker that should be held responsible, not the service that hosts the speech... Without Section 230, the Internet is different. In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects. In non-democratic countries, governments can directly censor the internet, controlling the speech of platforms and users. If the law makes us liable for the speech of others, the biggest platforms would likely become locked-down and heavily censored. The next great websites and apps won't even get started, because they'll face overwhelming legal risk to host users' speech. But "I strongly believe that Section 230 has long outlived its use," Senator Whitehouse said this week, calling Section 230 "a real vessel for evil that needs to come to an end."
"The laws that Section 230 protect these big platforms from are very often laws that go back to the common law of England, that we inherited when this country was initially founded. I mean, these are long-lasting, well-tested, important legal constraints that have — they've met the test of time, not by the year or by the decade, but by the century. "And yet because of this crazy Section 230, these ancient and highly respected doctrines just don't reach these people. And it really makes no sense, that if you're an internet platform you get treated one way; you do the exact same thing and you're a publisher, you get treated a completely different way. "And so I think that the time has come.... It really makes no sense... [Testimony before the committee] shows how alone and stranded people are when they don't have the chance to even get justice. It's bad enough to have to live through the tragedy... But to be told by a law of Congress, you can't get justice because of the platform — not because the law is wrong, not because the rule is wrong, not because this is anything new — simply because the wrong type of entity created this harm."

Read more of this story at Slashdot.

  •  

Beware Trump’s two-pronged strategy undermining democracy | David Cole

The president announces non-existent emergencies to invoke extraordinary powers – and neutralizes the opposition

This month, we learned that, in the course of bombing a boat of suspected drug smugglers, the US military intentionally killed two survivors clinging to the wreckage after its initial air assault. In addition, Donald Trump said it was seditious for Democratic members of Congress to inform members of the military that they can, and indeed, must, resist patently illegal orders, and the FBI and Pentagon are reportedly investigating the members’ speech. Those related developments – the murder of civilians and an attack on free speech – exemplify two of Trump’s principal tactics in his second term. The first involves the assertion of extraordinary emergency powers in the absence of any actual emergency. The second seeks to suppress dissent by punishing those who dare to raise their voices. Both moves have been replicated time and time again since January 2025. How courts and the public respond will determine the future of constitutional democracy in the United States.

Nothing is more essential to a liberal democracy than the rule of law – that is, the notion that a democratic government is guided by laws, not discretionary whims; that the laws respect basic liberties for all; and that independent courts have the authority to hold political officials accountable when they violate those laws. These principles, forged in the United Kingdom, adopted and revised by the United States, are the bedrock of constitutional democracy. But they depend on courts being willing and able to check government abuse, and citizens exercising their rights to speak out in defense of the fundamental values when those values are under attack.

David Cole is the Honorable George J Mitchell professor in law and public policy at Georgetown University and former national legal director of the American Civil Liberties Union. This essay is adapted from his international rule of law lecture sponsored by the Bar Council.

Continue reading...

© Photograph: Alex Brandon/AP


  •  

Time Magazine's 'Person of the Year': the Architects of AI

Time magazine used its 98th annual "Person of the Year" cover to "recognize a force that has dominated the year's headlines, for better or for worse. For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME's 2025 Person of the Year." One cover illustration shows eight AI executives sitting precariously on a beam high above the city, while Time's 6,700-word article promises "the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how [Nvidia CEO] Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods." Time describes them betting on "one of the biggest physical infrastructure projects of all time," mentioning all the usual worries — datacenters' energy consumption, chatbot psychosis, predictions of "wiping out huge numbers of jobs" and the possibility of an AI stock market bubble. (Although "The drumbeat of warning that advanced AI could kill us all has mostly quieted"). But it also notes AI's potential to jumpstart innovation (and economic productivity). This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. "Every industry needs it, every company uses it, and every nation needs to build it," Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world's first $5 trillion company, had once again smashed Wall Street's earnings expectations. "This is the single most impactful technology of our time..." The risk-averse are no longer in the driver's seat. Thanks to Huang, Son, Altman, and other AI titans, humanity is now flying down the highway, all gas no brakes, toward a highly automated and highly uncertain future.
Perhaps Trump said it best, speaking directly to Huang with a jovial laugh in the U.K. in September: "I don't know what you're doing here. I hope you're right."

Read more of this story at Slashdot.

  •  

Trump Ban on Wind Energy Permits 'Unlawful', Court Rules

A January order blocking wind energy projects in America has now been vacated by a U.S. judge and declared unlawful, reports the Associated Press: [Judge Saris of the U.S. district court for the district of Massachusetts] ruled in favor of a coalition of state attorneys general from 17 states and Washington DC, led by Letitia James, New York's attorney general, that challenged President Trump's day one order that paused leasing and permitting for wind energy projects... The coalition that opposed Trump's order argued that Trump does not have the authority to halt project permitting, and that doing so jeopardizes the states' economies, energy mix, public health and climate goals. The coalition includes Arizona, California, Colorado, Connecticut, Delaware, Illinois, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, New Mexico, New York, Oregon, Rhode Island, Washington state and Washington DC. They say they have invested hundreds of millions of dollars collectively to develop wind energy and even more on upgrading transmission lines to bring wind energy to the electrical grid... Wind is the United States' largest source of renewable energy, providing about 10% of the electricity generated in the nation, according to the American Clean Power Association. But the BBC quotes Timothy Fox, managing director at the Washington, DC-based research firm ClearView Energy Partners, as saying he doesn't expect the ruling to reinvigorate the industry: "It's more symbolic than substantive," he said. "All the court is saying is ... you need to go back to work and consider these applications. What does that really mean?" he said. Officials could still deny permits or bog applications down in lengthy reviews, he noted.

Read more of this story at Slashdot.

  •  

New Rule Forbids GNOME Shell Extensions Made Using AI-Generated Code

An anonymous reader shared this report from Phoronix: Due to the growing number of GNOME Shell extensions looking to appear on extensions.gnome.org that were generated using AI, it's now prohibited. The new rule in their guidelines notes that AI-generated code will be explicitly rejected: "Extensions must not be AI-generated. While it is not prohibited to use AI as a learning aid or a development tool (i.e. code completions), extension developers should be able to justify and explain the code they submit, within reason. Submissions with large amounts of unnecessary code, inconsistent code style, imaginary API usage, comments serving as LLM prompts, or other indications of AI-generated output will be rejected." In a blog post, GNOME developer Javad Rahmatzadeh explains that "Some devs are using AI without understanding the code..."

Read more of this story at Slashdot.

  •  

Is the R Programming Language Surging in Popularity?

The R programming language "is sometimes frowned upon by 'traditional' software engineers," says the CEO of software quality services vendor Tiobe, "due to its unconventional syntax and limited scalability for large production systems." But he says it "continues to thrive at universities and in research-driven industries," and "for domain experts, it remains a powerful and elegant tool." Yet it's now gaining more popularity as statistics and large-scale data visualization become important (a trend he also sees reflected in the rise of Wolfram/Mathematica). That's according to December's edition of his TIOBE Index, which attempts to rank the popularity of programming languages based on search-engine results for courses, third-party vendors, and skilled engineers. InfoWorld explains: In the December 2025 index, published December 7, R ranks 10th with a 1.96% rating. R has cracked the Tiobe index's top 10 before, such as in April 2020 and July 2020, but not in recent years. The rival Pypl Popularity of Programming Language Index, meanwhile, has R ranked fifth this month with a 5.84% share. "Programming language R is known for fitting statisticians and data scientists like a glove," said Paul Jansen, CEO of software quality services vendor Tiobe, in a bulletin accompanying the December index... Although data science rival Python has eclipsed R in terms of general adoption, Jansen said R has carved out a solid and enduring niche, excelling at rapid experimentation, statistical modeling, and exploratory data analysis. "We have seen many Tiobe index top 10 entrants rising and falling," Jansen wrote. "It will be interesting to see whether R can maintain its current position." "Python remains ahead at 23.64%," notes TechRepublic, "while the familiar chase group behind it holds steady for the moment. The real movement comes deeper in the list, where SQL edges upward, R rises to the top 10, and Delphi/Object Pascal slips away... 
SQL climbs from tenth to eighth at 2.10%, adding a small +0.11% that's enough to move it upward in a tightly packed section of the table. Perl holds ninth at 1.97%, strengthened by a +1.33% gain that extends its late-year resurgence." It's interesting to see how TIOBE's rankings compare with PYPL's (which ranks languages based solely on how often language tutorials are searched on Google):

     TIOBE          PYPL
 1.  Python         Python
 2.  C              C/C++
 3.  C++            Objective-C
 4.  Java           Java
 5.  C#             R
 6.  JavaScript     JavaScript
 7.  Visual Basic   Swift
 8.  SQL            C#
 9.  Perl           PHP
10.  R              Rust

Despite their different methodologies, both lists put Python at #1, Java at #4, and JavaScript at #6.
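Since both top-10 lists appear above, it takes only a few lines to compute where the two indexes agree on a language's exact rank (a small illustrative snippet; the data is transcribed from the lists as printed, not fetched from either source):

```python
# Top-10 rankings as printed above (December 2025 editions).
tiobe = ["Python", "C", "C++", "Java", "C#",
         "JavaScript", "Visual Basic", "SQL", "Perl", "R"]
pypl = ["Python", "C/C++", "Objective-C", "Java", "R",
        "JavaScript", "Swift", "C#", "PHP", "Rust"]

# Languages holding the same rank in both indexes (1-based ranks).
agreements = [(rank, a)
              for rank, (a, b) in enumerate(zip(tiobe, pypl), start=1)
              if a == b]
print(agreements)  # [(1, 'Python'), (4, 'Java'), (6, 'JavaScript')]
```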

Read more of this story at Slashdot.

  •  

Mosquera’s last-gasp own goal hands Arsenal dramatic win against luckless Wolves

No easy games? Surely this one would be for Arsenal. Never before in English football history had a team endured a worse league record after 15 matches than Wolves. In any of the professional divisions. Their haul of two points gave an outline of the grimness, although by no means all of the detail.

Before kick-off, the bookmakers had Wolves at 28-1 to win; it was 8-1 for the draw. You just had to hand it to the club’s 3,000 travelling fans who took up their full ticket allocation. There were no trains back to Wolverhampton after the game, obviously. It was a weekend. Mission impossible? This felt like the definition of it.

Continue reading...

© Photograph: David Price/Arsenal FC/Getty Images


  •  

System76 Launches First Stable Release of COSMIC Desktop and Pop!_OS 24.04 LTS

This week System76 launched the first stable release of its Rust-based COSMIC desktop environment. Announced in 2021, it's designed for all GNU/Linux distributions — and it's shipping with Pop!_OS 24.04 LTS (based on Ubuntu 24.04 LTS). An anonymous reader shared this report from 9to5Linux: Previous Pop!_OS releases used a version of the COSMIC desktop that was based on the GNOME desktop environment. However, System76 wanted to create a new desktop environment from scratch while keeping the same familiar interface and user experience built for efficiency and fun. This means that some GNOME apps have been replaced by COSMIC apps, including COSMIC Files instead of Nautilus (Files), COSMIC Terminal instead of GNOME Terminal, COSMIC Text Editor instead of GNOME Text Editor, and COSMIC Media Player instead of Totem (Video Player). Also, the Pop!_Shop graphical package manager used in previous Pop!_OS releases has now been replaced by a new app called COSMIC Store. "If you're ambitious enough, or maybe just crazy enough, there eventually comes a time when you realize you've reached the limits of current potential, and must create something completely new if you're to go further..." explains System76 founder/CEO Carl Richell: For twenty years we have shipped Linux computers. For seven years we've built the Pop!_OS Linux distribution. Three years ago it became clear we had reached the limit of our current potential and had to create something new. Today, we break through that limit with the release of Pop!_OS 24.04 LTS with the COSMIC Desktop Environment. Today is special not only in that it's the culmination of over three years of work, but even more so in that System76 has built a complete desktop environment for the open source community... I hope you love what we've built for you. Now go out there and create. Push the limits, make incredible things, and have fun doing it!

Read more of this story at Slashdot.

  •  

'Free Software Awards' Winners Announced: Andy Wingo, Alx Sa, Govdirectory

This week the Free Software Foundation honored Andy Wingo, Alx Sa, and Govdirectory with this year's annual Free Software Awards (given to community members and groups making "significant" contributions to software freedom): Andy Wingo is one of the co-maintainers of GNU Guile, the official extension language of the GNU operating system and the Scheme "backbone" of GNU Guix. Upon receiving the award, he stated: "Since I learned about free software, the vision of a world in which hackers freely share and build on each others' work has been a profound inspiration to me, and I am humbled by this recognition of my small efforts in the context of the Guile Scheme implementation. I thank my co-maintainer, Ludovic Courtès, for his comradery over the years: we are just building on the work of the past maintainers of Guile, and I hope that we live long enough to congratulate its many future maintainers." The 2024 Award for Outstanding New Free Software Contributor went to Alx Sa for work on the GNU Image Manipulation Program (GIMP). When asked to comment, Alx responded: "I am honored to receive this recognition! I started contributing to the GNU Image Manipulation Program as a way to return the favor because of all the cool things it's allowed me to do. Thanks to the help and mentorship of amazing people like Jehan Pagès, Jacob Boerema, Liam Quin, and so many others, I hope I've been able to help other people do some cool new things, too." Govdirectory was presented with this year's Award for Projects of Social Benefit, given to a project or team responsible for applying free software, or the ideas of the free software movement, to intentionally and significantly benefit society. Govdirectory provides a collaborative and fact-checked listing of government addresses, phone numbers, websites, and social media accounts, all of which can be viewed with free software and under a free license, allowing people to always reach their representatives in freedom... 
The FSF plans to further highlight the Free Software Award winners in a series of events scheduled for the new year to celebrate their contributions to free software.

Read more of this story at Slashdot.

  •  

Applets Are Officially Going, But Java In the Browser Is Better Than Ever

"The entire java.applet package has been removed from JDK 26, which will release in March 2026," notes Inside Java. But long-time Slashdot reader AirHog links to this blog post reminding us that "Applets Are Officially Gone, But Java In The Browser Is Better Than Ever." This brings to an official end the era of applets, which began in 1996. However, for years it has been possible to build modern, interactive web pages in Java without needing applets or plugins. TeaVM provides fast, performant, and lightweight tooling to transpile Java to run natively in the browser... TeaVM, at its heart, transpiles Java code into JavaScript (or, these days, WASM). However, in order for Java code to be useful for web apps, much more is required, and TeaVM delivers. It includes a minifier, to shrink the generated code and obfuscate the intent, to complicate reverse-engineering. It has a tree-shaker to eliminate unused methods and classes, keeping your app download compact. It packages your code into a single file for easy distribution and inclusion in your HTML page. It also includes wrappers for all popular browser APIs, so you can invoke them from your Java code easily, with full IDE assistance and auto-correct. The blog post also touts Flavour, an open-source framework "for coding, packaging, and optimizing single-page apps implemented in Java... a full front-end toolkit with templates, routing, components, and more" to "build your modern single-page app using 100% Java."

Read more of this story at Slashdot.

  •  

Startup Successfully Uses AI to Find New Geothermal Energy Reservoirs

A Utah-based startup announced last week it used AI to locate a 250-degree Fahrenheit geothermal reservoir, reports CNN. It'll start producing electricity in three to five years, the company estimates — and at least one geologist believes AI could be an exciting "gamechanger" for the geothermal industry. [Startup Zanskar Geothermal & Minerals] named it "Big Blind," because this kind of site — which has no visual indication of its existence, no hot springs or geysers above ground, and no history of geothermal exploration — is known as a "blind" system. It's the first industry-discovered blind site in more than three decades, said Carl Hoiland, co-founder and CEO of Zanskar. "The idea that geothermal is tapped out has been the narrative for decades," but that's far from the case, he told CNN. He believes there are many more hidden sites across the Western U.S. Geothermal energy is a potential gamechanger. It offers the tantalizing prospect of a huge source of clean energy to meet burgeoning demand. It's near limitless, produces scarcely any climate pollution, and is constantly available, unlike wind and solar, which are cheap but rely on the sun shining and the wind blowing. The problem, however, has been how to find and scale it. It requires a specific geology: underground reservoirs of hot water or steam, along with porous rocks that allow the water to move through them, heat up, and be brought to the surface where it can power turbines... The AI models Zanskar uses are fed information on where blind systems already exist. This data is plentiful as, over the last century and more, humans have accidentally stumbled on many around the world while drilling for other resources such as oil and gas. The models then scour huge amounts of data — everything from rock composition to magnetic fields — to find patterns that point to the existence of geothermal reserves. 
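Zanskar's actual models are proprietary, but the approach described above — learn from sites where blind systems were stumbled upon, then score unexplored ground — can be sketched with a toy nearest-neighbor classifier. Every feature name and number below is invented for illustration; this is not the company's method or data:

```python
import math

# Toy training data: (heat-flow, magnetic-anomaly, fault-density) -> geothermal?
# All feature values here are invented purely for illustration.
labeled_sites = [
    ((0.9, 0.2, 0.8), True),   # blind system found by accident while drilling
    ((0.8, 0.3, 0.7), True),
    ((0.2, 0.9, 0.1), False),  # drilled, nothing commercial found
    ((0.3, 0.8, 0.2), False),
]

def predict(candidate, k=3):
    """Label a candidate site by majority vote of its k nearest labeled sites."""
    nearest = sorted(labeled_sites,
                     key=lambda site: math.dist(site[0], candidate))[:k]
    votes = sum(1 for _, is_geothermal in nearest if is_geothermal)
    return votes > k // 2

# Score an unexplored site whose (invented) survey readings look promising.
print(predict((0.85, 0.25, 0.75)))  # True
```

A production system would use far richer features and a trained statistical model, but the shape of the problem — supervised pattern recognition over geophysical survey data — is the same.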
AI models have "gotten really good over the last 10 years at being able to pull those types of signals out of noise," Hoiland said... Zanskar's discovery "is very significant," said James Faulds, a professor of geosciences at Nevada Bureau of Mines and Geology.... Estimates suggest over three-quarters of US geothermal resources are blind, Faulds told CNN. "Refining methods to find such systems has the potential to unleash many tens and perhaps hundreds of gigawatts in the western US alone," he said... Big Blind is the company's first blind site discovery, but it's the third site it has drilled and hit commercial resources. "We expect dozens, to eventually hundreds, of new sites to be coming to market," Hoiland said.... Hoiland says Zanskar's work shows conventional geothermal still has huge untapped potential. Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

  •  

Firefox Survey Finds Only 16% Feel In Control of Their Privacy Choices Online

Choosing your browser "is one of the most important digital decisions you can make, shaping how you experience the web, protect your data, and express yourself online," says the Firefox blog. They've urged readers to "take a stand for independence and control in your digital life." But they also recently polled 8,000 adults in France, Germany, the UK and the U.S. on "how they navigate choice and control both online and offline" (attending in-person events in Chicago, Berlin, LA, Munich, San Diego, and Stuttgart): The survey, conducted by research agency YouGov, showcases a tension between people's desire to have control over their data and digital privacy, and the reality of the internet today — a reality defined by Big Tech platforms that make it difficult for people to exercise meaningful choice online:
— Only 16% feel in control of their privacy choices (highest in Germany at 21%)
— 24% feel it's "too late" because Big Tech already has too much control or knows too much about them. And 36% said the feeling of Big Tech companies knowing too much about them is frustrating — highest among respondents in the U.S. (43%) and the UK (40%)
— Practices respondents said frustrated them were Big Tech using their data to train AI without their permission (38%) and tracking their data without asking (47%; highest in U.S. — 55% and lowest in France — 39%)
And from our existing research on browser choice, we know more about how defaults that are hard to change and confusing settings can bury alternatives, limiting people's ability to choose for themselves — the real problem that fuels these dynamics. Taken together, our new and existing insights could also explain why, when asked which actions feel like the strongest expressions of their independence online, choosing not to share their data (44%) was among the top three responses in each country (46% in the UK; 45% in the U.S.; 44% in France; 39% in Germany)...
We also see a powerful signal in how people think about choosing the communities and platforms they join — for 29% of respondents, this was one of their top three expressions of independence online. "For Firefox, community has always been at the heart of what we do," says their VP of Global Marketing, "and we'll keep fighting to put real choice and control back in people's hands so the web once again feels like it belongs to the communities that shape it." At TwitchCon in San Diego Firefox even launched a satirical new online card game with a privacy theme called Data War.

Read more of this story at Slashdot.

  •  

The World's Electric Car Sales Have Spiked 21% So Far in 2025

Electrek reports: EV and battery supply chain research specialists Benchmark Mineral Intelligence reports that 2.0 million electric vehicles were sold globally in November 2025, bringing global EV sales to 18.5 million units year-to-date. That's a 21% increase compared to the same period in 2024. Europe was the clear growth leader in November, while North America continued to lag following the expiration of US EV tax credits. China, meanwhile, remains the world's largest EV market by a wide margin. Europe's EV market jumped 36% year-over-year in November 2025, with BEV sales up 35% and plug-in hybrid (PHEV) sales rising 39%. That brings Europe's total EV sales to 3.8 million units for the year so far, up 33% compared to January-November 2024... In North America, EV sales in the US did tick up month-over-month in November, following a sharp October drop after federal tax credits expired on September 30, 2025. Brands including Kia (up 30%), Hyundai (up 20%), Honda (up 11%), and Subaru (232 Solterra sales versus just 13 the month before) all saw gains, but overall volumes remain below levels when the federal tax credit was still available... [North America shows a -1% drop in EV sales from January to November 2025 vs. January to November 2024] Year-to-date, EV sales in China are up 19%, with 11.6 million units sold. One of the biggest headlines out of China is exports. BYD reported a record 131,935 EV exports in November, blowing past its previous high of around 90,000 units set in June. BYD sales in Europe have jumped more than fourfold this year to around 200,000 vehicles, doubled in Southeast Asia, and climbed by more than 50% in South America... "Overall, EV demand remains resilient, supported by expanding model ranges and sustained policy incentives worldwide," said Rho Motion data manager Charles Lester. 
Beyond China, Europe, and North America, the rest of the world saw a 48% spike in EV sales in 2025 vs the same 11 months in 2024, representing 1.5 million EVs sold. "The takeaway: EV demand continues to grow worldwide," the article adds, "but policy support — or the lack thereof — is increasingly shaping where this growth shows up."
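As a quick arithmetic check, the January-November 2024 baselines implied by the reported 2025 totals and growth rates can be backed out directly:

```python
# Implied Jan-Nov 2024 EV sales (millions of units), from the figures above.
def baseline(units_2025_m, growth_pct):
    """Back out the prior-year volume that a reported growth rate implies."""
    return units_2025_m / (1 + growth_pct / 100)

print(round(baseline(18.5, 21), 1))  # global: ~15.3M units in Jan-Nov 2024
print(round(baseline(3.8, 33), 1))   # Europe: ~2.9M units
print(round(baseline(11.6, 19), 1))  # China:  ~9.7M units
```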

Read more of this story at Slashdot.

  •  

How a 23-Year-Old in 1975 Built the World's First Handheld Digital Camera

In 1975, 23-year-old electrical engineer Steve Sasson joined Kodak. And in a new interview with the BBC, he remembers that he'd found the whole photographic process "really annoying.... I wanted to build a camera with no moving parts. Now that was just to annoy the mechanical engineers..." "You take your picture, you have to wait a long time, you have to fiddle with these chemicals. Well, you know, I was raised on Star Trek, and all the good ideas come from Star Trek. So I said what if we could just do it all electronically...?" Researchers at Bell Labs in the US had, in 1969, created a type of integrated circuit called a charge-coupled device (CCD). An electric charge could be stored on a metal-oxide semiconductor (MOS), and could be passed from one MOS to another. Its creators believed one of its applications might one day be used as part of an imaging device — though they hadn't worked out how that might happen. The CCD, nevertheless, was quickly developed. By 1974, the US microchip company Fairchild Semiconductors had built the first commercial CCD, measuring just 100 x 100 pixels — the tiny electronic samples taken of an original image. The new device's ability to capture an image was only theoretical — no-one had, as yet, tried to take an image and display it. (NASA, it turned out, was also looking at this technology, but not for consumer cameras....) The CCD circuit responded to light but could only form an image if Sasson was somehow able to attach a lens to it. He could then convert the light into digital information — a blizzard of 1s and 0s — but there was just one problem: money. "I had no money to build this thing. Nobody told me to build it, and I certainly couldn't demand any money for it," he says. "I basically stole all the parts, I was in Kodak and the apparatus division, which had a lot of parts. I stole the optical assembly from an XL movie camera downstairs in a used parts bin. I was just walking by, you see it, and you take it, you know." 
He was also able to source an analogue to digital converter from a $12 (about £5 in 1974) digital voltmeter, rather than spending hundreds on the part. "I could manage to get all these parts without anybody really noticing," he says.... The bulky device needed a way to store the information the CCD was capturing, so Sasson used an audio cassette deck. But he also needed a way to view the image once it was saved on the magnetic tape. "We had to build a playback unit," Sasson says. "And, again, nobody asked me to do that either. So all I got to do is the reverse of what I did with the camera, and then I have to turn that digital pattern into an NTSC television signal."

NTSC (National Television System Committee) was the conversion standard used by American TV sets. Sasson had to turn only 100 lines of digital code captured by the camera into the 400 lines that would form a television signal. The solution was a Motorola microprocessor, and by December 1975, the camera and its playback unit were complete, the article points out. With his colleague Jim Schueckler, Sasson had spent more than a year putting together the "increasingly bulky" device, which "looked like an oversized toaster." The camera had a shutter that would take an image at about 1/20th of a second, and — if everything worked as it should — the cassette tape would start to move as the camera transferred the stored information from its CCD [which took 23 seconds]. "It took about 23 seconds to play it back, and then about eight seconds to reconfigure it to make it look like a television signal, and send it to the TV set that I stole from another lab...."

In 1978, Kodak was granted the first patent for a digital camera. It was Sasson's first invention. The patent is thought to have earned Eastman Kodak billions in licensing and infringement payments by the time they sold the rights to it, fearing bankruptcy, in 2012... 
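As a rough illustration of the conversion step described above — a toy sketch, not Sasson's actual Motorola microprocessor circuit — turning 100 captured scan lines into a 400-line display signal can be done by simply repeating each line four times:

```python
# Hypothetical sketch: upscale 100 captured scan lines to the 400 lines
# of an NTSC-style frame by line replication. Illustrative only — not
# the actual 1975 playback-unit implementation.

def replicate_lines(frame, factor=4):
    """Repeat each scan line `factor` times, preserving order."""
    return [line for line in frame for _ in range(factor)]

# A toy 100-line "frame"; each line is a list of pixel intensity values.
captured = [[i % 256] * 100 for i in range(100)]
displayed = replicate_lines(captured)
print(len(displayed))  # 400
```

Each captured line simply occupies four adjacent display lines, which is why the resulting picture looked blocky by television standards.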
As for Sasson, he never worked on anything other than the digital technology he had helped to create until he retired from Eastman Kodak in 2009. Thanks to long-time Slashdot reader sinij for sharing the article.

Read more of this story at Slashdot.

  •  

More of America's Coal-Fired Power Plants Cease Operations

New England's last coal-fired power plant "has ceased operations three years ahead of its planned retirement date," reports the New Hampshire Bulletin. "The closure of the New Hampshire facility paves the way for its owner to press ahead with an initiative to transform the site into a clean energy complex including solar panels and battery storage systems." "The end of coal is real, and it is here," said Catherine Corkery, chapter director for Sierra Club New Hampshire. "We're really excited about the next chapter...."

The closure in New Hampshire — so far undisputed by the federal government — demonstrates that prolonging operations at some facilities just doesn't make economic sense for their owners. "Coal has been incredibly challenged in the New England market for over a decade," said Dan Dolan, president of the New England Power Generators Association. Merrimack Station, a 438-megawatt power plant, came online in the 1960s and provided baseload power to the New England region for decades. Gradually, though, natural gas — which is cheaper and more efficient — took over the regional market... Additionally, solar power production accelerated from 2010 on, lowering demand on the grid during the day and creating more evening peaks. Coal plants take longer to ramp up production than other sources, and are therefore less economical for these shorter bursts of demand, Dolan said.

In recent years, Merrimack operated only a few weeks annually. In 2024, the plant generated just 0.22% of the region's electricity. It wasn't making enough money to justify continued operations, observers said. The closure "is emblematic of the transition that has been occurring in the generation fleet in New England for many years," Dolan said. "The combination of all those factors has meant that coal facilities are no longer economic in this market." 
Meanwhile Los Angeles — America's second-largest city — confirmed that the last coal-fired power plant supplying its electricity stopped operations just before Thanksgiving, reports the Utah News Dispatch: Advocates from the Sierra Club highlighted in a news release that shutting down the units had no impact on customers, and questioned who should "shoulder the cost of keeping an obsolete coal facility on standby...." Before ceasing operations, the coal units had been working at low capacities for several years because the agency's users hadn't been calling on the power [said John Ward, spokesperson for Intermountain Power Agency].

The coal-powered units "had a combined capacity of around 1,800 megawatts when fully operational," notes Electrek, "and as recently as 2024, they still supplied around 11% of LA's electricity. The plant sits in Utah's Great Basin region and powered Southern California for decades." Now, for the first time, none of California's power comes from coal. There's a political hiccup with IPP, though: the Republican-controlled Utah Legislature blocked the Intermountain Power Agency from fully retiring the coal units this year, ordering that they can't be disconnected or decommissioned. But despite that mandate, no buyers have stepped forward to keep the outdated coal units online.

The Los Angeles Department of Water and Power (LADWP) is transitioning to newly built, hydrogen-capable generating units at the same IPP location, part of a modernization effort called IPP Renewed. These new units currently run on natural gas, but they're designed to burn a blend of natural gas and up to 30% green hydrogen, and eventually 100% green hydrogen. LADWP plans to start adding green hydrogen to the fuel mix in 2026. "With the plant now idled but legally required to remain connected, serious questions remain about who will shoulder the cost of keeping an obsolete coal facility on standby," says the Sierra Club. 
One of the natural gas units started commercial operations last October, with the second starting later this month, IPP spokesperson John Ward told the Utah News Dispatch.

Read more of this story at Slashdot.

  •  

Trump attacks old foe Biden – but presidential parallels hard to avoid

US president finds himself shouldering same burdens of affordability crisis and the inexorable march of time

He was supposed to be touting the economy but could not resist taking aim at an old foe. “Which is better: Sleepy Joe or Crooked Joe?” Donald Trump teased supporters in Pennsylvania this week, still toying with nicknames for his predecessor Joe Biden. “Typically, Crooked Joe wins. I’m surprised because to me he’s a sleepy son of a bitch.”

Exulting in Biden’s drowsiness, the US president and his supporters seemed blissfully ignorant of a rich irony: that 79-year-old Trump himself has recently been spotted apparently dozing off at various meetings.

Continue reading...

© Photograph: Alex Wong/Getty Images

  •  

Rust in Linux's Kernel 'is No Longer Experimental'

Steven J. Vaughan-Nichols files this report from Tokyo: At the invitation-only Linux Kernel Maintainers Summit here, the top Linux maintainers decided, as Linux kernel developer Jonathan Corbet put it, "The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the 'experimental' tag will be coming off." As Linux kernel maintainer Steven Rostedt told me, "There was zero pushback."

This has been a long time coming. This shift caps five years of sometimes-fierce debate over whether the memory-safe language belonged alongside C at the heart of the world's most widely deployed open source operating system... It all began when Alex Gaynor and Geoffrey Thomas said at the 2019 Linux Security Summit that about two-thirds of Linux kernel vulnerabilities come from memory safety issues. Rust, in theory, could avoid these through its inherently safer application programming interfaces (APIs)... In those early days, the plan was not to rewrite Linux in Rust (it still isn't), but to adopt it selectively where it can provide the most security benefit without destabilizing mature C code. In short, new drivers, subsystems, and helper libraries would be the first targets...

Despite the fuss, more and more programs were ported to Rust. By April 2025, the Linux kernel contained about 34 million lines of C code, with only 25,000 lines written in Rust. At the same time, more and more drivers and higher-level utilities were being written in Rust. For instance, the Debian Linux distro developers announced that going forward, Rust would be a required dependency in its foundational Advanced Package Tool (APT). This change doesn't mean everyone will need to use Rust. C is not going anywhere. Still, as several maintainers told me, they expect to see many more drivers being written in Rust. 
In particular, Rust looks especially attractive for "leaf" drivers (network, storage, NVMe, etc.), where the Rust-for-Linux bindings expose safe wrappers over kernel C APIs. Nevertheless, for would-be kernel and systems programmers, Rust's new status in Linux hints at a career path that blends deep understanding of C with fluency in Rust's safety guarantees. This combination may define the next generation of low-level development work.
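As a rough illustration of the "safe wrapper" pattern described above — with hypothetical names, not the actual Rust-for-Linux bindings — a Rust type can own a raw, C-style allocation and expose only a bounds-checked, leak-free interface, confining all `unsafe` code to one small module:

```rust
// Hypothetical sketch of a safe wrapper over an unsafe, C-style
// allocation. Illustrative only; not real kernel bindings.
use std::alloc::{alloc, dealloc, Layout};

/// Safe owner of a raw heap buffer, mimicking how kernel Rust
/// wrappers own C-allocated resources.
struct Buffer {
    ptr: *mut u8,
    layout: Layout,
}

impl Buffer {
    fn new(size: usize) -> Buffer {
        let layout = Layout::array::<u8>(size).expect("bad size");
        // The only place raw allocation happens.
        let ptr = unsafe { alloc(layout) };
        assert!(!ptr.is_null(), "allocation failed");
        Buffer { ptr, layout }
    }

    fn write(&mut self, i: usize, v: u8) {
        assert!(i < self.layout.size(), "out of bounds");
        unsafe { *self.ptr.add(i) = v };
    }

    fn read(&self, i: usize) -> u8 {
        assert!(i < self.layout.size(), "out of bounds");
        unsafe { *self.ptr.add(i) }
    }
}

// Drop runs exactly once when the Buffer goes out of scope, so
// double-free and use-after-free become compile-time errors for
// callers rather than runtime kernel bugs.
impl Drop for Buffer {
    fn drop(&mut self) {
        unsafe { dealloc(self.ptr, self.layout) };
    }
}

fn main() {
    let mut b = Buffer::new(16);
    b.write(3, 42);
    println!("{}", b.read(3)); // prints 42
}
```

The point is not that `unsafe` disappears — the raw pointer work is still there — but that it is centralized and audited once, while every caller gets the compiler's ownership guarantees for free.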

Read more of this story at Slashdot.

  •  

‘A master of complications’: Felicity Kendal returns to Tom Stoppard’s Indian Ink after three decades

The writer’s former partner and her co-star Ruby Ashbourne Serkis describe the bittersweet nature of remounting his 90s play so soon after his death

‘We were swimming in the mind pool of Tom Stoppard!’ – actors salute the great playwright

I won’t, I promise, refer to Felicity Kendal as Tom Stoppard’s muse. “No,” she says firmly. “Not this week.” Speaking to Stoppard’s former partner and longtime leading lady is delicate in the immediate aftermath of the writer’s death. But she is previewing a revival of his Indian Ink, so he shimmers through the conversation. The way Kendal refers to Stoppard in the present tense tells its own poignant story.

Settling into a squishy brown sofa at Hampstead theatre, Kendal describes revisiting the 1995 work, developed from a 1991 radio play. “It’s a play that I always thought I’d like to go back to.” Previously starring as Flora Crewe, a provocative British poet visiting 1930s India, she now plays Eleanor Swan, Flora’s sister. We meet Eleanor in the 1980s, fending off an intrusive biographer but uncovering her sister’s rapt and nuanced relationships in India.

Continue reading...

© Photograph: Johan Persson

  •  

‘Getting lost is good’: skybridge and floating stairs bring fun and thrills to mighty new Taiwan museum

With its soaring ceilings, meandering pathways and mesh-like walls, Taichung Art Museum, designed by Sanaa, sweeps visitors from library to gallery to rooftop garden for rousing views

Walking through the brand new Taichung Art Museum in central Taiwan, directions are kind of an abstract concept. Designed by powerhouse Japanese architecture firm Sanaa, the complex is a collection of eight askew buildings, melding an art museum and municipal library, encased in silver mesh-like walls, with soaring ceilings and meandering pathways.

Past the lobby – a breezy open space that is neither inside nor out – the visitor wanders around paths and ramps, finding themselves in the library one minute and a world-class art exhibition the next. A door might suddenly open onto a skybridge over a rooftop garden, with sweeping views across Taichung’s Central Park, or into a cosy teenage reading room. Staircases float on the outside of buildings, and floor levels are disparate, complementing a particular space’s purpose and vibe rather than having an overall consistency.

Continue reading...

© Photograph: Iwan Baan/Image courtesy of Cultural Affairs Bureau, Taichung City Government. © Iwan Baan

  •  

Philips Hue’s New Security Camera Is Surprisingly Useful

We may earn a commission from links on this page.

Philips Hue is one of the most well-respected and popular brands in smart lights—but what about its smart security cameras? Parent company Signify has been developing Hue cameras for a couple of years now, with a video doorbell and 2K camera upgrades recently added to the portfolio of devices. (Note: This 2K version hasn't yet landed in the U.S., but the existing 1080p versions are quite similar.)

I got a chance to test out the new 2K Hue Secure camera, and alongside all the basics of a camera like this, it came with an extra bonus that worked better than I expected: seamless integration with Philips Hue lights. These two product categories actually work better together than you might think.

While you can certainly connect cameras and lights across a variety of smart home platforms, Philips Hue is one of very few manufacturers making both types of device (TP-Link is another). That gives you a simplicity and interoperability you don't really get elsewhere.

Setting up a Hue camera

Philips Hue app
All the basic security camera features are covered. Credit: Lifehacker

Hue cameras are controlled inside the same Hue app for Android or iOS as the Hue lights. You don't necessarily need a Hue Bridge to connect the camera, as it can link to your wifi directly, but the Bridge is required if you want it to be able to sync with your lights—which is one of the key features here. (If you already have the lights, you'll already have the Bridge anyway.)

The 2K Hue Secure wired camera I've been testing comes with a 2K video resolution (as the name suggests), two-way audio, a built-in siren, infrared night vision, and weatherproofing (so you can use it indoors or out). As well as the wired version I've got here, there's also a battery-powered option, and a model that comes with a desktop stand.

Once configured, the camera lives in the same Home tab inside the mobile app as any Philips Hue lights you've got. The main panel doesn't show the camera feed—instead, it shows the armed status of the camera, which can be configured separately depending on whether you're at home or not. The idea is that you don't get disturbed with a flurry of unnecessary notifications when you're moving around.

The basic functionality is the same as every other security camera: Motion is detected and you get a ping to your phone with details, with a saved clip of the event that stays available for 24 hours. You can also tap into the live feed from the camera at any time, should you want to check in on the pets or the backyard.

As is often the case with security cameras, there is an optional subscription plan that gives you long-term video clip storage, activity zone settings, and AI-powered identification of people, animals, vehicles, and packages. That will set you back from $4 a month, with a discount if you pay for a year at a time.

Syncing a camera with smart lights

Philips Hue app
Your cameras can be used as customized triggers for your lights. Credit: Lifehacker

I started off a little unsure about just how useful it would be to connect up the Hue cameras and Hue lights—it's not a combination that gets talked about much—but it's surprisingly useful. If you delve into the camera settings inside the Hue app, there's a Trigger lights section especially for this.

You get to choose which of your lights are affected—they don't all have to go on and off together—and there are customizations for color and brightness across certain time schedules. You could have your bulbs glowing red during the night, for example, or turning bright blue during the daytime. You can also set how long the lights stay on.

It's not the most sophisticated system, but it works: If someone is loitering around your property, you can have a selected number of lights turn on to put them off, or to suggest that someone is in fact at home. This is in addition to everything else you can do, including sounding a siren through the camera, and because it works through the Hue Bridge it all happens pretty much instantaneously.

You can also set specific cameras as basic motion sensors for you and your family—lighting up the way to the bathroom late at night, for example. This can work even when the system is disarmed, so there's no wifi video streaming happening, but the cameras are still watching out for movement and responding accordingly.

There's one more option worth mentioning in the security settings in the Hue app: "mimic presence." This can randomly turn your lights on and off at certain points in the day, and the schedule you choose can be controlled by whether or not your Hue security is armed or disarmed (so nothing happens when everyone is at home).

  •  

Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority

OT oversight is an expensive industrial paradox. It’s hard to believe that an area can be simultaneously underappreciated, underfunded, and under increasing attack. And yet, with ransomware hackers knowing that downtime equals disaster, and companies failing to monitor accordingly, this is an open and glaring hole across many ecosystems. Even a glance at the numbers...

The post Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority appeared first on Security Boulevard.

  •