
Elon Musk teams with El Salvador to bring Grok chatbot to public schools

11 December 2025 at 18:11

President Nayib Bukele entrusting chatbot known for calling itself ‘MechaHitler’ to create ‘AI-powered’ curricula

Elon Musk is partnering with the government of El Salvador to bring his artificial intelligence company’s chatbot, Grok, to more than 1 million students across the country, according to a Thursday announcement by xAI. Over the next two years, the plan is to “deploy” the chatbot to more than 5,000 public schools in an “AI-powered education program”.

xAI’s Grok is better known for referring to itself as “MechaHitler” and espousing far-right conspiracy theories than it is for public education. Over the past year, the chatbot has spewed antisemitic content, decried “white genocide” and claimed Donald Trump won the 2020 election.

Continue reading...

© Photograph: Evan Vucci/AP

Rethinking Security as Access Control Moves to the Edge

11 December 2025 at 13:35

The convergence of physical and digital security is driving a shift toward software-driven, open-architecture edge computing. Access control has typically been treated as a physical domain problem — managing who can open which doors, using specialized systems largely isolated from broader enterprise IT. However, the boundary between physical and digital security is increasingly blurring. With..

The post Rethinking Security as Access Control Moves to the Edge appeared first on Security Boulevard.

Disappointing Oracle results knock $80bn off value amid AI bubble fears

11 December 2025 at 13:37

Weaker-than-forecast quarterly data for Larry Ellison’s tech company shows slowdown in revenue growth and big rise in spending

Oracle’s shares tumbled 15% on Thursday in response to the company’s quarterly financial results, disclosed the day before.

Roughly $80bn vanished from the value of the business software company co-founded by Donald Trump ally Larry Ellison, which fell from $630bn (£470bn) to $550bn, fuelling fears of a bubble in artificial intelligence-related stocks. Shares in the chipmaker Nvidia, seen as a bellwether for the AI boom, fell after Oracle’s.

Continue reading...

© Photograph: Sundry Photography/Alamy

Disney to invest $1bn in OpenAI, allowing characters in Sora video tool

11 December 2025 at 09:31

Agreement comes amid anxiety in Hollywood over impact of AI on the industry, expression and rights of creators

Walt Disney has announced a $1bn equity investment in OpenAI, enabling the AI startup’s Sora video generation tool to use its characters.

Users of Sora will be able to generate short, user-prompted social videos that draw on more than 200 Disney, Marvel, Pixar and Star Wars characters as part of a three-year licensing agreement between OpenAI and the entertainment giant.

Continue reading...

© Photograph: Gary Hershorn/Getty Images

Consumer test drive: can AI do your Christmas gift shopping for you?

10 December 2025 at 08:01

The short answer is yes, but if you don’t want big brands or to use Amazon then more time and a lot more prompts are needed

The question “what present do you recommend for …” will be tapped into phones and computers countless times over this festive period, as more people turn to AI platforms to help choose gifts for loved ones.

With a quarter of Britons using AI to find products, brands are increasingly adapting their strategies to ensure their products are the ones recommended, especially those trying to reach younger audiences.

Continue reading...

© Photograph: Peter Morgan/AP

Securing VMware workloads in regulated industries

At a regional hospital, a cardiac patient’s lab results sit behind layers of encryption, accessible to his surgeon but shielded from those without strictly need-to-know status. Across the street at a credit union, a small business owner anxiously awaits the all-clear for a wire transfer, unaware that fraud detection systems have flagged it for further review.

Such scenarios illustrate how companies in regulated industries juggle competing directives: Move data and process transactions quickly enough to save lives and support livelihoods, but carefully enough to maintain ironclad security and satisfy regulatory scrutiny.

Organizations subject to such oversight walk a fine line every day. And recently, a number of curveballs have thrown off that hard-won equilibrium. Agencies are ramping up oversight thanks to escalating data privacy concerns; insurers are tightening underwriting and requiring controls like MFA and privileged-access governance as a condition of coverage. Meanwhile, the shifting VMware landscape has introduced more complexity for IT teams tasked with planning long-term infrastructure strategies. 

Download the full article

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

AI researchers are to blame for serving up slop | Letter

9 December 2025 at 11:43

They have unleashed irresponsible innovations on the world and their slop generators have flooded academia, says Dr Craig Reeves

I’m not surprised to read that the field of artificial intelligence research is complaining about being overwhelmed by the very slop that it has pioneered (Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’, 6 December). But this is a bit like bears getting indignant about all the shit in the woods.

It serves AI researchers right for the irresponsible innovations that they’ve unleashed on the world, without ever bothering to ask the rest of us whether we wanted it.

Continue reading...

© Photograph: Dado Ruvić/Reuters

Don’t use ‘admin’: UK’s top 20 most-used passwords revealed as scams soar

7 December 2025 at 02:00

Easy-to-guess words and figures still dominate, alarming cybersecurity experts and delighting hackers

It is a hacker’s dream. Even in the face of repeated warnings to protect online accounts, a new study reveals that “admin” is the most commonly used password in the UK.

The second most popular, “123456”, is also unlikely to keep hackers at bay.

Continue reading...

© Photograph: imageBROKER.com/Alamy

Meta Weighs Cuts to Its Metaverse Unit

4 December 2025 at 15:37
Meta plans to direct its investments to focus on wearables like its augmented reality glasses but does not plan to abandon building the metaverse.

© Jim Wilson/The New York Times

Meta’s virtual reality headset last year. The company’s augmented reality glasses have become a surprise hit.

Microsoft cuts AI sales targets in half after salespeople miss their quotas

3 December 2025 at 13:24

Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”

The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.

Read full article

© Wong Yu Liang via Getty Images

On recreating the lost SDK for a 42-year-old operating system: VisiCorp VisiOn

3 December 2025 at 17:37

I would think most of us here at OSNews are aware of VisiOn, the graphical multitasking operating system for the IBM PC, which was one of the first operating systems with a graphical user interface, predating Windows, GEM, the Mac, and even the Apple Lisa. While VisiOn was technically an “open” platform anybody could develop an application for, the operating system’s SDK cost $7000 at the time and required a VAX system. This, combined with VisiOn failing in the market, means nobody knows how to develop an application for it.

Until now. Over the past few months, Nina Kalinina painstakingly unraveled VisiOn so that she could recreate the SDK from scratch. In turn, this allowed developer Atsuko to develop a clean-room application for VisiOn – which is most likely the very first third-party application ever developed and released for VisiOn. I’ve been following along with the pains Kalinina had to go through for this endeavour over on Fedi, and it sure was a wild ride that few would be willing (or able) to undertake.

It took me a month of working 1-2 hours a day to produce a specification that allowed Atsuko to implement a clean-room homebrew application for VisiOn that is capable of bitmap display, menus and mouse handling.

If you’re wondering what it felt like: this project is the largest “Sudoku puzzle” I have ever tried to solve. In this note, I have tried to explain the process of solving this puzzle, as well as noteworthy things about VisiOn and its internals.

↫ Nina Kalinina

The article contains both a detailed look at VisiOn and the full process of recreating its SDK and developing an application with it. Near the end of the article, after going over all the work that was required to get here, there’s a sobering clarification:

This reverse-engineering project ended up being much bigger than I anticipated. We have a working application, yes, but so far I’ve documented less than 10% of all the VisiHost and VisiOp calls. We still don’t know how to implement keyboard input, or how to work with timers and background processes (if it is possible).

↫ Nina Kalinina

I’d love for more people to be interested in helping this effort out, as it’s not just an extremely difficult challenge, but also a massive contribution to software preservation. VisiOn may not be more than a small footnote in computing history, but it still deserves to be remembered and understood, and Kalinina and Atsuko have done an amazing amount of legwork for whoever wants to pick this up, too.

Why is running Linux on a RiscPC so hard?

3 December 2025 at 16:19

What if you have a Risc PC, but aside from RISC OS, you also want to run Linux? Well, then you have to jump through a lot of hoops, especially in 2025.

Well, this was a mess. I don’t know why Potato is so crashy when I install it. I don’t know why the busybox binary in the Woody initrd is so broken. But I’ve got it installed, and now I can do circa-2004 UNIX things with a machine from 1994.

↫ Jonathan Pallant

The journey is definitely the most rewarding experience here for us readers, but I’m fairly sure Pallant is just happy to have a working Linux installation on his Risc PC and wants to mostly forget about that journey. Still, reading about the Risc PC is very welcome, since it’s one of those platforms you just don’t hear about very often between everyone talking about classic Macs and Commodore 64s all the time.

A vector graphics workstation from the 70s

3 December 2025 at 10:26

OK I promised computers, so let’s move to the Tek 4051 I got! Released in 1975, this was based on the 4010 series of terminals, but with a Motorola 6800 computer inside. This machine ran, like so many at the time, BASIC, but with extra subroutines for drawing and manipulating vector graphics. 8KB RAM was standard, but up to 32KB RAM could be installed. Extra software was installed via ROM modules in the back, for example to add DSP routines. Data could be saved on tape, and via RS232 and GPIB external devices could be attached!

All in all, a pretty capable machine, especially in 1975. BASIC computers were getting common, but graphics was pretty new. According to Tektronix the 4051 was ideal for researchers, analysts and physicians, and this could be yours for the low low price of 6 grand, or around $36,000 in 2025. I could not find sales figures, but it seems that this was a decently successful machine. Tektronix also made the 4052, with a faster CPU, and the 4054, a 19″ 4K resolution behemoth! Tektronix continued making workstations until the 90s but like almost all workstations of the era, x86/Linux eventually took over the entire workstation market.

↫ Rik te Winkel at Just another electronics blog

Now that’s a retro computer you don’t see very often.

Accelerating VMware migrations with a factory model approach

In 1913, Henry Ford cut the time it took to build a Model T from 12 hours to just over 90 minutes. He accomplished this feat through a revolutionary breakthrough in process design: Instead of skilled craftsmen building a car from scratch by hand, Ford created an assembly line where standardized tasks happened in sequence, at scale.

The IT industry is having a similar moment of reinvention. Across operations from software development to cloud migration, organizations are adopting an AI-infused factory model that replaces manual, one-off projects with templated, scalable systems designed for speed and cost-efficiency.

Take VMware migrations as an example. For years, these projects resembled custom production jobs—bespoke efforts that often took many months or even years to complete. Fluctuating licensing costs added a layer of complexity, just as business leaders began pushing for faster modernization to make their organizations AI-ready. That urgency has become nearly universal: According to a recent IDC report, six in 10 organizations evaluating or using cloud services say their IT infrastructure requires major transformation, while 82% report their cloud environments need modernization.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Moving toward LessOps with VMware-to-cloud migrations

Today’s IT leaders face competing mandates to do more (“make us an ‘AI-first’ enterprise—yesterday”) with less (“no new hires for at least the next six months”).

VMware has become a focal point of these dueling directives. It remains central to enterprise IT, with 80% of organizations using VMware infrastructure products. But shifting licensing models are prompting teams to reconsider how they manage and scale these workloads, often on tighter budgets.

For many organizations, the path forward involves adopting a LessOps model, an operational strategy that makes hybrid environments manageable without increasing headcount. This operational philosophy minimizes human intervention through extensive automation and self-service capabilities while maintaining governance and compliance.

In practice, VMware-to-cloud migrations create a “two birds, one stone” opportunity. They present a practical moment to codify the automation and governance practices LessOps depends on—laying the groundwork for a leaner, more resilient IT operating model.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Mexico Unveils Plans To Build Most Powerful Supercomputer In Latin America

26 November 2025 at 22:30
An anonymous reader quotes a report from the Associated Press: Mexico unveiled plans Wednesday to build what it claims will be Latin America's most powerful supercomputer -- a project the government says will help the country capitalize on the rapidly evolving uses of artificial intelligence and exponentially expand the country's computing capacity. Dubbed "Coatlicue" for the Mexica goddess considered the earth mother, the supercomputer would be seven times more powerful than the region's current leader in Brazil, said Jose Merino, head of the Telecommunications and Digital Transformation Agency. President Claudia Sheinbaum said during her morning news briefing that the location for the project had not been decided yet, but construction will begin next year. "We're very excited," said Sheinbaum, an academic and climate scientist. "It is going to allow Mexico to fully get in on the use of artificial intelligence and the processing of data that today we don't have the capacity to do." Merino said that Mexico's most powerful supercomputer operates at 2.3 petaflops -- a petaflop being a unit of computing speed equal to one quadrillion operations per second. Coatlicue would have a capacity of 314 petaflops.

Read more of this story at Slashdot.

Vision Pro M5 review: It’s time for Apple to make some tough choices

26 November 2025 at 12:00

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.

Read full article

© Samuel Axon

Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The why of LisaGUI

21 November 2025 at 16:48

LisaGUI is an amazing project that recreates the entire user interface of the Apple Lisa in the browser, using nothing but CSS, a bit of HTML, and SVG files, and it’s an absolute joy to use and experience. Its creator, Andrew Yaros, has published a blog post diving into the why and how of LisaGUI.

I had been trying to think of a good project to add to my programming portfolio, which was lacking. Finding an idea I was willing and able to execute on proved harder than expected. Good ideas are born from necessity and enthusiasm; trying to create a project for its own sake tends to be an uphill battle. I was also hoping to think of a specific project idea that hasn’t really been tried before. As you may have guessed by the title of this post, LisaGUI ended up being that project, although I didn’t really set out to make it as much as I stumbled into it while trying to accomplish something else.

↫ Andrew Yaros

I’m someone who prefers to run the real thing on real hardware, but in a lot of cases, that’s just not realistic anymore. Hardware like the Apple Lisa is not only hard to find and expensive, it also requires considerable knowledge and skill to maintain and possibly repair, which not everyone can do. For these types of machines, virtualisation, emulation, and recreation are much better, more accessible options, especially if it involves hardware and software you’re not interested enough in to spend time and money on.

Google tells employees it must double capacity every 6 months to meet AI demand

21 November 2025 at 16:47

While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling their own employees internally. Vahdat, a vice president at Google Cloud, presented slides to employees showing the company needs to scale “the next 1000x in 4-5 years.”

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”
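As a back-of-the-envelope check on those figures (a sketch of the arithmetic only, not anything sourced from Google), doubling every six months compounds to roughly a thousandfold increase within about five years. A quick Python sketch:

# Capacity doubling every six months, starting from 1x, over five years.
capacity = 1.0
for period in range(1, 11):  # ten six-month periods = five years
    capacity *= 2
    print(f"After {period * 6} months: {capacity:.0f}x")
# 256x at 48 months, 512x at 54 months, 1024x at 60 months --
# in line with the "next 1000x in 4-5 years" slide Vahdat showed.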

Read full article

© Google

The A.I. Boom Has Found Another Gear. Why Can’t People Shake Their Worries?

20 November 2025 at 20:34
It is a time of superlatives in the tech industry, with historic profits, stock prices and deal prices. It’s enough to make some people very nervous.

© Scott Ball for The New York Times

OpenAI’s Stargate data center complex in Abilene, Texas.

Techstrong Group and DigiCert Unveil the “Quantum Security 25” to Spotlight Leaders Shaping the Future of Quantum Security

20 November 2025 at 12:46

Inaugural awards celebrate the pioneers turning quantum’s promise into real-world impact, bridging theory and practice in the next era of secure computing. Boca Raton, FL, November 20, 2025 — Techstrong Group, in collaboration with DigiCert, today announced the launch of Quantum Security 25, a new awards program recognizing the top 25 most influential people in..

The post Techstrong Group and DigiCert Unveil the “Quantum Security 25” to Spotlight Leaders Shaping the Future of Quantum Security appeared first on Security Boulevard.

New Attacks Against Secure Enclaves

10 November 2025 at 07:04

Encryption can protect data at rest and data in transit, but does nothing for data in use. What we have are secure enclaves. I’ve written about this before:

Almost all cloud services have to perform some computation on our data. Even the simplest storage provider has code to copy bytes from an internal storage system and deliver them to the user. End-to-end encryption is sufficient in such a narrow context. But often we want our cloud providers to be able to perform computation on our raw data: search, analysis, AI model training or fine-tuning, and more. Without expensive, esoteric techniques, such as secure multiparty computation protocols or homomorphic encryption techniques that can perform calculations on encrypted data, cloud servers require access to the unencrypted data to do anything useful.

Fortunately, the last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

Secure enclaves are critical in our modern cloud-based computing architectures. And, of course, they have vulnerabilities:

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and TDX/SDX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing them to work against the latest TEEs.

Yes, these attacks require physical access. But that’s exactly the threat model secure enclaves are supposed to secure against.

Why Debt Funding Is Ratcheting Up the Risks of the A.I. Boom

10 November 2025 at 05:00
While the tech giants have plenty of money to build data centers, smaller outfits are taking on debt and taking big chances to work with them.

© Shelby Tauber/Reuters

OpenAI is involved in a massive data center project in Abilene, Texas.

Your Security Team Is About to Get an AI Co-Pilot — Whether You’re Ready or Not: Report

8 November 2025 at 13:47

The days of human analysts manually sorting through endless security alerts are numbered. By 2028, artificial intelligence (AI) agents will handle 80% of that work in most security operations centers worldwide, according to a new IDC report. But while AI promises to revolutionize defense, it’s also supercharging the attackers. IDC predicts that by 2027, 80%..

The post Your Security Team Is About to Get an AI Co-Pilot — Whether You’re Ready or Not: Report appeared first on Security Boulevard.

A new ion-based quantum computer makes error correction simpler

5 November 2025 at 16:43

The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. 

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s.

“Helios is an important proof point in our road map about how we’ll scale to larger physical systems,” says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum’s majority owner.

Located at Quantinuum’s facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium.  These components all sit within a chamber that is cooled to about 15 Kelvin (-432.67 ℉), on top of an optical table. Users can access the computer by logging in remotely over the cloud.

Helios encodes information in the ions’ quantum states, which can represent not only 0s and 1s, like the bits in classical computing, but probabilistic combinations of both, known as superpositions. A hallmark of quantum computing, these superposition states are akin to the state of a coin flipping in the air—neither heads nor tails, but some probability of both. 

Quantum computing exploits the unique mathematics of quantum-mechanical objects like ions to perform computations. Proponents of the technology believe this should enable commercially useful applications, such as highly accurate chemistry simulations for the development of batteries or better optimization algorithms for logistics and finance. 

In the last decade, researchers at companies and academic institutions worldwide have incrementally developed the technology with billions of dollars of private and public funding. Still, quantum computing is in an awkward teenage phase. It’s unclear when it will bring profitable applications. Of late, developers have focused on scaling up the machines. 

A key challenge to making a more powerful quantum computer is implementing error correction. Like all computers, quantum computers occasionally make mistakes. Classical computers correct these errors by storing information redundantly. Owing to quirks of quantum mechanics, quantum computers can’t do this and require special correction techniques. 

Quantum error correction involves storing a single unit of information in multiple qubits rather than in a single qubit. The exact methods vary depending on the specific hardware of the quantum computer, with some machines requiring more qubits per unit of information than others. The industry refers to an error-corrected unit of quantum information as a “logical qubit.” Helios needs two ions, or “physical qubits,” to create one logical qubit.

This is fewer physical qubits than needed in recent quantum computers made of superconducting circuits. In 2024, Google used 105 physical qubits to create a logical qubit. This year, IBM used 12 physical qubits per single logical qubit, and Amazon Web Services used nine physical qubits to produce a single logical qubit. All three companies use variations of superconducting circuits as qubits.
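To put those overhead figures side by side, here is a small Python sketch using only the physical-to-logical ratios quoted above; the 50-logical-qubit target is a hypothetical number chosen for illustration, and real requirements depend on error rates, code distance, and architecture.

# Toy comparison of physical-qubit overhead using the ratios cited above.
ratios = {
    "Quantinuum Helios (trapped ions)": 2,
    "Google, 2024 (superconducting)": 105,
    "IBM, 2025 (superconducting)": 12,
    "AWS, 2025 (superconducting)": 9,
}

target_logical = 50  # hypothetical target, for illustration only
for platform, physical_per_logical in ratios.items():
    print(f"{platform}: ~{target_logical * physical_per_logical} "
          f"physical qubits for {target_logical} logical qubits")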

Helios is noteworthy for its qubits’ precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer’s qubit error rates are low to begin with, which means it doesn’t need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. “To the best of my knowledge, no other platform is at this level,” says Islam.

This advantage comes from a design property of ions. Unlike superconducting circuits, which are affixed to the surface of a quantum computing chip, ions on Quantinuum’s Helios chip can be shuffled around. Because the ions can move, they can interact with every other ion in the computer, a capacity known as “all-to-all connectivity.” This connectivity allows for error correction approaches that use fewer physical qubits. In contrast, superconducting qubits can only interact with their direct neighbors, so a computation between two non-adjacent qubits requires several intermediate steps involving the qubits in between. “It’s becoming increasingly more apparent how important all-to-all-connectivity is for these high-performing systems,” says Strabley.

Still, it’s not clear what type of qubit will win in the long run. Each type has design benefits that could ultimately make it easier to scale. Ions (which are used by the US-based startup IonQ as well as Quantinuum) offer an advantage because they produce relatively few errors, says Islam: “Even with fewer physical qubits, you can do more.” However, it’s easier to manufacture superconducting qubits. And qubits made of neutral atoms, such as the quantum computers built by the Boston-based startup QuEra, are “easier to trap” than ions, he says. 

Besides increasing the number of qubits on its chip, another notable achievement for Quantinuum is that it demonstrated error correction “on the fly,” says David Hayes, the company’s director of computational theory and design. That’s a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than chips known as FPGAs, also used in the industry.

Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios’s predecessor, with the claim that it “rivals the best classical approaches in expanding our understanding of magnetism.” Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor. 

“These aren’t contrived problems,” says Hayes. “These are problems that the Department of Energy, for example, is very interested in.”

Quantinuum plans to build another version of Helios in its facility in Minnesota. It has already begun to build a prototype for a fourth-generation computer, Sol, which it plans to deliver in 2027, with 192 physical qubits. Then, in 2029, the company hopes to release Apollo, which it says will have thousands of physical qubits and should be “fully fault tolerant,” or able to implement error correction at a large scale.

From vibe coding to context engineering: 2025 in software development

5 November 2025 at 05:31

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly, given the implied imprecision of vibe-based coding, antipatterns have been proliferating. We’ve once again noted, for instance, complacency with AI-generated code in the latest volume of the Technology Radar, but it’s also worth pointing out that early ventures into vibe coding also exposed a degree of complacency about what AI models can actually handle — users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind increasing interest in engineering context. We’re well aware of its importance, working with coding assistants like Claude Code and Augment Code. Providing necessary context—or knowledge priming—is crucial. It ensures outputs are more consistent and reliable, which will ultimately lead to better software that needs less work — reducing rewrites and potentially driving productivity.

When context is effectively prepared, we’ve seen good results using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don’t have full access to the source code.

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop of changes that have happened over recent months is the growth of agents and agentic systems — both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards embed. It would be remiss to not mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way with standardizing how agents interact with one another. 

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things. 

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

OpenAI Signs $38 Billion Cloud Computing Deal With Amazon

3 November 2025 at 14:07
After signing agreements to use computing power from Nvidia, AMD and Oracle, OpenAI is teaming up with the world’s largest cloud computing company.

© Haiyun Jiang/The New York Times

Sam Altman, the chief executive of OpenAI, at the White House in September.

A.I. Spending Is Accelerating Among Tech’s Biggest Companies

31 October 2025 at 05:02
Despite the risk of a bubble, Google, Meta, Microsoft and Amazon plan to spend billions more on artificial intelligence than they already do.

© Christie Hemm Klok for The New York Times

Google is among several big technology companies increasing their spending on data centers.

Amazon’s Profit Is Up 38% on Strong Performance

30 October 2025 at 18:03
After unexpectedly strong sales and profits across its consumer and cloud businesses, the tech giant said another strong quarter might be ahead.

© AJ Mast for The New York Times

Amazon’s cloud computing complex in New Carlisle, Ind. The company reported that sales for that division were up 20 percent from a year earlier.

Microsoft Increases Investments Amid A.I. Race

29 October 2025 at 19:21
The company reported higher-than-expected capital expenditures of $34.9 billion in its latest quarter.

© Chona Kasinger for The New York Times

Microsoft has said the demand for its cloud computing services outpaces its available data centers.

Signal’s Post-Quantum Cryptographic Implementation

29 October 2025 at 07:09

Signal has just rolled out its quantum-safe cryptographic implementation.

Ars Technica has a really good article with details:

Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.

The Signal engineers have given this third ratchet the formal name: Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the Usenix 25 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.
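As a minimal sketch of that key-mixing step (assuming HKDF as the key derivation function and random byte strings standing in for the two ratchets' outputs; this is not Signal's actual code), the hybrid derivation looks roughly like this in Python, using the third-party cryptography package:

# Illustrative only: the stand-in keys below are random bytes; in Signal the
# inputs come from the DH-based Double Ratchet and the ML-KEM-based SPQR ratchet.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def mix_keys(classical_ratchet_key: bytes, pq_ratchet_key: bytes) -> bytes:
    """Derive one message key from both ratchet outputs, so it stays secure
    as long as at least one of the two inputs is uncompromised."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-ratchet-demo",  # hypothetical label, not Signal's
    ).derive(classical_ratchet_key + pq_ratchet_key)

classical_key = os.urandom(32)  # stand-in for the Double Ratchet output
pq_key = os.urandom(32)         # stand-in for the quantum-safe ratchet output
print(mix_keys(classical_key, pq_key).hex())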

Also read this post on X.

I’d like to speak to the Bellcore ManaGeR

27 October 2025 at 16:36

I love it when I discover – usually through people smarter than I – an operating system or graphical user interface I’ve never heard of. This time, we’ve got Bellcore MGR, as meticulously detailed by Nina Kalinina a few weeks ago.

I love old computers, and I enjoy looking at old user interfaces immensely. I could spend a whole evening on installing an old version of MS Word and playing with it: “Ah, look, how cute, they didn’t invent scrollbars just yet”. A special place in my heart is taken by user interfaces that were historically significant and yet fell into relative obscurity (like Windows 2 or BTRON).

This is why I absolutely had to try Bellcore MGR. An early windowing system (1984), it was made by Bell Communications Research, and it looked like Plan 9’s older sister. The system was distributed over the Usenet, ported to every conceivable Unix-like system, including Minix, Linux and Coherent, and – eventually – mostly forgotten. The only two videos on YouTube that have something to do with MGR have a bit over 1000 views combined, and don’t really show it in the best light possible. And I think it’s a crying shame.

↫ Nina Kalinina

The reference to Plan 9 is apt, as MGR definitely seems to function almost exactly like Plan 9’s rio graphical user interface, including things like drawing a rectangle to open a new window. Rio is an acquired taste – to put it very mildly – and it seems MGR fits the same bill. There’s also $home movie, an entire video editor for MGR, which is honestly mind-blowing considering it’s running on a mere SPARCstation in the late ’80s and early ’90s. It has an incredibly unique UNIXy flavour:

If you don’t have 40 minutes to watch the tour, please do spend two minutes on this demo of the “$HOME MOVIE” system. It is “a suite of tools for the capture, editing and playback of window system sessions on a Sun Sparcstation” based on MGR. It is probably the most Unix way of making videos: the window manager dumps the rendering commands into a file, then the rendering commands can be altered with a set of small tools, some of which are in awk, and then these rendering commands can be packaged into a single demo.

↫ Nina Kalinina

Kalinina had to more or less reverse-engineer its unique video format, too, but in doing so managed to upload the original demonstration of $home movie, narrated by its creator and created in $home movie itself, to YouTube. Kalinina also created and uploaded a ready-made hard disk image of Debian 0.93 with Bellcore MGR preinstalled for use in Qemu and 86Box.

Google’s Quantum Computer Makes a Big Technical Leap

22 October 2025 at 12:14
Designed to accelerate advances in medicine and other fields, the tech giant’s quantum algorithm runs 13,000 times as fast as software written for a traditional supercomputer.

© Adam Amengual for The New York Times

A quantum computer at Google’s quantum research facility near Santa Barbara, Calif.

Amazon’s AWS Disruption Creates Outages for Hundreds of Websites for Hours

20 October 2025 at 21:33
Amazon Web Services, a major provider of cloud services, cited a problem at its data center in Northern Virginia. The outage highlighted the fragility of global internet infrastructure.

© Sean Gallup/Getty Images

The Amazon Web Services pavilion at a trade fair in Hanover, Germany, in March.

Turning migration into modernization

In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap.

Amid concerns about VMware licensing changes and steeper support costs, analysts noticed an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes. CIOs contending with pricing hikes and product roadmap opacity face a daunting choice: double down on a familiar but costlier stack, or use the disruption to rethink how—and where—critical workloads should run.


“There’s still a lot of uncertainty in the marketplace around VMware,” explains Matt Crognale, senior director, migrations and modernization at cloud modernization firm Effectual, adding that the VMware portfolio has been streamlined and refocused over the past couple of years. “The portfolio has been trimmed down to a core offering focused on the technology versus disparate systems.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Encore 91 computer system

30 September 2025 at 10:46

Have you ever heard of the Encore 91 computer system, developed and built by Encore Computer Corporation? I stumbled upon the name of this system on the website for the Macintosh-like virtual window manager (MLVWM), an old X11 window manager designed to copy some of the look and feel of the classic Mac OS, and wanted to know more about it. An old website from what appears to be a reseller of the Encore 91 still has a detailed description and sales pitch of the machine online, and it’s a great read.

The hardware architecture of the Encore 91 series is based on the Motorola high-performance 88100 25MHz RISC processor. A basic system is a highly integrated fully symmetrical single board multiprocessor. The single board includes two or four 88100 processors with supporting cache memory, 16 megabytes of shared main memory, two synchronous SCSI ports, an Ethernet port, 4 asynchronous ports, real-time clocks, timers, interrupts and a VME-64 bus interface. The VME-64 bus provides full compatibility with VME plus enhancements for greater throughput. Shared main memory may be expanded to 272 megabytes (mb) by adding up to four expansion cards. The expansion memory boards have the same high-speed access characteristics as local memory.

Encore computing 91 system

The Encore 91 ran a combination of AT&T’s System V.3.2 UNIX and Encore’s POSIX-compliant MicroMPX real-time kernel, and would be followed by machines with more powerful processors in the 88xxx series, as well as machines based on the Alpha architecture. The company also created and sold its own modified RISC architecture, RSX, for which there are still some details available online. Bits and bobs of the company were spun off and sold off, and I don’t think much of the original company is still around today.

Regardless, it’s an interesting system with an interesting history, but we’ll most likely never get to see one in action – unless it turns up in some weird corner of the United States where the rare working examples of hardware like this invariably tend to end up.

Designing CPUs for next-generation supercomputing

In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks. 

Each of these professionals—and the consumers, commuters, and investors who depend on their insights—relies on a time-tested pillar of high-performance computing: the humble CPU.

With GPU-powered AI breakthroughs getting the lion’s share of press (and investment) in 2025, it is tempting to assume that CPUs are yesterday’s news. Recent predictions anticipate that GPU and accelerator installations will increase by 17% year over year through 2030. But, in reality, CPUs are still responsible for the vast majority of today’s most cutting-edge scientific, engineering, and research workloads. Evan Burness, who leads Microsoft Azure’s HPC and AI product teams, estimates that CPUs still support 80% to 90% of HPC simulation jobs today.

In 2025, not only are these systems far from obsolete, they are experiencing a technological renaissance. A new wave of CPU innovation, including high-bandwidth memory (HBM), is delivering major performance gains—without requiring costly architectural resets.

Download the report.

To learn more, watch the new webcast “Powering HPC with next-generation CPUs.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Powering HPC with next-generation CPUs

For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and indispensable than ever.

The competitive landscape around CPUs has intensified. Once dominated almost exclusively by Intel’s x86 chips, the market now includes powerful alternatives based on ARM and even emerging architectures like RISC-V. Flagship examples like Japan’s Fugaku supercomputer demonstrate how CPU innovation is pushing performance to new frontiers. Meanwhile, cloud providers like Microsoft and AWS are developing their own silicon, adding even more diversity to the ecosystem.

What makes CPUs so enduring? Flexibility, compatibility, and cost efficiency are key. As Evan Burness of Microsoft Azure points out, CPUs remain the “it-just-works” technology. Moving complex, proprietary code to GPUs can be an expensive and time-consuming effort, while CPUs typically support software continuity across generations with minimal friction. That reliability matters for businesses and researchers who need results, not just raw power.

Innovation is also reshaping what a CPU can be. Advances in chiplet design, on-package memory, and hybrid CPU-GPU architectures are extending the performance curve well beyond the limits of Moore’s Law. For many organizations, the CPU is the strategic choice that balances speed, efficiency, and cost.

Looking ahead, the relationship between CPUs, GPUs, and specialized processors like NPUs will define the future of HPC. Rather than a zero-sum contest, it’s increasingly a question of fit-for-purpose design. As Addison Snell, co-founder and chief executive officer of Intersect360 Research, notes, science and industry never run out of harder problems to solve.

That means CPUs, far from fading, will remain at the center of the computing ecosystem.

To learn more, read the new report “Designing CPUs for next-generation supercomputing.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

MV 950 Toy: an emulator of the Metrovick 950, the first commercial transistor computer

22 September 2025 at 11:05

After researching the first commercial transistor computer, the British Metrovick 950, Nina Kalinina wrote an emulator, a simple assembler, and some additional "toys" (her word) so we can enjoy this machine today. First, what, exactly, is the Metrovick 950?

Metrovick 950, the first commercial transistor computer, is an early British computer, released in 1956. It is a direct descendant of the Manchester Baby (1948), the first electronic stored-program computer ever.

↫ Nina Kalinina

The Baby, formally known as the Small-Scale Experimental Machine, laid the foundation for the Manchester Mark I (1949), which found commercial success as the Ferranti Mark I. A few years later, Manchester University built a variant of the Mark I that used magnetic drum memory instead of Williams tubes and transistors instead of valves. This computer was called the Manchester Transistor Computer (1955). Engineers from Metropolitan-Vickers released a streamlined, somewhat simplified version of the Transistor Computer as the Metrovick 950.

The emulator she developed is "only" compatible at the source code level, and emulates "the CPU, a teleprinter with a paper tape punch/reader, a magnetic tape storage device, and a plotter" at 200-300 operations per second. It's complete enough that you can play Lunar Lander on it – because is a computer you can't play games on really a computer?
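
To give a rough idea of what an interpreter-style emulator paced that slowly looks like, here's a minimal, purely illustrative sketch in Python – a made-up accumulator machine, not Nina's actual implementation – with its main loop throttled to about 250 operations per second:

    import time

    # Purely illustrative, not Nina's code: a made-up accumulator machine whose
    # main loop is throttled to roughly 250 operations per second, in the same
    # spirit as the 200-300 ops/sec figure quoted above.
    PROGRAM = [("LOAD", 7), ("ADD", 35), ("PRINT", None), ("HALT", None)]
    TARGET_OPS_PER_SECOND = 250

    def run(program):
        accumulator = 0
        program_counter = 0
        delay = 1.0 / TARGET_OPS_PER_SECOND
        while True:
            opcode, operand = program[program_counter]
            program_counter += 1
            if opcode == "LOAD":
                accumulator = operand
            elif opcode == "ADD":
                accumulator += operand
            elif opcode == "PRINT":
                print(accumulator)   # stands in for the teleprinter output
            elif opcode == "HALT":
                break
            time.sleep(delay)        # pacing: emulate a slow 1950s machine

    run(PROGRAM)

The real emulator naturally has to model the 950's actual instruction set and peripherals, but the basic fetch-decode-execute loop of most simple emulators looks broadly like this.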

Nina didn’t just create this emulator and its related components, but also wrote a ton of documentation to help you understand the machine and to get started. There’s an introduction to programming and using the Metrovick 950 emulator, additional notes on programming the emulator, and much more. She also posted a long thread on Fedi with a ton more details and background information, which is a great read, as well.

This is amazing work, and it will appeal not just to programmers interested in ancient computers, but also to historians and to people who really put the retro in retrocomputing.

History of the GEM desktop environment

21 September 2025 at 10:18

The 1980s saw a flurry of graphical user interfaces pop up, almost all of them made in some way by people who had seen the work done at Xerox. Today's topic is no exception – GEM was developed by Lee Jay Lorenzen, who worked at Xerox and wanted to create a cheaper, less resource-intensive alternative to the Xerox Star, something he got to do at DRI after leaving Xerox. His work was then shown off to Atari, who were interested in using it.

The entire situation was pretty hectic for a while: DRI's graphics group worked on the PC version of GEM on MS-DOS; Atari developers were porting it to Apple Lisas running CP/M-68K; and Loveman was building GEMDOS. Against all odds, they succeeded. The operating system for the Atari ST, consisting of GEM running on top of GEMDOS, was named TOS, which simply meant "the operating system", although many believed the "T" actually stood for "Tramiel".

The Atari 520 ST, soon nicknamed "Jackintosh", was introduced at the 1985 Consumer Electronics Show in Las Vegas and became an immediate hit. GEM ran smoothly on the ST's powerful hardware, and there were no clones to worry about. Atari developed its branch of GEM independently of Digital Research until 1993, when the Atari ST line of computers was discontinued.

↫ Nemanja Trifunovic at Programming at the right level

Other than through articles like these and the occasional virtual machine, I have no experience with the various failed graphical user interfaces of the 1980s – I was simply too young at the time. Even today, though, it's easy to see how all of them trace directly back to the work done at Xerox, and just how much we owe to the people who worked there.

Now that the technology industry is as massive as it is, and the stakes are so high, it's unlikely we'll ever see a place like Xerox PARC again. Everything is secretive now, and if a line of research doesn't obviously lead to massive short-term gains, it's canned before it even starts. The golden age of wild, random computer research without a profit motive is clearly behind us, and that's sad.

A gentle introduction to CP/M

2 September 2025 at 11:06

For an operating system that was once incredibly popular and expected to remain a standard for a long time to come, it's remarkable how little experience most people have with CP/M. In fact, many conventions and historical limitations you might be aware of – like the 8.3 filename convention of DOS, illustrated in the short sketch at the end of this post – come straight from CP/M, as it influenced DOS considerably. It's quite easy to emulate CP/M today, but it's just old and different enough that getting into it can be a bit confusing – and that's where Eerie Linux's introduction to CP/M comes into play.

This article is just what the headline promises: an introduction to the CP/M operating system. No previous knowledge of 1970s and early '80s operating systems is required. However, some familiarity with Linux or a BSD-style operating system is assumed, as the setup process suggested here involves using a package manager and command-line tools. But why explore CP/M in the 2020s? There are (at least) two good reasons: 1) historical education, and 2) gaining a better understanding of how computers actually work.

↫ Eerie Linux

This article is a great way to get up and running with CP/M fairly quickly, and I intend to do just that when I find some time to mess around with it. What are some of the core, crucial applications one should try on CP/M – the things people would actually have been using back when CP/M was in everyday use?
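
As for the 8.3 convention mentioned above: CP/M's directory entries reserve eight bytes for the file name and three for the file type, which is exactly the limit DOS later inherited. Here's a small, purely illustrative Python sketch (mine, not something from the Eerie Linux article) that checks whether a name fits that pattern – note that real CP/M also accepts a handful of punctuation characters, so the letters-and-digits character class below is a simplification:

    import re

    # Illustrative only: approximate check for a CP/M/DOS-style 8.3 file name --
    # up to eight characters for the name, plus an optional extension of up to
    # three. Real CP/M also allows some punctuation; we keep it simple here.
    EIGHT_DOT_THREE = re.compile(r"^[A-Z0-9]{1,8}(\.[A-Z0-9]{1,3})?$")

    def is_8_3(name: str) -> bool:
        return bool(EIGHT_DOT_THREE.match(name.upper()))

    print(is_8_3("PIP.COM"))           # True  -- PIP was CP/M's file-copying utility
    print(is_8_3("READ.ME"))           # True
    print(is_8_3("LONGFILENAME.TXT"))  # False -- the name part exceeds eight characters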
