Reading view

Securing VMware workloads in regulated industries

At a regional hospital, a cardiac patient’s lab results sit behind layers of encryption, accessible to his surgeon but shielded from those without strictly need-to-know status. Across the street at a credit union, a small business owner anxiously awaits the all-clear for a wire transfer, unaware that fraud detection systems have flagged it for further review.

Such scenarios illustrate how companies in regulated industries juggle competing directives: Move data and process transactions quickly enough to save lives and support livelihoods, but carefully enough to maintain ironclad security and satisfy regulatory scrutiny.

Organizations subject to such oversight walk a fine line every day. And recently, a number of curveballs have thrown off that hard-won equilibrium. Agencies are ramping up oversight thanks to escalating data privacy concerns; insurers are tightening underwriting and requiring controls like MFA and privileged-access governance as a condition of coverage. Meanwhile, the shifting VMware landscape has introduced more complexity for IT teams tasked with planning long-term infrastructure strategies. 

Download the full article

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Harnessing human-AI collaboration for an AI roadmap that moves beyond pilots

The past year has marked a turning point in the corporate AI conversation. After a period of eager experimentation, organizations are now confronting a more complex reality: While investment in AI has never been higher, the path from pilot to production remains elusive. Three-quarters of enterprises remain stuck in experimentation mode, despite mounting pressure to convert early tests into operational gains.

“Most organizations can suffer from what we like to call PTSD, or process technology skills and data challenges,” says Shirley Hung, partner at Everest Group. “They have rigid, fragmented workflows that don’t adapt well to change, technology systems that don’t speak to each other, talent that is really immersed in low-value tasks rather than creating high impact. And they are buried in endless streams of information, but no unified fabric to tie it all together.”

The central challenge, then, lies in rethinking how people, processes, and technology work together.

Across industries as different as customer experience and agricultural equipment, the same pattern is emerging: Traditional organizational structures—centralized decision-making, fragmented workflows, data spread across incompatible systems—are proving too rigid to support agentic AI. To unlock value, leaders must rethink how decisions are made, how work is executed, and what humans should uniquely contribute.

“It is very important that humans continue to verify the content. And that is where you’re going to see more energy being put into,” Ryan Peterson, EVP and chief product officer at Concentrix.

Much of the conversation centered on what can be described as the next major unlock: operationalizing human-AI collaboration. Rather than positioning AI as a standalone tool or a “virtual worker,” this approach reframes AI as a system-level capability that augments human judgment, accelerates execution, and reimagines work from end to end. That shift requires organizations to map the value they want to create; design workflows that blend human oversight with AI-driven automation; and build the data, governance, and security foundations that make these systems trustworthy.

“My advice would be to expect some delays because you need to make sure you secure the data,” says Heidi Hough, VP for North America aftermarket at Valmont. “As you think about commercializing or operationalizing any piece of using AI, if you start from ground zero and have governance at the forefront, I think that will help with outcomes.”

Early adopters are already showing what this looks like in practice: starting with low-risk operational use cases, shaping data into tightly scoped enclaves, embedding governance into everyday decision-making, and empowering business leaders, not just technologists, to identify where AI can create measurable impact. The result is a new blueprint for AI maturity grounded in reengineering how modern enterprises operate.

“Optimization is really about doing existing things better, but reimagination is about discovering entirely new things that are worth doing,” says Hung.

Watch the webcast.

This webcast is produced in partnership with Concentrix.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Delivering securely on data and AI strategy 

Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data. This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. 

Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance. As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response against the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. “I’m passionate about cybersecurity not slowing us down,” says Melody Hildebrandt, chief technology officer at Fox Corporation, “but I also own cybersecurity strategy. So I’m also passionate about us not introducing security vulnerabilities.” 

That’s getting more challenging, says Nithin Ramachandran, who is global vice president for data and AI at industrial and consumer products manufacturer 3M. “Our experience with generative AI has shown that we need to be looking at security differently than before,” he says. “With every tool we deploy, we look not just at its functionality but also its security posture. The latter is now what we lead with.” 

Our survey of 800 technology executives (including 100 chief information security officers), conducted in June 2025, shows that many organizations struggle to strike this balance. 

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Accelerating VMware migrations with a factory model approach

In 1913, Henry Ford cut the time it took to build a Model T from 12 hours to just over 90 minutes. He accomplished this feat through a revolutionary breakthrough in process design: Instead of skilled craftsmen building a car from scratch by hand, Ford created an assembly line where standardized tasks happened in sequence, at scale.

The IT industry is having a similar moment of reinvention. Across operations from software development to cloud migration, organizations are adopting an AI-infused factory model that replaces manual, one-off projects with templated, scalable systems designed for speed and cost-efficiency.

Take VMware migrations as an example. For years, these projects resembled custom production jobs—bespoke efforts that often took many months or even years to complete. Fluctuating licensing costs added a layer of complexity, just as business leaders began pushing for faster modernization to make their organizations AI-ready. That urgency has become nearly universal: According to a recent IDC report, six in 10 organizations evaluating or using cloud services say their IT infrastructure requires major transformation, while 82% report their cloud environments need modernization.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Moving toward LessOps with VMware-to-cloud migrations

Today’s IT leaders face competing mandates to do more (“make us an ‘AI-first’ enterprise—yesterday”) with less (“no new hires for at least the next six months”).

VMware has become a focal point of these dueling directives. It remains central to enterprise IT, with 80% of organizations using VMware infrastructure products. But shifting licensing models are prompting teams to reconsider how they manage and scale these workloads, often on tighter budgets.

For many organizations, the path forward involves adopting a LessOps model, an operational strategy that makes hybrid environments manageable without increasing headcount. This operational philosophy minimizes human intervention through extensive automation and selfservice capabilities while maintaining governance and compliance.

In practice, VMware-to-cloud migrations create a “two birds, one stone” opportunity. They present a practical moment to codify the automation and governance practices LessOps depends on—laying the groundwork for a leaner, more resilient IT operating model.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Designing digital resilience in the agentic AI era

Digital resilience—the ability to prevent, withstand, and recover from digital disruptions—has long been a strategic priority for enterprises. With the rise of agentic AI, the urgency for robust resilience is greater than ever.

Agentic AI represents a new generation of autonomous systems capable of proactive planning, reasoning, and executing tasks with minimal human intervention. As these systems shift from experimental pilots to core elements of business operations, they offer new opportunities but also introduce new challenges when it comes to ensuring digital resilience. That’s because the autonomy, speed, and scale at which agentic AI operates can amplify the impact of even minor data inconsistencies, fragmentation, or security gaps.

While global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events. This lack of confidence, coupled with the profound complexity introduced by agentic AI’s autonomous decision-making and interaction with critical infrastructure, requires a reimagining of digital resilience.

Organizations are turning to the concept of a data fabric—an integrated architecture that connects and governs information across all business layers. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can empower both human teams and agentic AI systems to sense risks, prevent problems before they occur, recover quickly when they do, and sustain operations.

Machine data: A cornerstone of agentic AI and digital resilience

Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI demands deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications.

To put agentic AI to use in driving digital resilience, it must have seamless, real-time access to this data flow. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems rely on machine data to understand context, simulate outcomes, and adapt continuously. This makes machine data oversight a cornerstone of digital resilience.

“We often describe machine data as the heartbeat of the modern enterprise,” says Hathi. “Agentic AI systems are powered by this vital pulse, requiring real-time access to information. It’s essential that these intelligent agents operate directly on the intricate flow of machine data and that AI itself is trained using the very same data stream.” 

Few organizations are currently achieving the level of machine data integration required to fully enable agentic systems. This not only narrows the scope of possible use cases for agentic AI, but, worse, it can also result in data anomalies and errors in outputs or actions. Natural language processing (NLP) models designed prior to the development of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without providing models with a foundational fluency in machine data. 

For many companies, keeping up with the dizzying pace at which AI is progressing has been a major challenge. “In some ways, the speed of this innovation is starting to hurt us, because it creates risks we’re not ready for,” says Hathi. “The trouble is that with agentic AI’s evolution, relying on traditional LLMs trained on human text, audio, video, or print data doesn’t work when you need your system to be secure, resilient, and always available.”

Designing a data fabric for resilience

To address these shortcomings and build digital resilience, technology leaders should pivot to what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management. 

“Once you have a single view, you can do all these things that are autonomous and agentic,” says Hathi. “You have far fewer blind spots. Decision-making goes much faster. And the unknown is no longer a source of fear because you have a holistic system that’s able to absorb these shocks and disruption without losing continuity,” he adds.

To create this unified system, data teams must first break down departmental silos in how data is shared, says Hathi. Then, they must implement a federated data architecture—a decentralized system where autonomous data sources work together as a single unit without physically merging—to create a unified data source while maintaining governance and security. And finally, teams must upgrade data platforms to ensure this newly unified view is actionable for agentic AI. 

During this transition, teams may face technical limitations if they rely on traditional platforms modeled on structured data—that is, mostly quantitative information such as customer records or financial transactions that can be organized in a predefined format (often in tables) that is easy to query. Instead, companies need a platform that can also manage streams of unstructured data such as system logs, security events, and application traces, which lack uniformity and are often qualitative rather than quantitative. Analyzing, organizing, and extracting insights from these kinds of data requires more advanced methods enabled by AI.

Harnessing AI as a collaborator

AI itself can be a powerful tool in creating the data fabric that enables AI systems. AI-powered tools can, for example, quickly identify relationships between disparate data—both structured and unstructured—automatically merging them into one source of truth. They can detect and correct errors and employ NLP to tag and categorize data to make it easier to find and use. 

Agentic AI systems can also be used to augment human capabilities in detecting and deciphering anomalies in an enterprise’s unstructured data streams. These are often beyond human capacity to spot or interpret at speed, leading to missed threats or delays. But agentic AI systems, designed to perceive, reason, and act autonomously, can plug the gap, delivering higher levels of digital resilience to an enterprise.

“Digital resilience is about more than withstanding disruptions,” says Hathi. “It’s about evolving and growing over time. AI agents can work with massive amounts of data and continuously learn from humans who provide safety and oversight. This is a true self-optimizing system.”

Humans in the loop

Despite its potential, agentic AI should be positioned as assistive intelligence. Without proper oversight, AI agents could introduce application failures or security risks.

Clearly defined guardrails and maintaining humans in the loop is “key to trustworthy and practical use of AI,” Hathi says. “AI can enhance human decision-making, but ultimately, humans are in the driver’s seat.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Scaling innovation in manufacturing with AI

Manufacturing is getting a major system upgrade. As AI amplifies existing technologies—like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)—it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization.

Digital twins—physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory—allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail.

“AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines,” says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft. “This is allowing manufacturers to move beyond isolated monitoring toward much wider insights.”

A digital twin of a bottling line, for example, can integrate one-dimensional shop-floor telemetry, two-dimensional enterprise data, and three-dimensional immersive modeling into a single operational view of the entire production line to improve efficiency and reduce costly downtime. Many high-speed industries face downtime rates as high as 40%, estimates Jon Sobel, co-founder and chief executive officer of Sight Machine, an industrial AI company that partners with Microsoft and NVIDIA to transform complex data into actionable insights. By tracking micro-stops and quality metrics via digital twins, companies can target improvements and adjustments with greater precision, saving millions in once-lost productivity without disrupting ongoing operations.

AI offers the next opportunity. Sircar estimates that up to 50% of manufacturers are currently deploying AI in production. This is up from 35% of manufacturers surveyed in a 2024 MIT Technology Review Insights report who said they have begun to put AI use cases into production. Larger manufacturers with more than $10 billion in revenue were significantly ahead, with 77% already deploying AI use cases, according to the report.

“Manufacturing has a lot of data and is a perfect use case for AI,” says Sobel. “An industry that has been seen by some as lagging when it comes to digital technology and AI may be in the best position to lead. It’s very unexpected.”

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Networking for AI: Building the foundation for real-time intelligence

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways.

From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day.

To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered around a platform where tournament staff could access data visualization supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds.

It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.

As businesses move toward distributed, real-time AI applications, tomorrow’s networks will need to parse even more massive volumes of information at ever more lightning-fast speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: Inference-ready networks are a make-or-break factor for turning AI’s promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775  IT leaders, 45% said they could run real-time data pushes and pulls for innovation. It’s a noticeable change over last year’s numbers (just 7% reported having such capabilities in 2024), but there’s still work to be done to connect data collection with real-time decision-making.

The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision.

“There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.”

Networks built for AI, therefore, must operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. One of these differences is AI’s distributed nature, which affects the seamless flow of data.

The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard and provided staff with an instantaneous view of activity across the grounds.

“The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action. So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.”

To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it’s even more critical in contexts where safety is on the line—for instance a self-driving car making a split-second decision to accelerate or brake.

In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending the data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated. Data-intensive training may still occur in the cloud, but inferencing happens on-site.

This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for enhanced speed, security, sovereignty, and cost reasons. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round-trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains.

There’s data to back up Green’s projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift. According to IDC, the AI market for infrastructure is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: Modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable.

“Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.”

At HPE for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions.

The concept broadly known as AIOps (or AI-driven IT operations) is changing how enterprise networks are managed across industries. Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves.

That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams. “AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.”

Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate those who pilot from those who scale AI.

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Realizing value with AI inference at scale and in production

Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line.

Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes “the true value of AI lies in inference”. Inference is where AI earns its keep. It’s the operational layer that puts all that training to use in real-world workflows. Partridge elaborates, “The phrase we use for this is ‘trusted AI inferencing at scale and in production,'” he says. “That’s where we think the biggest return on AI investments will come from.”

Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: While nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation.

Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully.

Trust as a prerequisite for scalable, high-stakes AI

Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.

Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.”

Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. “When things go wrong, trust goes down, productivity gains are not reached, and the outcome we’re  looking for is not achieved,” he says.

On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains can increase. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—”a 24/7 member of the team they didn’t have before,” says Partridge.

The shift to data-centric thinking and rise of the AI factory

In the first AI wave, companies rushed to hire data scientists and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture.

“Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence.

This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence–the model itself–is truly yours? And how much of the input–the data–is uniquely yours, from your customers, operations, or market?” says Reichenbach.

These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations. To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (see figure):

Source: HPE, 2025

  • Run: Accessing an external, pretrained model via an interface or API; organizations don’t own the model or the data. Implementation requires strong security and governance. It also requires establishing a center of excellence that makes and communicates decisions about AI usage.
  • RAG (retrieval augmented generation): Using external, pre-trained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.
  • Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments, and often high-performance systems.
  • Regulate: Leveraging custom models trained on external data, requiring the same scalable setup as Riches, but with added focus on legal and regulatory compliance for handling sensitive, non-owned data with extreme caution.

Importantly, these quadrants are not mutually exclusive. Partridge notes that most organizations—including HPE itself—operate across many of the quadrants. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge’s catchphrase about inferencing—”at scale”— speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.

“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.”

In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.”

For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle.

The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance. Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to.

Rather than shut down experimentation, it is now IT’s mandate to bring structure to it. And enterprises must architect a data platform strategy that brings together enterprise data with guardrails, governance framework, and accessibility to feed AI. Also, it’s critical to keep standardizing infrastructure (such as private cloud AI platforms), protecting data integrity, and safeguarding brand trust, all while enabling the speed and flexibility that AI applications demand. These are the requirements for reaching the final milestone: AI that’s truly in production.

For teams on the path to that goal, Reichenbach distills what success requires. “It comes down to knowing where you play: When to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control,” says Reichenbach. “The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.”

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Improving VMware migration workflows with agentic AI

For years, many chief information officers (CIOs) looked at VMware-to-cloud migrations with a wary pragmatism. Manually mapping dependencies and rewriting legacy apps mid-flight was not an enticing, low-lift proposition for enterprise IT teams.

But the calculus for such decisions has changed dramatically in a short period of time. Following recent VMware licensing changes, organizations are seeing greater uncertainty around the platform’s future. At the same time, cloud-native innovation is accelerating. According to the CNCF’s 2024 Annual Survey, 89% of organizations have already adopted at least some cloud-native techniques, and the share of companies reporting nearly all development and deployment as cloud-native grew sharply from 2023 to 2024 (20% to 24%). And market research firm IDC reports that cloud providers have become top strategic partners for generative AI initiatives.

This is all happening amid escalating pressure to innovate faster and more cost-effectively to meet the demands of an AI-first future. As enterprises prepare for that inevitability, they are facing compute demands that are difficult, if not prohibitively expensive, to maintain exclusively on-premises.

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Reimagining cybersecurity in the era of AI and quantum

AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate.

The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. From reconnaissance to ransomware, cybercriminals can automate attacks faster than ever before with AI. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries.

But AI isn’t the only force shaping the threat landscape. Quantum computing has the potential to seriously undermine current encryption standards if developed unchecked. Quantum algorithms can solve the mathematical problems underlying most modern cryptography, particularly public-key systems like RSA and Elliptic Curve, widely used for secure online communication, digital signatures, and cryptocurrency.

“We know quantum is coming. Once it does, it will force a change in how we secure data across everything, including governments, telecoms, and financial systems,” says Peter Bailey, senior vice president and general manager of Cisco’s security business.

“Most organizations are understandably focused on the immediacy of AI threats,” says Bailey. “Quantum might sound like science fiction, but those scenarios are coming faster than many realize. It’s critical to start investing now in defenses that can withstand both AI and quantum attacks.”

Critical to this defense is a zero trust approach to cybersecurity, which assumes no user or device can be inherently trusted. By enforcing continuous verification, zero trust enables constant monitoring and ensures that any attempts to exploit vulnerabilities are quickly detected and addressed in real time. This approach is technology-agnostic and creates a resilient framework even in the face of an ever-changing threat landscape.

Putting up AI defenses 

AI is lowering the barrier to entry for cyberattacks, enabling hackers even with limited skills or resources to infiltrate, manipulate, and exploit the slightest digital vulnerability.

Nearly three-quarters (74%) of cybersecurity professionals say AI-enabled threats are already having a significant impact on their organization, and 90% anticipate such threats in the next one to two years. 

“AI-powered adversaries have advanced techniques and operate at machine speed,” says Bailey. “The only way to keep pace is to use AI to automate response and defend at machine speed.”

To do this, Bailey says, organizations must modernize systems, platforms, and security operations to automate threat detection and response—processes that have previously relied on human rule-writing and reaction times. These systems must adapt dynamically as environments evolve and criminal tactics change.

At the same time, companies must strengthen the security of their AI models and data to reduce exposure to manipulation from AI-enabled malware. Such risks could include, for instance, prompt injections, where a malicious user crafts a prompt to manipulate an AI model into performing unintended actions, bypassing its original instructions and safeguards.

Agentic AI further ups the ante, with hackers able to use AI agents to automate attacks and make tactical decisions without constant human oversight. “Agentic AI has the potential to collapse the cost of the kill chain,” says Bailey. “That means everyday cybercriminals could start executing campaigns that today only well-funded espionage operations can afford.”

Organizations, in turn, are exploring how AI agents can help them stay ahead. Nearly 40% of companies expect agentic AI to augment or assist teams over the next 12 months, especially in cybersecurity, according to Cisco’s 2025 AI Readiness Index. Use cases include AI agents trained on telemetry, which can identify anomalies or signals from machine data too disparate and unstructured to be deciphered by humans. 

Calculating the quantum threat

As many cybersecurity teams focus on the very real AI-driven threat, quantum is waiting on the sidelines. Almost three-quarters (73%) of US organizations surveyed by KPMG say they believe it is only a matter of time before cybercriminals are using quantum to decrypt and disrupt today’s cybersecurity protocols. And yet, the majority (81%) also admit they could do more to ensure that their data remains secure.

Companies are right to be concerned. Threat actors are already carrying out harvest now, decrypt later attacks, stockpiling sensitive encrypted data to crack once quantum technology matures. Examples include state-sponsored actors intercepting government communications and cybercriminal networks storing encrypted internet traffic or financial records. 

Large technology companies are among the first to roll out quantum defenses. For example, Apple is using cryptography protocol PQ3 to defend against harvest now, decrypt later attacks on its iMessage platform. Google is testing post-quantum cryptography (PQC)—which is resistant to attacks from both quantum and classical computers—in its Chrome browser. And Cisco “has made significant investments in quantum-proofing our software and infrastructure,” says Bailey. “You’ll see more enterprises and governments taking similar steps over the next 18 to 24 months,” he adds. 

As regulations like the US Quantum Computing Cybersecurity Preparedness Act lay out requirements for mitigating against quantum threats, including standardized PQC algorithms by the National Institute of Standards and Technology, a wider range of organizations will start preparing their own quantum defenses. 

For organizations beginning that journey, Bailey outlines two key actions. First, establish visibility. “Understand what data you have and where it lives,” he says. “Take inventory, assess sensitivity, and review your encryption keys, rotating out any that are weak or outdated.”

Second, plan for migration. “Next, assess what it will take to support post-quantum algorithms across your infrastructure. That means addressing not just the technology, but also the process and people implications,” Bailey says.

Adopting proactive defense 

Ultimately, the foundation for building resilience against both AI and quantum is a zero trust approach, says Bailey. By embedding zero trust access controls across users, devices, business applications, networks, and clouds, this approach grants only the minimum access required to complete a task and enables continuous monitoring. It can also minimize the attack surface by confining a potential threat to an isolated zone, preventing it from accessing other critical systems.

Into this zero trust architecture, organizations can integrate specific measures to defend against AI and quantum risks. For instance, quantum-immune cryptography and AI-powered analytics and security tools can be used to identify complex attack patterns and automate real-time responses. 

“Zero trust slows down attacks and builds resilience,” Bailey says. “It ensures that even if a breach occurs, the crown jewels stay protected and operations can recover quickly.”

Ultimately, companies should not wait for threats to emerge and evolve. They must get ahead now. “This isn’t a what-if scenario; it’s a when,” says Bailey. “Organizations that invest early will be the ones setting the pace, not scrambling to catch up.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

From vibe coding to context engineering: 2025 in software development

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents. 

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations 

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.

Unsurprisingly given the implied imprecision of vibe-based coding, antipatterns have been proliferating. We’ve once again noted, for instance, complacency with AI generated code on the latest volume of the Technology Radar, but it’s also worth pointing out that early ventures into vibe coding also exposed a degree of complacency about what AI models can actually handle — users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI 

This is one of the drivers behind increasing interest in engineering context. We’re well aware of its importance, working with coding assistants like Claude Code and Augment Code. Providing necessary context—or knowledge priming—is crucial. It ensures outputs are more consistent and reliable, which will ultimately lead to better software that needs less work — reducing rewrites and potentially driving productivity.

When effectively prepared, we’ve seen good results when using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don’t have full access to source code

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario, we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop of changes that have happened over recent months is the growth of agents and agentic systems — both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach.

Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts. 

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards embed. It would be remiss to not mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way with standardizing how agents interact with one another. 

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together.

There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things. 

Once again, it will be down to them to experiment, collaborate, and learn — the future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

  •  

Building a high performance data and AI organization (2nd edition)

Four years is a lifetime when it comes to artificial intelligence. Since the first edition of this study was published in 2021, AI’s capabilities have been advancing at speed, and the advances have not slowed since generative AI’s breakthrough. For example, multimodality— the ability to process information not only as text but also as audio, video, and other unstructured formats—is becoming a common feature of AI models. AI’s capacity to reason and act autonomously has also grown, and organizations are now starting to work with AI agents that can do just that.

Amid all the change, there remains a constant: the quality of an AI model’s outputs is only ever as good as the data
that feeds it. Data management technologies and practices have also been advancing, but the second edition of this study suggests that most organizations are not leveraging those fast enough to keep up with AI’s development. As a result of that and other hindrances, relatively few organizations are delivering the desired business results from their AI strategy. No more than 2% of senior executives we surveyed rate their organizations highly in terms of delivering results from AI.

To determine the extent to which organizational data performance has improved as generative AI and other AI advances have taken hold, MIT Technology Review Insights surveyed 800 senior data and technology executives. We also conducted in-depth interviews with 15 technology and business leaders.

Key findings from the report include the following:

Few data teams are keeping pace with AI. Organizations are doing no better today at delivering on data strategy than in pre-generative AI days. Among those surveyed in 2025, 12% are self-assessed data “high achievers” compared with 13% in 2021. Shortages of skilled talent remain a constraint, but teams also struggle with accessing fresh data, tracing lineage, and dealing with security complexity—important requirements for AI success.

Partly as a result, AI is not fully firing yet. There are even fewer “high achievers” when it comes to AI. Just 2% of respondents rate their organizations’ AI performance highly today in terms of delivering measurable business results. In fact, most are still struggling to scale generative AI. While two thirds have deployed it, only 7% have done so widely.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Finding return on AI investments across industries

The market is officially three years post ChatGPT and many of the pundit bylines have shifted to using terms like “bubble” to suggest reasons behind generative AI not realizing material returns outside a handful of technology suppliers. 

In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear and measurable ROI. McKinsey earlier published a similar trend indicating that agentic AI would be the way forward to achieve huge operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended CIOs stop worrying about AI’s return on investment because measuring gains is difficult and if they were to try, the measurements would be wrong. 

This places technology leaders in a precarious position–robust tech stacks already sustain their business operations, so what is the upside to introducing new technology? 

For decades, deployment strategies have followed a consistent cadence where tech operators avoid destabilizing business-critical workflows to swap out individual components in tech stacks. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk. 

While the price might increase when a new buyer takes over mature middleware, the cost of losing part of your enterprise data because you are mid-way through transitioning your enterprise to a new technology is way more severe than paying a higher price for a stable technology that you’ve run your business on for 20 years.

So, how do enterprises get a return on investing in the latest tech transformation?

First principle of AI: Your data is your value

Most of the articles about AI data relate to engineering tasks to ensure that an AI model infers against business data in repositories that represent past and present business realities. 

However, one of the most widely-deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments into the model. This step narrows an AI model’s range to the content of the uploaded files, accelerating accurate response times and reducing the number of prompts required to get the best answer. 

This tactic relies upon sending your proprietary business data into an AI model, so there are two important considerations to take in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without getting access to non-public data, like your business’ data. 

Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet. 

Most enterprises would automatically prioritize confidentiality of their data and design business workflows to maintain trade secrets. From an economic value point of view, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchase/onboarding as a typical supplier/procurement exercise, think through the potential to realize mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.

Second principle of AI: Boring by design

According to Information is Beautiful, in 2024 alone, 182 new generative AI models were introduced to the market. When GPT5 came into the market in 2025, many of the models from 12 to 24 months prior were rendered unavailable until subscription customers threatened to cancel. Their previously stable AI workflows were built on models that no longer worked. Their tech providers thought the customers would be excited about the newest models and did not realize the premium that business workflows place on stability. Video gamers are happy to upgrade their custom builds throughout the entire lifespan of the system components in their gaming rigs, and will upgrade the entire system just to play a newly released title. 

However, behavior does not translate to business run rate operations. While many employees may use the latest models for document processing or generating content, back-office operations can’t sustain swapping a tech stack three times a week to keep up with the latest model drops. The back-office work is boring by design.

The most successful AI deployments have focused on deploying AI on business problems unique to their business, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense audits from having to manually cross check individual reports but putting the final decision in a humans’ responsibility zone combines the best of both. 

The important point is that none of these tasks require constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from using direct model APIs can offer additional long-term stability while maintaining options to update or upgrade the underlying engines at the pace of your business.

Third principle of AI: Mini-van economics

The best way to avoid upside-down economics is to design systems to align to the users rather than vendor specs and benchmarks. 

Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on new supplier-led benchmarks rather than starting their AI journey from what their business can consume, at what pace, on the capabilities they have deployed today. 

While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack ample trunk space for groceries. Keep in mind that every remote server and model touched by a user layers on the costs and design for frugality by reconfiguring workflows to minimize spending on third-party services. 

Too many companies have found that their customer support AI workflows add millions of dollars of operational run rate costs and end up adding more development time and cost to update the implementation for OpEx predictability. Meanwhile, the companies that decided that a system running at the pace a human can read—less than 50 tokens per second—were able to successfully deploy scaled-out AI applications with minimal additional overhead.

There are so many aspects of this new automation technology to unpack—the best guidance is to start practical, design for independence in underlying technology components to keep from disrupting stable applications long term, and to leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers’ goals.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

  •  

Redefining data engineering in the age of AI

As organizations weave AI into more of their operations, senior executives are realizing data engineers hold a central role in bringing these initiatives to life. After all, AI only delivers when you have large amounts of reliable and well-managed, high-quality data. Indeed, this report finds that data engineers play a pivotal role in their organizations as enablers of AI. And in so doing, they are integral to the overall success of the business.

According to the results of a survey of 400 senior data and technology executives, conducted by MIT Technology Review Insights, data engineers have become influential in areas that extend well beyond their traditional remit as pipeline managers. The technology is also changing how data engineers work, with the balance of their time shifting from core data management tasks toward AI-specific activities.

As their influence grows, so do the challenges data engineers face. A major one is dealing with greater complexity, as more advanced AI models elevate the importance of managing unstructured data and real-time pipelines. Another challenge is managing expanding workloads; data engineers are being asked to do more today than ever before, and that’s not likely to change.

Key findings from the report include the following:

  • Data engineers are integral to the business. This is the view of 72% of the surveyed technology leaders—and 86% of those in the survey’s biggest organizations, where AI maturity is greatest. It is a view held especially strongly among executives in financial services and manufacturing companies.
  • AI is changing everything data engineers do. The share of time data engineers spend each day on AI projects has nearly doubled in the past two years, from an average of 19% in 2023 to 37% in 2025, according to our survey. Respondents expect this figure to continue rising to an average of 61% in two years’ time. This is also contributing to bigger data engineer workloads; most respondents (77%) see these growing increasingly heavy.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Unlocking the potential of SAF with book and claim in air freight

Used in aviation, book and claim offers companies the ability to financially support the use of SAF even when it is not physically available at their locations.

As companies that ship goods by air or provide air freight related services address a range of climate goals aiming to reduce emissions, the importance of sustainable aviation fuel (SAF) couldn’t be more pronounced. In its neat form, SAF has the potential to reduce life cycle GHG emissions by up to 80% compared to conventional jet fuel.

In this exclusive webcast, leaders discuss the urgency for reducing air freight emissions for freight forwarders and shippers, and reasons why companies should use SAF. They also explain how companies can best make use of the book and claim model to support their emissions reduction strategies.

Learn from the leaders

  • What book and claim is and how companies can use it
  • Why SAF use is so important
  • How freight-forwarders and shippers can both potentially utilise and contribute to the benefits of SAF

Featured speakers

Raman Ojha, President, Shell Aviation. Raman is responsible for Shell’s global aviation business, which supplies fuels, lubricants, and lower carbon solutions, and offers a range of technical services globally. During almost 20 years at Shell, Raman has held leadership positions across a variety of industry sectors, including energy, lubricants, construction, and fertilisers. He has broad experience across both matured markets in the Americas and Europe, as well as developing markets including China, India, and Southeast Asia.  

Bettina Paschke, VP ESG Accounting, Reporting & Controlling, DHL Express. Bettina Paschke leads ESG Accounting, Reporting & Controlling, at DHL Express a division of DHL Group. In her role, she is responsible for ESG, including, EU Taxonomy Reporting, and Carbon Accounting. She has more than 20 years’ experience in Finance. In her role she is driving the Sustainable Aviation Fuel agenda at DHL Express and is engaged in various industry initiatives to allow reliable book and claim transactions.

Christoph Wolff, Chief Executive Officer at Smart Freight Centre. Christoph Wolff is currently the Chief Executive Officer at Smart Freight Centre, leading programs focused on sustainability in freight transport. Prior to this role, Christoph served as the Senior Advisor and Director at ACME Group, a global leader in green energy solutions. With a background in various industries, Christoph has held positions such as Managing Director at European Climate Foundation and Senior Board Advisor at Ferrostaal GmbH. Christoph has also worked at Novatec, Solar Millennium AG, DB Schenker, McKinsey & Company, and served as an Assistant Professor at Northwestern University – Kellogg School of Management. Christoph holds multiple degrees from RWTH Aachen University and ETH Zürich, along with ongoing executive education at the University of Michigan.

Watch the webcast.

This discussion is presented by MIT Technology Review Insights in association with Avelia. Avelia is a Shell owned solution and brand that was developed with support from Amex GBT, Accenture and Energy Web Foundation. The views from individuals not affiliated with Shell are their own and not those of Shell PLC or its affiliates. Cautionary note | Shell Global

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Not all offerings are available in all jurisdictions. Depending on jurisdiction and local laws, Shell may offer the sale of Environmental Attributes (for which subject to applicable law and consultation with own advisors, buyers might be able to use such Environmental Attributes for their own emission reduction purposes) and/or Environmental Attribute Information (pursuant to which buyers are helping subsidize the use of SAF and lower overall aviation emissions at designated airports but no emission reduction claims may be made by buyers for their own emissions reduction purposes). Different offerings have different forms of contracts, and no assumptions should be made about a particular offering without reading the specific contractual language applicable to such offering.

  •  

Future-proofing business capabilities with AI technologies

Artificial intelligence has always promised speed, efficiency, and new ways of solving problems. But what’s changed in the past few years is how quickly those promises are becoming reality. From oil and gas to retail, logistics to law, AI is no longer confined to pilot projects or speculative labs. It is being deployed in critical workflows, reducing processes that once took hours to just minutes, and freeing up employees to focus on higher-value work.

“Business process automation has been around a long while. What GenAI and AI agents are allowing us to do is really give superpowers, so to speak, to business process automation.” says chief AI architect at Cloudera, Manasi Vartak.

Much of the momentum is being driven by two related forces: the rise of AI agents and the rapid democratization of AI tools. AI agents, whether designed for automation or assistance, are proving especially powerful at speeding up response times and removing friction from complex workflows. Instead of waiting on humans to interpret a claim form, read a contract, or process a delivery driver’s query, AI agents can now do it in seconds, and at scale. 

At the same time, advances in usability are putting AI into the hands of nontechnical staff, making it easier for employees across various functions to experiment, adopt and adapt these tools for their own needs.

That doesn’t mean the road is without obstacles. Concerns about privacy, security, and the accuracy of LLMs remain pressing. Enterprises are also grappling with the realities of cost management, data quality, and how to build AI systems that are sustainable over the long term. And as companies explore what comes next—including autonomous agents, domain-specific models, and even steps toward artificial general intelligence—questions about trust, governance, and responsible deployment loom large.

“Your leadership is especially critical in making sure that your business has an AI strategy that addresses both the opportunity and the risk while giving the workforce some ability to upskill such that there’s a path to become fluent with these AI tools,” says principal advisor of AI and modern data strategy at Amazon Web Services, Eddie Kim.

Still, the case studies are compelling. A global energy company cutting threat detection times from over an hour to just seven minutes. A Fortune 100 legal team saving millions by automating contract reviews. A humanitarian aid group harnessing AI to respond faster to crises. Long gone are the days of incremental steps forward. These examples illustrate that when data, infrastructure, and AI expertise come together, the impact is transformative. 

The future of enterprise AI will be defined by how effectively organizations can marry innovation with scale, security, and strategy. That’s where the real race is happening.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Transforming commercial pharma with agentic AI 

Amid the turbulence of the wider global economy in recent years, the pharmaceuticals industry is weathering its own storms. The rising cost of raw materials and supply chain disruptions are squeezing margins as pharma companies face intense pressure—including from countries like the US—to control drug costs. At the same time, a wave of expiring patents threatens around $300 billion in potential lost sales by 2030. As companies lose the exclusive right to sell the drugs they have developed, competitors can enter the market with generic and biosimilar lower-cost alternatives, leading to a sharp decline in branded drug sales—a “patent cliff.” Simultaneously, the cost of bringing new drugs to market is climbing. McKinsey estimates cost per launch is growing 8% each year, reaching $4 billion in 2022. 

In clinics and health-care facilities, norms and expectations are evolving, too. Patients and health-care providers are seeking more personalized services, leading to greater demand for precision drugs and targeted therapies. While proving effective for patients, the complexity of formulating and producing these drugs makes them expensive and restricts their sale to a smaller customer base.

The need for personalization extends to sales and marketing operations too as pharma companies are increasingly needing to compete for the attention of health-care professionals (HCPs). Estimates suggest that biopharmas were able to reach 45% of HCPs in 2024, down from 60% in 2022. Personalization, real-time communication channels, and relevant content offer a way of building trust and reaching HCPs in an increasingly competitive market. But with ever-growing volumes of content requiring medical, legal, and regulatory (MLR) review, companies are struggling to keep up, leading to potential delays and missed opportunities. 

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Turning migration into modernization

In late 2023, a long-trusted virtualization staple became the biggest open question on the enterprise IT roadmap.

Amid concerns of VMware licensing changes and steeper support costs, analysts noticed an exodus mentality. Forrester predicted that one in five large VMware customers would begin moving away from the platform in 2024. A subsequent Gartner community poll found that 74% of respondents were rethinking their VMware relationship in light of recent changes. CIOs contending with pricing hikes and product roadmap opacity face a daunting choice: double‑down on a familiar but costlier stack, or use the disruption to rethink how—and where—critical workloads should run.


“There’s still a lot of uncertainty in the marketplace around VMware,” explains Matt Crognale, senior director, migrations and modernization at cloud modernization firm Effectual, adding that the VMware portfolio has been streamlined and refocused over the past couple of years. “The portfolio has been trimmed down to a core offering focused on the technology versus disparate systems.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Unlocking AI’s full potential requires operational excellence

Talk of AI is inescapable. It’s often the main topic of discussion at board and executive meetings, at corporate retreats, and in the media. A record 58% of S&P 500 companies mentioned AI in their second-quarter earnings calls, according to Goldman Sachs.

But it’s difficult to walk the talk. Just 5% of generative AI pilots are driving measurable profit-and-loss impact, according to a recent MIT study. That means 95% of generative AI pilots are realizing zero return, despite significant attention and investment.

Although we’re nearly three years past the watershed moment of ChatGPT’s public release, the vast majority of organizations are stalling out in AI. Something is broken. What is it?

Date from Lucid’s AI readiness survey sheds some light on the tripwires that are making organizations stumble. Fortunately, solving these problems doesn’t require recruiting top AI talent worth hundreds of millions of dollars, at least for most companies. Instead, as they race to implement AI quickly and successfully, leaders need to bring greater rigor and structure to their operational processes.

Operations are the gap between AI’s promise and practical adoption

I can’t fault any leader for moving as fast as possible with their implementation of AI. In many cases, the existential survival of their company—and their own employment—depends on it. The promised benefits to improve productivity, reduce costs, and enhance communication are transformational, which is why speed is paramount.

But while moving quickly, leaders are skipping foundational steps required for any technology implementation to be successful. Our survey research found that more than 60% of knowledge workers believe their organization’s AI strategy is only somewhat to not at all well aligned with operational capabilities.

AI can process unstructured data, but AI will only create more headaches for unstructured organizations. As Bill Gates said, “The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

Where are the operations gaps in AI implementations? Our survey found that approximately half of respondents (49%) cite undocumented or ad-hoc processes impacting efficiency sometimes; 22% say this happens often or always.

The primary challenge of AI transformation lies not in the technology itself, but in the final step of integrating it into daily workflows. We can compare this to the “last mile problem” in logistics: The most difficult part of a delivery is getting the product to the customer, no matter how efficient the rest of the process is.

In AI, the “last mile” is the crucial task of embedding AI into real-world business operations. Organizations have access to powerful models but struggle to connect them to the people who need to use them. The power of AI is wasted if it’s not effectively integrated into business operations, and that requires clear documentation of those operations.

Capturing, documenting, and distributing knowledge at scale is critical to organizational success with AI. Yet our survey showed only 16% of respondents say their workflows are extremely well-documented. The top barriers to proper documentation are a lack of time, cited by 40% of respondents, and a lack of tools, cited by 30%.

The challenge of integrating new technology with old processes was perfectly illustrated in a recent meeting I had with a Fortune 500 executive. The company is pushing for significant productivity gains with AI, but it still relies on an outdated collaboration tool that was never designed for teamwork. This situation highlights the very challenge our survey uncovered: Powerful AI initiatives can stall if teams lack modern collaboration and documentation tools.

This disconnect shows that AI adoption is about more than just the technology itself. For it to truly succeed enterprise-wide, companies need to provide a unified space for teams to brainstorm, plan, document, and make decisions. The fundamentals of successful technology adoption still hold true: You need the right tools to enable collaboration and documentation for AI to truly make an impact.

Collaboration and change management are hidden blockers to AI implementation

A company’s approach to AI is perceived very differently depending on an employee’s role. While 61% of C-suite executives believe their company’s strategy is well-considered, that number drops to 49% for managers and just 36% for entry-level employees, as our survey found.

Just like with product development, building a successful AI strategy requires a structured approach. Leaders and teams need a collaborative space to come together, brainstorm, prioritize the most promising opportunities, and map out a clear path forward. As many companies have embraced hybrid or distributed work, supporting remote collaboration with digital tools becomes even more important.

We recently used AI to streamline a strategic challenge for our executive team. A product leader used it to generate a comprehensive preparatory memo in a fraction of the typical time, complete with summaries, benchmarks, and recommendations.

Despite this efficiency, the AI-generated document was merely the foundation. We still had to meet to debate the specifics, prioritize actions, assign ownership, and formally document our decisions and next steps.

According to our survey, 23% of respondents reported that collaboration is frequently a bottleneck in complex work. Employees are willing to embrace change, but friction from poor collaboration adds risk and reduces the potential impact of AI.

Operational readiness enhances your AI readiness

Operations lacking structure are preventing many organizations from implementing AI successfully. We asked teams about their top needs to help them adapt to AI. At the top of their lists were document collaboration (cited by 37% of respondents), process documentation (34%), and visual workflows (33%).

Notice that none of these requests are for more sophisticated AI. The technology is plenty capable already, and most organizations are still just scratching the surface of its full potential. Instead, what teams want most is ensuring the fundamentals around processes, documentation, and collaboration are covered.

AI offers a significant opportunity for organizations to gain a competitive edge in productivity and efficiency. But moving fast isn’t a guarantee of success. The companies best positioned for successful AI adoption are those that invest in operational excellence, down to the last mile.

This content was produced by Lucid Software. It was not written by MIT Technology Review’s editorial staff.

  •  

Designing CPUs for next-generation supercomputing

In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks. 

Each of these professionals—and the consumers, commuters, and investors who depend on their insights— relies on a time-tested pillar of high-performance computing: the humble CPU. 

With GPU-powered AI breakthroughs getting the lion’s share of press (and investment) in 2025, it is tempting to assume that CPUs are yesterday’s news. Recent predictions anticipate that GPU and accelerator installations will increase by 17% year over year through 2030. But, in reality, CPUs are still responsible for the vast majority of today’s most cutting-edge scientific, engineering, and research workloads. Evan Burness, who leads Microsoft Azure’s HPC and AI product teams, estimates that CPUs still support 80% to 90% of HPC simulation jobs today.

In 2025, not only are these systems far from obsolete, they are experiencing a technological renaissance. A new wave of CPU innovation, including high-bandwidth memory (HBM), is delivering major performance gains— without requiring costly architectural resets. 

Download the report.

To learn more, watch the new webcast “Powering HPC with next-generation CPUs.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •  

Powering HPC with next-generation CPUs

For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and indispensable than ever.

The competitive landscape around CPUs has intensified. Once dominated almost exclusively by Intel’s x86 chips, the market now includes powerful alternatives based on ARM and even emerging architectures like RISC-V. Flagship examples like Japan’s Fugaku supercomputer demonstrate how CPU innovation is pushing performance to new frontiers. Meanwhile, cloud providers like Microsoft and AWS are developing their own silicon, adding even more diversity to the ecosystem.

What makes CPUs so enduring? Flexibility, compatibility, and cost efficiency are key. As Evan Burness of Microsoft Azure points out, CPUs remain the “it-just-works” technology. Moving complex, proprietary code to GPUs can be an expensive and time-consuming effort, while CPUs typically support software continuity across generations with minimal friction. That reliability matters for businesses and researchers who need results, not just raw power.

Innovation is also reshaping what a CPU can be. Advances in chiplet design, on-package memory, and hybrid CPU-GPU architectures are extending the performance curve well beyond the limits of Moore’s Law. For many organizations, the CPU is the strategic choice that balances speed, efficiency, and cost.

Looking ahead, the relationship between CPUs, GPUs, and specialized processors like NPUs will define the future of HPC. Rather than a zero-sum contest, it’s increasingly a question of fit-for-purpose design. As Addison Snell, co-founder and chief executive officer of Intersect360 Research, notes, science and industry never run out of harder problems to solve.

That means CPUs, far from fading, will remain at the center of the computing ecosystem.

To learn more, read the new report “Designing CPUs for next-generation supercomputing.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

  •