Received today — 15 December 2025

Steve Clarke to see if Harvey Barnes will commit to Scotland before friendlies

15 December 2025 at 13:00
  • Newcastle player has left door open to allegiance switch

  • Scotland manager may seek to stay on after World Cup

Steve Clarke plans to check on the extent to which Harvey Barnes will commit to playing for Scotland before friendly matches in March. The manager wants to know Barnes is sufficiently keen on swapping international allegiance – he has a single cap for England – before considering the Newcastle player for a potential World Cup berth.

Scotland’s World Cup return after a 28-year wait has put Barnes’s international future back on the agenda. The feeling within the Scottish Football Association has thus far been that Barnes believes he can play for England again, but the player left the door open on a switch during an interview last month.

Continue reading...

© Photograph: Adam Vaughan/EPA

AI coding is now everywhere. But not everyone is convinced.

15 December 2025 at 05:00


Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems.

The problem is that, right now, it’s not easy to know which is true.

As tech giants pour billions into large language models (LLMs), coding has been touted as the technology’s killer app. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. And in March, Anthropic’s CEO, Dario Amodei, predicted that within six months 90% of all code would be written by AI. It’s an appealing and obvious use case. Code is a form of language, we need lots of it, and it’s expensive to produce manually. It’s also easy to tell if it works—run a program and it’s immediately evident whether it’s functional.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


Executives enamored with the potential to break through human bottlenecks are pushing engineers to lean into an AI-powered future. But after speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem.  

For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology’s limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.

The pace of progress is complicating the picture, though. A steady drumbeat of new model releases means these tools’ capabilities and quirks are constantly evolving. And their utility often depends on the tasks they are applied to and the organizational structures built around them. All of this leaves developers navigating confusing gaps between expectation and reality.

Is it the best of times or the worst of times (to channel Dickens) for AI coding? Maybe both.

A fast-moving field

It’s hard to avoid AI coding tools these days. There is a dizzying array of products available, both from model developers like Anthropic, OpenAI, and Google and from companies like Cursor and Windsurf, which wrap these models in polished code-editing software. And according to Stack Overflow’s 2025 Developer Survey, they’re being adopted rapidly, with 65% of developers now using them at least weekly.

AI coding tools first emerged around 2016 but were supercharged with the arrival of LLMs. Early versions functioned as little more than autocomplete for programmers, suggesting what to type next. Today they can analyze entire code bases, edit across files, fix bugs, and even generate documentation explaining how the code works. All this is guided through natural-language prompts via a chat interface.

“Agents”—autonomous LLM-powered coding tools that can take a high-level plan and build entire programs independently—represent the latest frontier in AI coding. This leap was enabled by the latest reasoning models, which can tackle complex problems step by step and, crucially, access external tools to complete tasks. “This is how the model is able to code, as opposed to just talk about coding,” says Boris Cherny, head of Claude Code, Anthropic’s coding agent.
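For the curious, the core of such an agent can be sketched in a few dozen lines. The Python below is illustrative only: `call_llm` stands in for any model API, the two tools are minimal examples, and real agents such as Claude Code add planning, permissions, and safety checks.

```python
# A minimal sketch of an LLM coding-agent loop: decide, call a tool, observe, repeat.
# Illustrative only -- `call_llm` is a stand-in for a real chat-completions API.
import json
import subprocess

def run_shell(command: str) -> str:
    """Tool: run a shell command and return its (truncated) output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)
    return (result.stdout + result.stderr)[:4000]

def read_file(path: str) -> str:
    """Tool: return a file's contents (truncated)."""
    with open(path) as f:
        return f.read()[:4000]

TOOLS = {"run_shell": run_shell, "read_file": read_file}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a model call. A real implementation would hit an LLM API and
    return either {'done': <answer>} or {'tool': <name>, 'args': {...}}."""
    raise NotImplementedError

def agent(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_llm(messages)           # model decides the next step
        if "done" in action:                  # model says the task is finished
            return action["done"]
        observation = TOOLS[action["tool"]](**action["args"])  # execute the chosen tool
        # Feed the result back so the model can react to what actually happened.
        messages.append({"role": "assistant", "content": json.dumps(action)})
        messages.append({"role": "user", "content": f"Tool output:\n{observation}"})
    return "step limit reached"
```

The essential point is the loop: the model sees the output of every command it runs, which is what lets it react to errors rather than just emit code.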

These agents have made impressive progress on software engineering benchmarks—standardized tests that measure model performance. When OpenAI introduced the SWE-bench Verified benchmark in August 2024, offering a way to evaluate agents’ success at fixing real bugs in open-source repositories, the top model solved just 33% of issues. A year later, leading models consistently score above 70%.

In February, Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, coined the term “vibe coding”—meaning an approach where people describe software in natural language and let AI write, refine, and debug the code. Social media abounds with developers who have bought into this vision, claiming massive productivity boosts.

But while some developers and companies report such productivity gains, the hard evidence is more mixed. Early studies from GitHub, Google, and Microsoft—all vendors of AI tools—found developers completing tasks 20% to 55% faster. But a September report from the consultancy Bain & Company described real-world savings as “unremarkable.”

Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code—code that isn’t deleted or rewritten within weeks—since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow’s survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower.

Growing disillusionment

For Mike Judge, principal developer at the software consultancy Substantial, the METR study struck a nerve. He was an enthusiastic early adopter of AI tools, but over time he grew frustrated with their limitations and the modest boost they brought to his productivity. “I was complaining to people because I was like, ‘It’s helping me but I can’t figure out how to make it really help me a lot,’” he says. “I kept feeling like the AI was really dumb, but maybe I could trick it into being smart if I found the right magic incantation.”

When asked by a friend, Judge had estimated the tools were providing a roughly 25% speedup. So when he saw similar estimates attributed to developers in the METR study, he decided to put his own to the test. For six weeks, he guessed how long a task would take, flipped a coin to decide whether to use AI or code manually, and timed himself. To his surprise, AI slowed him down by a median of 21%—mirroring the METR results.

This got Judge crunching the numbers. If these tools were really speeding developers up, he reasoned, you should see a massive boom in new apps, website registrations, video games, and projects on GitHub. He spent hours and several hundred dollars analyzing all the publicly available data and found flat lines everywhere.

“Shouldn’t this be going up and to the right?” says Judge. “Where’s the hockey stick on any of these graphs? I thought everybody was so extraordinarily productive.” The obvious conclusion, he says, is that AI tools provide little productivity boost for most developers. 

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing “boilerplate code” (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the “blank page problem” by offering an imperfect first stab to get a developer’s creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers.

These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer’s workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles.

Perhaps the biggest problem is that LLMs can hold only a limited amount of information in their “context window”—essentially their working memory. This means they struggle to parse large code bases and are prone to forgetting what they’re doing on longer tasks. “It gets really nearsighted—it’ll only look at the thing that’s right in front of it,” says Judge. “And if you tell it to do a dozen things, it’ll do 11 of them and just forget that last one.”
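To picture the constraint, here is a minimal Python sketch of the bluntest workaround: dropping the oldest messages once a rough token budget is exceeded. The four-characters-per-token estimate is an assumption; real tools count tokens with the model’s own tokenizer.

```python
# A minimal sketch of why long tasks strain an LLM's context window.
# Assumption: ~4 characters per token, a common rough heuristic.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def trim_history(messages: list[str], budget_tokens: int = 8000) -> list[str]:
    """Keep only the most recent messages that fit the budget.
    Everything older is simply forgotten -- the source of the 'nearsightedness'
    developers complain about."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```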


LLMs’ myopia can lead to headaches for human coders. While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren’t built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that’s hard for humans to parse and, more important, to maintain.

Developers have traditionally addressed this by following conventions—loosely defined coding guidelines that differ widely between projects and teams. “AI has this overwhelming tendency to not understand what the existing conventions are within a repository,” says Bill Harding, the CEO of GitClear. “And so it is very likely to come up with its own slightly different version of how to solve a problem.”

The models also just get things wrong. Like all LLMs, coding models are prone to “hallucinating”—it’s an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. “Some projects you get a 20x improvement in terms of speed or efficiency,” says Liu. “On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it’s just not going to.”

Judge suspects this is why engineers often overestimate productivity gains. “You remember the jackpots. You don’t remember sitting there plugging tokens into the slot machine for two hours,” he says.

And it can be particularly pernicious if the developer is unfamiliar with the task. Judge remembers getting AI to help set up a Microsoft cloud service called Azure Functions, which he’d never used before. He thought it would take about two hours, but nine hours later he threw in the towel. “It kept leading me down these rabbit holes and I didn’t know enough about the topic to be able to tell it ‘Hey, this is nonsensical,’” he says.

The debt begins to mount up

Developers constantly make trade-offs between speed of development and the maintainability of their code—creating what’s known as “technical debt,” says Geoffrey G. Parker, professor of engineering innovation at Dartmouth College. Each shortcut adds complexity and makes the code base harder to manage, accruing “interest” that must eventually be repaid by restructuring the code. As this debt piles up, adding new features and maintaining the software becomes slower and more difficult.

Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear’s Harding. And GitClear’s data suggests this is happening at scale. Since 2020, the company has seen a significant rise in the amount of copy-pasted code—an indicator that developers are reusing more code snippets, most likely based on AI suggestions—and an even bigger decline in the amount of code moved from one place to another, which happens when developers clean up their code base.

And as models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of “code smells”—harder-to-pinpoint flaws that lead to maintenance problems and technical debt. 

Recent research by Sonar found that these make up more than 90% of the issues found in code generated by leading AI models. “Issues that are easy to spot are disappearing, and what’s left are much more complex issues that take a while to find,” says Shaukat. “That’s what worries us about this space at the moment. You’re almost being lulled into a false sense of security.”

If AI tools make it increasingly difficult to maintain code, that could have significant security implications, says Jessica Ji, a security researcher at Georgetown University. “The harder it is to update things and fix things, the more likely a code base or any given chunk of code is to become insecure over time,” says Ji.

There are also more specific security concerns, she says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software. 
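One inexpensive defense is to confirm that every dependency an AI suggests actually exists in the official registry before installing it. Below is a minimal sketch for Python packages, using PyPI’s public JSON endpoint; note that existence alone proves nothing about trustworthiness, since attackers may already have registered the hallucinated name.

```python
# A minimal sketch: check that AI-suggested dependencies exist on PyPI
# before installing them, to catch hallucinated package names.
# Existence only -- this does NOT prove a package is safe to use.
import urllib.request
import urllib.error

def exists_on_pypi(package: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 -> the package does not exist

for name in ["requests", "definitely-not-a-real-package-xyz"]:
    print(name, "->", "found" if exists_on_pypi(name) else "MISSING (possible hallucination)")
```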

LLMs are also vulnerable to “data-poisoning attacks,” where hackers seed the publicly available data sets models train on with data that alters the model’s behavior in undesirable ways, such as generating insecure code when triggered by specific phrases. In October, research by Anthropic found that as few as 250 malicious documents can introduce this kind of back door into an LLM regardless of its size.

The converted

Despite these issues, though, there’s probably no turning back. “Odds are that writing every line of code on a keyboard by hand—those days are quickly slipping behind us,” says Kyle Daigle, chief operating officer at the Microsoft-owned code-hosting platform GitHub, which produces a popular AI-powered tool called Copilot (not to be confused with the Microsoft product of the same name).

The Stack Overflow report found that despite growing distrust in the technology, usage has increased rapidly and consistently over the past three years. Erin Yepis, a senior analyst at Stack Overflow, says this suggests that engineers are taking advantage of the tools with a clear-eyed view of the risks. The report also found that frequent users tend to be more enthusiastic, and that more than half of developers are not using the latest coding agents, which perhaps explains why many remain underwhelmed by the technology.

Those latest tools can be a revelation. Trevor Dilley, CTO at the software development agency Twenty20 Ideas, says he had found some value in AI editors’ autocomplete functions, but when he tried anything more complex it would “fail catastrophically.” Then in March, while on vacation with his family, he set the newly released Claude Code to work on one of his hobby projects. It completed a four-hour task in two minutes, and the code was better than what he would have written.

“I was like, Whoa,” he says. “That, for me, was the moment, really. There’s no going back from here.” Dilley has since cofounded a startup called DevSwarm, which is creating software that can marshal multiple agents to work in parallel on a piece of software.

The challenge, says Armin Ronacher, a prominent open-source developer, is that the learning curve for these tools is shallow but long. Until March he’d remained unimpressed by AI tools, but after leaving his job at the software company Sentry in April to launch a startup, he started experimenting with agents. “I basically spent a lot of months doing nothing but this,” he says. “Now, 90% of the code that I write is AI-generated.”

Getting to that point involved extensive trial and error, to figure out which problems tend to trip the tools up and which they can handle efficiently. Today’s models can tackle most coding tasks with the right guardrails, says Ronacher, but these can be very task and project specific.

To get the most out of these tools, developers must surrender control over individual lines of code and focus on the overall software architecture, says Nico Westerdale, chief technology officer at the veterinary staffing company IndeVets. He recently built a data science platform of some 100,000 lines of code almost exclusively by prompting models rather than writing the code himself.

Westerdale’s process starts with an extended conversation with the agent to develop a detailed plan for what to build and how. He then guides it through each step. It rarely gets things right on the first try and needs constant wrangling, but if you force it to stick to well-defined design patterns, the models can produce high-quality, easily maintainable code, says Westerdale. He reviews every line, and the code is as good as anything he’s ever produced, he says: “I’ve just found it absolutely revolutionary. It’s also frustrating, difficult, a different way of thinking, and we’re only just getting used to it.”

But while individual developers are learning how to use these tools effectively, getting consistent results across a large engineering team is significantly harder. AI tools amplify both the good and bad aspects of your engineering culture, says Ryan J. Salva, senior director of product management at Google. With strong processes, clear coding patterns, and well-defined best practices, these tools can shine. 


But if your development process is disorganized, they’ll only magnify the problems. It’s also essential to codify that institutional knowledge so the models can draw on it effectively. “A lot of work needs to be done to help build up context and get the tribal knowledge out of our heads,” he says.

The cryptocurrency exchange Coinbase has been vocal about its adoption of AI tools. CEO Brian Armstrong made headlines in August when he revealed that the company had fired staff unwilling to adopt AI tools. But Coinbase’s head of platform, Rob Witoff, tells MIT Technology Review that while they’ve seen massive productivity gains in some areas, the impact has been patchy. For simpler tasks like restructuring the code base and writing tests, AI-powered workflows have achieved speedups of up to 90%. But gains are more modest for other tasks, and the disruption caused by overhauling existing processes often counteracts the increased coding speed, says Witoff.

One factor is that AI tools let junior developers produce far more code. As in almost all engineering teams, this code has to be reviewed by others, normally more senior developers, to catch bugs and ensure it meets quality standards. But the sheer volume of code now being churned out is quickly saturating the ability of midlevel staff to review changes. “This is the cycle we’re going through almost every month, where we automate a new thing lower down in the stack, which brings more pressure higher up in the stack,” he says. “Then we’re looking at applying automation to that higher-up piece.”

Developers also spend only 20% to 40% of their time coding, says Jue Wang, a partner at Bain, so even a significant speedup there often translates to more modest overall gains. Developers spend the rest of their time analyzing software problems and dealing with customer feedback, product strategy, and administrative tasks. To get significant efficiency boosts, companies may need to apply generative AI to all these other processes too, says Wang, and that is still in the works.

Rapid evolution

Programming with agents is a dramatic departure from previous working practices, though, so it’s not surprising companies are facing some teething issues. These are also very new products that are changing by the day. “Every couple months the model improves, and there’s a big step change in the model’s coding capabilities and you have to get recalibrated,” says Anthropic’s Cherny.

For example, in June Anthropic introduced a built-in planning mode to Claude; it has since been replicated by other providers. In October, the company also enabled Claude to ask users questions when it needs more context or faces multiple possible solutions, which Cherny says helps it avoid the tendency to simply assume which path is the best way forward.

Most significant, Anthropic has added features that make Claude better at managing its own context. When it nears the limits of its working memory, it summarizes key details and uses them to start a new context window, effectively giving it an “infinite” one, says Cherny. Claude can also invoke sub-agents to work on smaller tasks, so it no longer has to hold all aspects of the project in its own head. The company claims that its latest model, Claude Sonnet 4.5, can now code autonomously for more than 30 hours without major performance degradation.
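That compaction behavior maps onto a simple pattern: when the transcript nears the window limit, have the model summarize it, then continue in a fresh window seeded with the summary. Here is a sketch, with the caveat that this is a generic reconstruction and not Anthropic’s implementation (`summarize` stands in for a real model call, and the 90% threshold is an assumption).

```python
# A minimal sketch of context "compaction": when the transcript nears the
# model's limit, replace it with a model-written summary and continue.
def estimate_tokens(messages: list[str]) -> int:
    return sum(len(m) for m in messages) // 4  # rough heuristic, not a real tokenizer

def summarize(messages: list[str]) -> str:
    """Stand-in for an LLM call that condenses the transcript, keeping
    the plan, decisions made, and open TODOs."""
    raise NotImplementedError

def compact_if_needed(messages: list[str], limit: int = 150_000) -> list[str]:
    if estimate_tokens(messages) < int(limit * 0.9):  # 90% trigger is an assumption
        return messages
    summary = summarize(messages)
    # Start a fresh window: the summary plus the few most recent messages.
    return [f"Summary of work so far:\n{summary}"] + messages[-4:]
```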

Novel approaches to software development could also sidestep coding agents’ other flaws. MIT professor Max Tegmark has introduced something he calls “vericoding,” which could allow agents to produce entirely bug-free code from a natural-language description. It builds on an approach known as “formal verification,” where developers create a mathematical model of their software that can prove incontrovertibly that it functions correctly. This approach is used in high-stakes areas like flight-control systems and cryptographic libraries, but it remains costly and time-consuming, limiting its broader use.

Rapid improvements in LLMs’ mathematical capabilities have opened up the tantalizing possibility of models that produce not only software but the mathematical proof that it’s bug free, says Tegmark. “You just give the specification, and the AI comes back with provably correct code,” he says. “You don’t have to touch the code. You don’t even have to ever look at the code.”

When tested on about 2,000 vericoding problems in Dafny—a language designed for formal verification—the best LLMs solved over 60%, according to non-peer-reviewed research by Tegmark’s group. This was achieved with off-the-shelf LLMs, and Tegmark expects that training specifically for vericoding could improve scores rapidly.
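To get a feel for what “provably correct code” means, here is a toy example in Lean rather than Dafny, and far simpler than the benchmark problems: a function shipped with machine-checked proofs of the properties it must satisfy. If the file compiles, the properties hold for every possible input, with no tests required.

```lean
-- A toy of the idea behind vericoding: ship code together with a
-- machine-checked proof. If this compiles, `myMax` provably satisfies
-- both theorems for all inputs.
def myMax (a b : Nat) : Nat :=
  if a ≥ b then a else b

theorem myMax_ge_left (a b : Nat) : myMax a b ≥ a := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : myMax a b ≥ b := by
  unfold myMax; split <;> omega
```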

Counterintuitively, the speed at which AI generates code could also ease maintainability concerns. Alex Worden, principal engineer at the business software giant Intuit, notes that maintenance is often difficult because engineers reuse components across projects, creating a tangle of dependencies where one change triggers cascading effects across the code base. Reusing code used to save developers time, but in a world where AI can produce hundreds of lines of code in seconds, that imperative has gone, says Worden.

Instead, he advocates for “disposable code,” where each component is generated independently by AI without regard for whether it follows design patterns or conventions. They are then connected via APIs—sets of rules that let components request information or services from each other. Each component’s inner workings are not dependent on other parts of the code base, making it possible to rip them out and replace them without wider impact, says Worden. 

“The industry is still concerned about humans maintaining AI-generated code,” he says. “I question how long humans will look at or care about code.”

A narrowing talent pipeline

For the foreseeable future, though, humans will still need to understand and maintain the code that underpins their projects. And one of the most pernicious side effects of AI tools may be a shrinking pool of people capable of doing so. 

Early evidence suggests that fears around the job-destroying effects of AI may be justified. A recent Stanford University study found that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025, coinciding with the rise of AI-powered coding tools.

Experienced developers could face difficulties too. Luciano Nooijen, an engineer at the video-game infrastructure developer Companion Group, used AI tools heavily in his day job, where they were provided for free. But when he began a side project without access to those tools, he found himself struggling with tasks that previously came naturally. “I was feeling so stupid because things that used to be instinct became manual, sometimes even cumbersome,” says Nooijen.

Just as athletes still perform basic drills, he thinks the only way to maintain an instinct for coding is to regularly practice the grunt work. That’s why he’s largely abandoned AI tools, though he admits that deeper motivations are also at play. 

Part of the reason Nooijen and other developers MIT Technology Review spoke to are pushing back against AI tools is a sense that they are hollowing out the parts of their jobs that they love. “I got into software engineering because I like working with computers. I like making machines do things that I want,” Nooijen says. “It’s just not fun sitting there with my work being done for me.”

CISO’s View: What Indian Companies Must Execute for DPDP Readiness in 2026

15 December 2025 at 02:48

DPDP Act

Shashank Bajpai, CISO & CTSO at Yotta

2026 is the execution year for India’s Digital Personal Data Protection (DPDP) regime: the Rules were notified in November 2025 and the government has signalled a phased enforcement timeline. The law is consent-centric, imposes heavy penalties (up to ₹250 crore for the most serious security failures), creates a new institutional stack (Data Protection Board, Consent Managers), and elevates privacy to a boardroom priority. Organizations that treat compliance as a strategic investment, not a cost centre, will gain trust, operational resilience, and competitive advantage. Key themes for 2026: consent at scale, data minimization, hardened security, vendor accountability, and new dependency risks arising from Consent Manager infrastructure.

Why 2026 Matters

The DPDP Act (2023) becomes operational through Rules notified in November 2025; the result is a staggered compliance timetable that places 2026 squarely in the execution phase. That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress. The practical effect is immediate: companies must move from policy documents to implemented consent systems, security controls, breach workflows, and vendor governance.

The High-Impact Obligations

  • Explicit consent architecture: Consent must be free, specific, informed, and obtained by clear affirmative action. Systems must record, revoke, and propagate consent signals reliably (see the sketch after this list).
  • Data minimization & purpose limitation: Collect only what’s necessary and purge data when the purpose is fulfilled.
  • Reasonable security safeguards: Highest penalty bracket (up to ₹250 crore) for failures to implement required security measures. Encryption, tokenization, RBAC, monitoring and secure third-party contracts are expected.
  • Breach notification: Obligatory notification to the Data Protection Board and affected principals, with tight timelines (public guidance references 72-hour reporting windows for board notification).
  • Data subject rights: Access, correction, erasure, withdrawal of consent and grievance mechanisms must be operational and auditable.
  • Children’s data: Verifiable parental consent and prohibitions on behavioural profiling/targeted advertising toward minors; failures risk very high penalties.
  • Consent Managers: New regulated intermediaries where individuals may centrally manage consent; only India-incorporated entities meeting financial/operational thresholds (minimum net worth indicated in Rules) can register. This constructs a new privacy infrastructure and a new dependency vector for data fiduciaries.
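As a concrete reference for the consent item above, here is a minimal Python sketch of what recording, revoking, and propagating consent signals can look like. It illustrates the data shape only; the field names, purposes, and callback-style propagation are assumptions for the example, not a DPDP-prescribed design.

```python
# A minimal sketch of a consent store: record, revoke, and propagate consent
# signals. Illustrative only -- not a DPDP-certified schema or architecture.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class ConsentRecord:
    principal_id: str            # the data principal giving consent
    purpose: str                 # specific purpose, e.g. "order_delivery"
    granted_at: datetime
    revoked_at: datetime | None = None

class ConsentStore:
    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}
        self._subscribers: list[Callable[[ConsentRecord], None]] = []

    def subscribe(self, callback: Callable[[ConsentRecord], None]) -> None:
        """Downstream systems (processors, analytics) register to be told
        about every consent change -- the 'propagate' requirement."""
        self._subscribers.append(callback)

    def record(self, principal_id: str, purpose: str) -> None:
        rec = ConsentRecord(principal_id, purpose, datetime.now(timezone.utc))
        self._records[(principal_id, purpose)] = rec
        self._notify(rec)

    def revoke(self, principal_id: str, purpose: str) -> None:
        rec = self._records[(principal_id, purpose)]
        rec.revoked_at = datetime.now(timezone.utc)
        self._notify(rec)

    def is_active(self, principal_id: str, purpose: str) -> bool:
        rec = self._records.get((principal_id, purpose))
        return rec is not None and rec.revoked_at is None

    def _notify(self, rec: ConsentRecord) -> None:
        for callback in self._subscribers:
            callback(rec)  # e.g. tell processors to stop using the data
```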

Implementation Challenges & Strategic Opportunities

1. Key Implementation Challenges

Regulatory Ambiguity & Evolving Interpretation
  • What will break or strain in 2026: Unclear operational expectations around “informed consent,” Significant Data Fiduciary designation, and cross-border data transfers
  • Why it matters to leadership: Risk of over-engineering or non-compliance as regulatory guidance evolves
  • Strategic imperative: Build modular, configurable privacy architectures that can adapt without re-platforming

Legacy Systems & Distributed Data
  • What will break or strain in 2026: Difficulty retrofitting consent enforcement, encryption, audit trails, and real-time controls into legacy and batch-oriented systems
  • Why it matters to leadership: High cost, operational disruption, and extended timelines for compliance
  • Strategic imperative: Prioritize modernization of high-risk systems and align vendor roadmaps with DPDP requirements

Organizational Governance & Talent Gaps
  • What will break or strain in 2026: Privacy cuts across legal, product, engineering, HR, and procurement—often without clear ownership—and experienced DPOs are in short supply
  • Why it matters to leadership: Fragmented accountability increases regulatory and breach risk
  • Strategic imperative: Establish cross-functional privacy governance; leverage fractional DPOs and external advisors while building internal capability

Children’s Data & Onboarding Friction
  • What will break or strain in 2026: Age verification and parental consent slow user onboarding and impact conversion metrics
  • Why it matters to leadership: Direct revenue and growth impact if UX is not carefully redesigned
  • Strategic imperative: Re-engineer onboarding flows to balance compliance with user experience, especially in consumer platforms

Consent Manager Dependency & Systemic Risk
  • What will break or strain in 2026: Outages or breaches at registered Consent Managers can affect multiple data fiduciaries simultaneously
  • Why it matters to leadership: Creates concentration and third-party systemic risk
  • Strategic imperative: Design fallback mechanisms and redundancy plans, and enforce strong SLAs and audit rights

2. Strategic Opportunities: Turning Compliance into Advantage

Trust as a Market Differentiator
  • Business value: Privacy becomes a competitive trust signal, particularly in fintech, healthtech, and BFSI ecosystems.
  • Strategic outcome: Strong DPDP compliance enhances brand equity, customer loyalty, partner confidence, and investor perception.

Operational Efficiency & Risk Reduction
  • Business value: Data minimization, encryption, and segmentation reduce storage costs and limit breach blast radius.
  • Strategic outcome: Privacy investments double as technical debt reduction with measurable ROI and lower incident recovery costs.

Global Market Access
  • Business value: Alignment with global privacy principles simplifies cross-border expansion and compliance-sensitive partnerships.
  • Strategic outcome: Faster deal closures, reduced due diligence friction, and improved access to regulated international markets.

Domestic Privacy & RegTech Ecosystem Growth
  • Business value: Demand for Consent Managers, RegTech, and privacy engineering solutions creates a new domestic market.
  • Strategic outcome: Strategic opportunity for Indian vendors to lead in privacy infrastructure and export DPDP-aligned solutions globally.

DPDP Readiness Roadmap for 2026

Immediate (0–3 Months)
  • Key actions: Establish a board-level Privacy Steering Committee; appoint or contract a Data Protection Officer (DPO); conduct rapid enterprise data mapping (repositories, processors, high-risk data flows); triage high-risk systems for encryption, access controls, and logging; update breach response runbooks to meet Board and individual notification timelines
  • Primary owners: Board, CEO, CISO, Legal, Compliance
  • Strategic outcome: Executive accountability for privacy; clear visibility of data risk exposure; regulatory-ready breach response posture

Short Term (3–9 Months)
  • Key actions: Deploy a consent management platform interoperable with upcoming Consent Managers; standardize DPDP-compliant vendor contracts and initiate bulk vendor renegotiation/audits; automate data principal request handling (identity verification, APIs, evidence trails)
  • Primary owners: CISO, CTO, Legal, Procurement, Product
  • Strategic outcome: Operational DPDP compliance at scale; reduced manual handling risk; strengthened third-party governance

Medium Term (9–18 Months)
  • Key actions: Implement data minimization and archival policies focused on high-sensitivity datasets; embed Privacy Impact Assessments (PIAs) into product development (“privacy by design”); stress-test reliance on Consent Managers and negotiate resilience SLAs and contingency plans
  • Primary owners: Product, Engineering, CISO, Risk, Procurement
  • Strategic outcome: Sustainable compliance architecture; reduced long-term data liability; privacy-integrated product innovation

Ongoing (Board Dashboard Metrics)
  • Key metrics: Consent fulfillment latency and revocation success rate; mean time to detect and notify data breaches (aligned to regulatory windows); percentage of sensitive data encrypted at rest and in transit; vendor compliance score and DPA coverage
  • Primary owners: Board, CISO, Risk & Compliance
  • Strategic outcome: Continuous assurance, measurable compliance maturity, and defensible regulatory posture

Board-Level Takeaway

DPDP compliance in 2026 is not a one-time legal exercise; it is an operating model change. Organizations that treat privacy as a board-governed, product-integrated, and metrics-driven discipline will outperform peers on regulatory trust, customer confidence, and incident resilience.

The Macro View: Data Sovereignty & Trust Infrastructure

The Rules reinforce India’s intention to control flows of citizen data while creating domestic privacy infrastructure (DPB + Consent Managers + data auditors). This is not just regulation; it is an economic strategy to build domestic capability in cloud, identity, security and RegTech, and to position India as a credible participant in global data governance conversations.

Act Strategically, Not Reactively

DPDP is a structural shift: it will change products, engineering practices, contracts, and customer expectations. 2026 will reveal winners and laggards. Those that embrace privacy as a governance discipline and a product differentiator will realize measurable advantages in trust, operational resilience, and market value. The alternative, waiting until enforcement escalates, risks fines, reputational harm and erosion of customer trust. (This article reflects the author’s analysis and personal viewpoints and is intended for informational purposes only. It should not be construed as legal or regulatory advice.)
Received before yesterday

Football Association to pass on fan anger over World Cup ticket prices

12 December 2025 at 14:08
  • Prices 10 times those promised in initial bid

  • Fifa not expected to change policy for 2026

The Football Association will pass on England supporters’ concerns about high 2026 World Cup ticket prices to Fifa. However, despite the growing outrage, it is understood none of the international federations expect world football’s governing body to change its policy.

Anger among supporter groups continued on Friday after it emerged that the cheapest tickets will cost 10 times the price promised in the original bid for the United States, Canada and Mexico to host the tournament. For England fans it will mean having to pay at least $220 (£165) for group games – when the bid document’s ticket model stated the cheapest seats should be $21 (£15.70).

Continue reading...

© Photograph: Mike Egerton/PA

I’ve been to 14 major tournaments. Will I follow England to the 2026 World Cup? No, no, no | Philip Cornwall

12 December 2025 at 12:59

Fifa’s demand that the most fervent supporters cough up a minimum of £5,000 in advance just for tickets is scandalous

It was not mathematically confirmed until the Latvia game a month later, but as I watched Ezri Konsa turn in the third goal away to Serbia in early September I smiled to myself in the Stadion Rajko Mitic, knowing England were going to the World Cup. But immediately, a key question surfaced: was I? The answer came on Thursday, with the announcement of the ticket prices that the most loyal supporters of international football would have to pay. And that answer, emphatically, was no, as it will be for countless supporters worldwide. If you had asked me as a hypothetical what seeing England in a World Cup final was worth, I might have said: “Priceless.” But $4,185 – £3,130 – just for the match ticket? No, no, no.

As a fan, I have been to 14 tournaments – nine European Championships and five World Cups – dating back to Euro 92. I have the money, or at least could get it by dipping into my pension pot, which I was braced to do for hotels and flights. But, in a sentiment being echoed across England, Scotland and all the other qualifying nations, I’m not spending a minimum of about £5,000 simply on match tickets, the price Fifa has put on watching your team from group stage through to the final (the exact total will vary, depending on where a country’s group matches are).

Continue reading...

© Photograph: Bradley Collyer/PA

Welcome to the 2026 World Cup shakedown! The price of a ticket: the integrity of the game | Marina Hyde

12 December 2025 at 09:00

In World Cup parlance, Qatar was Fifa president Gianni Infantino’s qualifier. Now it’s the big time for Trump’s dictator-curious protege

I used to think Fifa’s recent practice of holding the World Cup in autocracies was because it made it easier for world football’s governing body to do the things it loved: spend untold billions of other people’s money and siphon the profits without having to worry about boring little things like human rights or public opinion. Which, let’s face it, really piss around with your bottom line.

But for a while now, that view has seemed ridiculously naive, a bit like assuming Recep Erdoğan followed Vladimir Putin’s election-hollowing gameplan just because hey, he’s an interested guy who likes to read around a lot of subjects. So no: Fifa president Gianni Infantino hasn’t spent recent tournaments cosying up to authoritarians because it made his life easier. He’s done it to learn from the best. And his latest decree this week simply confirms Fifa is now a fully operational autocracy in the classic populace-rinsing style. Do just absorb yesterday’s news that the cheapest ticket for next year’s World Cup final in the US will cost £3,120 – seven times more than the cheapest ticket for the last World Cup final in Qatar. (Admittedly, still marginally cheaper than an off-peak single from London to Manchester.)

Marina Hyde is a Guardian columnist

Continue reading...

© Photograph: Héctor Vivas/FIFA/Getty Images

Fifa urged to halt World Cup ticket sales after ‘monumental betrayal’ of fans

11 December 2025 at 11:40
  • Final tickets more than £3,000; five-fold rise on Qatar

  • Cheapest England tickets are £165 for two Group L games

Fifa has been accused of a “monumental betrayal” by fan representatives after it emerged that the cheapest tickets for next summer’s World Cup final will cost more than £3,000.

Football Supporters Europe (FSE), which represents fans across the continent, described the prices as “extortionate” and called for an immediate halt to ticket sales after a day when England fans discovered that tickets to follow their team through the tournament could cost up to $16,590 (£12,375) in the top categories.

Continue reading...

© Photograph: Brian Snyder/Reuters

Trump plan for World Cup tourists to reveal social media activity described as ‘chilling’

11 December 2025 at 05:47
  • UK tourists would be among those affected by US policy

  • ‘Unacceptable’ and ‘chilling’, says European fan group

A plan to require supporters travelling to the United States for the World Cup to disclose information about their social media accounts has been described as “profoundly unacceptable”.

Tourists from 42 countries, including the UK, that use the Electronic System for Travel Authorization (Esta) as part of the visa waiver programme would be obliged to provide, in their applications, information about accounts they have held in the last five years. Previously it had been optional to provide the information.

Continue reading...

© Photograph: Sam Corum/PA

NHL warns top players will not show up for Winter Olympics if venue is unsafe

10 December 2025 at 14:15
  • Construction delays have beset ice hockey arena in Milan

  • ‘If the ice isn’t ready, we’re not going,’ NHL deputy warns

The NHL says it is “disappointing” that the main ice hockey venue for the Winter Olympics will not be ready until the new year – and warned that its top players will not show up unless the ice is shown to be safe.

The men’s and women’s tournaments are expected to be among the highlights of the 2026 Milan-Cortina Games with the NHL stars showing up for the first time since 2014.

Continue reading...

© Photograph: Daniele Mascolo/Reuters

LGBTQ+ events to go ahead at World Cup game despite Egypt and Iran objections

10 December 2025 at 11:38
  • Organisers confirm ‘Pride Match’ activities will take place

  • Seattle to host Egypt v Iran in Group G next summer

Plans to celebrate LGBTQ+ rights and freedoms in Seattle during the World Cup next summer will continue despite objections from the Egyptian and Iranian football federations over the “Pride Match” due to take place in the city.

Seattle organisers have confirmed that they are “moving forward as planned” with Pride activities in the city when Egypt face Iran in Group G on 26 June. Rainbow flags will also be allowed into the stadium by Fifa.

Continue reading...

© Photograph: Alexander Spatari/Getty Images

Fabio Cannavaro: ‘Uzbeks are tough, never give up. Playing them is a pain in the arse’

10 December 2025 at 03:00

In an exclusive interview, the former World Cup winner talks about taking Uzbekistan to the 2026 World Cup and a project close to his heart in Naples

Uzbekistan may have made history by qualifying for the World Cup for the first time in the country’s 34 years of independence in June after losing only once in 15 qualifiers. But they then had a problem: Timur Kapadze stepped down and they needed a head coach for next year’s tournament.

They turned to Fabio Cannavaro, Italy’s 2006 World Cup-winning captain and Ballon d’Or winner, who has had a rich and varied coaching career and was ready to take on the challenge of managing a nation still taking its first steps in international football.

Continue reading...

© Photograph: Roberto Salomone/The Guardian

England scout for World Cup camps amid fears of losing preferred base to Netherlands

10 December 2025 at 03:00
  • Initial Kansas plan for US training base thrown into doubt

  • FA exploring alternative options on the east coast

The Football Association has sent operational staff to the US this week to scout for World Cup training camps amid concerns that England may lose their preferred site to the Netherlands.

Thomas Tuchel had cleared an FA plan for England to be based in Kansas after a pre-tournament training camp in Fort Lauderdale, but after last week’s draw there are concerns that the Netherlands will be allocated their chosen facility at Sporting Kansas City, a high-performance centre used by US Soccer.

Continue reading...

© Photograph: David Rogers/Getty Images

Egypt and Iran ask Fifa to prevent LGBTQ+ Pride celebration at World Cup 2026 match

10 December 2025 at 00:18
  • Egypt’s football body says Pride event would clash with values

  • Iran raises objections to plans organised by local Seattle group

Egypt and Iran are calling on football’s governing body to intervene in the LGBTQ+ Pride celebration planned to coincide with their group stage match in Seattle at the 2026 World Cup.

Egypt’s Football Association (EFA) said on Tuesday it had sent a letter to Fifa urging them to prevent any LGBTQ+ Pride-related activities during the national team’s match against Iran next June.

Continue reading...

© Photograph: Bradley Collyer/PA

4 technologies that didn’t make our 2026 breakthroughs list

8 December 2025 at 07:00

If you’re a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, but at times it can also be quite difficult. 

We collectively pitch dozens of ideas, and the editors meticulously review and debate the merits of each. We agonize over which ones might make the broadest impact, whether one is too similar to something we’ve featured in the past, and how confident we are that a recent advance will actually translate into long-term success. There is plenty of lively discussion along the way.  

The 2026 list will come out on January 12—so stay tuned. In the meantime, I wanted to share some of the technologies from this year’s reject pile, as a window into our decision-making process. 

These four technologies won’t be on our 2026 list of breakthroughs, but all were closely considered, and we think they’re worth knowing about. 

Male contraceptives 

There are several new treatments in the pipeline for men who are sexually active and wish to prevent pregnancy—potentially providing them with an alternative to condoms or vasectomies. 

Two of those treatments are now being tested in clinical trials by a company called Contraline. One is a gel that men would rub on their shoulder or upper arm once a day to suppress sperm production, and the other is a device designed to block sperm during ejaculation. (Kevin Eisenfrats, Contraline’s CEO, was recently named to our Innovators Under 35 list.) A once-a-day pill is also in early-stage trials with the firm YourChoice Therapeutics.

Though it’s exciting to see this progress, it will still take several years for any of these treatments to make their way through clinical trials—assuming all goes well.

World models 

World models have become the hot new thing in AI in recent months. Though they’re difficult to define, these models are generally trained on videos or spatial data and aim to produce 3D virtual worlds from simple prompts. They reflect fundamental principles, like gravity, that govern our actual world. The results could be used in game design or to make robots more capable by helping them understand their physical surroundings. 

Despite some disagreements on exactly what constitutes a world model, the idea is certainly gaining momentum. Renowned AI researchers including Yann LeCun and Fei-Fei Li have launched companies to develop them, and Li’s startup World Labs released its first version last month. And Google made a huge splash with the release of its Genie 3 world model earlier this year. 

Though these models are shaping up to be an exciting new frontier for AI in the year ahead, it seemed premature to deem them a breakthrough. But definitely watch this space. 

Proof of personhood 

Thanks to AI, it’s getting harder to know who and what is real online. It’s now possible to make hyperrealistic digital avatars of yourself or someone you know based on very little training data, using equipment many people have at home. And AI agents are being set loose across the internet to take action on people’s behalf. 

All of this is creating more interest in what are known as personhood credentials, which could offer a way to verify that you are, in fact, a real human when you do something important online. 

For example, we’ve reported on efforts by OpenAI, Microsoft, Harvard, and MIT to create a digital token that would serve this purpose. To get it, you’d first go to a government office or other organization and show identification. Then it’d be installed on your device and whenever you wanted to, say, log into your bank account, cryptographic protocols would verify that the token was authentic—confirming that you are the person you claim to be. 
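The verification step in that flow is ordinary digital-signature checking. Here is a minimal sketch using the Python `cryptography` library; the token format and fields are invented for illustration and are not the actual design of the proposal described above.

```python
# A minimal sketch of the cryptography behind a personhood credential:
# an issuer signs a token once; any service can later verify the signature
# with the issuer's public key. Token fields are invented for illustration.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer (e.g. a government office) -- done once, after an in-person ID check.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published for all verifiers

token = json.dumps({"subject": "device-1234", "claim": "verified-human"}).encode()
signature = issuer_key.sign(token)           # stored on the user's device

# Verifier (e.g. your bank) -- runs at every login.
try:
    issuer_public_key.verify(signature, token)
    print("credential accepted: issued by a trusted authority")
except InvalidSignature:
    print("credential rejected")
```

Real personhood-credential proposals layer privacy protections, such as unlinkability between logins, on top of this basic primitive.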

Whether or not this particular approach catches on, many of us in the newsroom agree that the future internet will need something along these lines. Right now, though, many competing identity verification projects are in various stages of development. One is World ID by Sam Altman’s startup Tools for Humanity, which uses a twist on biometrics. 

If these efforts reach critical mass—or if one emerges as the clear winner, perhaps by becoming a universal standard or being integrated into a major platform—we’ll know it’s time to revisit the idea.  

The world’s oldest baby

In July, senior reporter Jessica Hamzelou broke the news of a record-setting baby. The infant developed from an embryo that had been sitting in storage for more than 30 years, earning him the bizarre honorific of “oldest baby.” 

This odd new record was made possible in part by advances in IVF, including safer methods of thawing frozen embryos. But perhaps the greater enabler has been the rise of “embryo adoption” agencies that pair donors with hopeful parents. People who work with these agencies are sometimes more willing to make use of decades-old embryos. 

This practice could help find a home for some of the millions of leftover embryos that remain frozen in storage banks today. But since this recent achievement was brought about by changing norms as much as by any sudden technological improvements, this record didn’t quite meet our definition of a breakthrough—though it’s impressive nonetheless.
