Normal view

Received yesterday — 13 February 2026

60,000 Records Exposed in Cyberattack on Uzbekistan Government

13 February 2026 at 03:46

Uzbekistan cyberattack

An alleged Uzbekistan cyberattack that triggered widespread concern online has exposed around 60,000 unique data records, not the personal data of 15 million citizens, as previously claimed on social media. The clarification came from Uzbekistan’s Digital Technologies Minister Sherzod Shermatov during a press conference on 12 February, addressing mounting speculation surrounding the scale of the breach. From 27 to 30 January, information systems of three government agencies in Uzbekistan were targeted by cyberattacks. The names of the agencies have not been disclosed. However, officials were firm in rejecting viral claims suggesting a large-scale national data leak. “There is no information that the personal data of 15 million citizens of Uzbekistan is being sold online. 60,000 pieces of data — that could be five or six pieces of data per person. We are not talking about 60,000 citizens,” the minister noted, adding that law enforcement agencies were examining the types of data involved. For global readers, the distinction matters. In cybersecurity reporting, raw data units are often confused with the number of affected individuals. A single record can include multiple data points such as a name, date of birth, address, or phone number. According to Shermatov, the 60,000 figure refers to individual data units, not the number of citizens impacted.
Also read: Sanctioned Spyware Vendor Used iOS Zero-Day Exploit Chain Against Egyptian Targets

Uzbekistan Cyberattack: What Actually Happened

The Uzbekistan cyberattack targeted three government information systems over a four-day period in late January. While the breach did result in unauthorized access to certain systems, the ministry emphasized that it was not a mass compromise of citizen accounts. “Of course, there was an attack. The hackers were skilled and sophisticated. They made attempts and succeeded in gaining access to a specific system. In a sense, this is even useful — an incident like this helps to further examine other systems and increase vigilance. Some data, in a certain amount, could indeed have been obtained from some systems,” Shermatov said. His remarks reveal a balanced acknowledgment: the attack was real, the threat actors were capable, and some data exposure did occur. At the same time, the scale appears significantly smaller than initially portrayed online. The ministry also stressed that a “personal data leak” does not mean citizens’ accounts were hacked or that full digital identities were compromised. Instead, limited personal details may have been accessed.

Rising Cyber Threats in Uzbekistan

The Uzbekistan cyberattack comes amid a sharp increase in attempted digital intrusions across the country. According to the ministry, more than 7 million cyber threats were prevented in 2024 through Uzbekistan’s cybersecurity infrastructure. In 2025, that number reportedly exceeded 107 million. Looking ahead, projections suggest that over 200 million cyberattacks could target Uzbekistan in 2026. These figures highlight a broader global trend: as countries accelerate digital transformation, they inevitably expand their attack surface. Emerging digital economies, in particular, often face intense pressure from transnational cybercriminal groups seeking to exploit gaps in infrastructure and rapid system expansion. Uzbekistan’s growing digital ecosystem — from e-government services to financial platforms — is becoming a more attractive target for global threat actors. The recent Uzbekistan cyberattack illustrates that no country, regardless of size, is immune.

Strengthening Security After the Breach

Following the breach, authorities blocked further unauthorized access attempts and reinforced technical safeguards. Additional protections were implemented within the Unified Identification System (OneID), Uzbekistan’s centralized digital identity platform. Under the updated measures, users must now personally authorize access to their data by banks, telecom operators, and other organizations. This shifts more control, and responsibility, directly to citizens. The ministry emphasized that even with partial personal data, fraudsters cannot fully act on behalf of a citizen without direct involvement. However, officials warned that attackers may attempt secondary scams using exposed details. For example, a fraudster could call a citizen, pose as a bank employee, cite known personal details, and claim that someone is applying for a loan in their name — requesting an SMS code to “cancel” the transaction. Such social engineering tactics remain one of the most effective tools for cybercriminals globally.

A Reality Check on Digital Risk

The Uzbekistan cyberattack highlights two critical lessons. First, misinformation can amplify panic faster than technical facts. Second, even limited data exposure carries real risk if exploited creatively. Shermatov’s comment that the incident can help “increase vigilance” reflects a pragmatic view shared by many cybersecurity professionals worldwide: breaches, while undesirable, often drive improvements in resilience. For Uzbekistan, the challenge now is sustaining public trust while hardening systems against a growing global cyber threats. For the rest of the world, the incident serves as a reminder that cybersecurity transparency — clear communication about scope and impact — is just as important as technical defense.
Received before yesterday

Bringing the "functionally extinct" American chestnut back from the dead

12 February 2026 at 14:00

Very few people alive today have seen the Appalachian forests as they existed a century ago. Even as state and national parks preserved ever more of the ecosystem, fungal pathogens from Asia nearly wiped out one of the dominant species of these forests, the American chestnut, killing an estimated 3 billion trees. While new saplings continue to sprout from the stumps of the former trees, the fungus persists, killing them before they can seed a new generation.

But thanks in part to trees planted in areas where the two fungi don't grow well, the American chestnut isn't extinct. And efforts to revive it in its native range have continued, despite the long generation times needed to breed resistant trees. In Thursday's issue of Science, researchers describe their efforts to apply modern genomic techniques and exhaustive testing to identify the best route to restoring chestnuts to their native range.

Multiple paths to restoration

While the American chestnut is functionally extinct—it's no longer a participant in the ecosystems it once dominated—it's most certainly not extinct. Two Asian fungi that have killed it off in its native range; one causes chestnut blight, while a less common pathogen causes a root rot disease. Both prefer warmer, humid environments and persist there because they can grow asymptomatically on distantly related trees, such as oaks. Still, chestnuts planted outside the species' original range—primarily in drier areas of western North America—have continued to thrive.

Read full article

Comments

© Teresa Lett

Illinois Man Charged in Massive Snapchat Hacking Scheme Targeting Hundreds of Women

9 February 2026 at 01:10

Snapchat hacking investigation

The Snapchat hacking investigation involving an Illinois man accused of stealing and selling private images of hundreds of women is not just another cybercrime case, it is a reminder of how easily social engineering can be weaponized against trust, privacy, and young digital users. Federal prosecutors say the case exposes a disturbing intersection of identity theft, online exploitation, and misuse of social media platforms that continues to grow largely unchecked. Kyle Svara, a 26-year-old from Oswego, Illinois, has been charged in federal court in Boston for his role in a wide-scale Snapchat account hacking scheme that targeted nearly 600 women. According to court documents, Svara used phishing and impersonation tactics to steal Snapchat access codes, gain unauthorized account access, and extract nude or semi-nude images that were later sold or traded online.

Snapchat Hacking Investigation Reveals Scale of Phishing Abuse

At the core of the Snapchat hacking investigation is a textbook example of social engineering. Between May 2020 and February 2021, Svara allegedly gathered emails, phone numbers, and Snapchat usernames using online tools and research techniques. He then deliberately triggered Snapchat’s security system to send one-time access codes to victims. Using anonymized phone numbers, Svara allegedly impersonated a Snap Inc. representative and texted more than 4,500 women, asking them to share their security codes. About 570 women reportedly complied—handing over access to their accounts without realizing they were being manipulated. Once inside, prosecutors say Svara accessed at least 59 Snapchat accounts and downloaded private images. These images were allegedly kept, sold, or exchanged on online forums. The investigation found that Svara openly advertised his services on platforms such as Reddit, offering to “get into girls’ snap accounts” for a fee or trade.

Snapchat Hacking for Hire

What makes this Snapchat hacking case especially troubling is that it was not driven solely by curiosity or personal motives. Investigators allege that Svara operated as a hacking-for-hire service. One of his co-conspirators was Steve Waithe, a former Northeastern University track and field coach, who allegedly paid Svara to hack Snapchat accounts of women he coached or knew personally. Waithe was convicted in November 2023 on multiple counts, including wire fraud and cyberstalking, and sentenced to five years in prison. The link between authority figures and hired cybercriminals adds a deeply unsettling dimension to the case, one that highlights how power dynamics can be exploited through digital tools. Beyond hired jobs, Svara also allegedly targeted women in and around Plainfield, Illinois, as well as students at Colby College in Maine, suggesting a pattern of opportunistic and localized targeting.

Why the Snapchat Hacking Investigation Matters

This Snapchat hacking investigation features a critical cybersecurity truth: technical defenses mean little when human trust is exploited. The victims did not lose access because Snapchat’s systems failed; they were deceived into handing over the keys themselves. It also raises serious questions about accountability on social platforms. While Snapchat provides security warnings and access codes, impersonation attacks continue to succeed at scale. The ease with which attackers can pose as platform representatives points to a larger problem of user awareness and platform-level safeguards. The case echoes other recent investigations, including the indictment of a former University of Michigan football coach accused of hacking thousands of athlete accounts to obtain private images. Together, these cases reveal a troubling pattern—female student athletes being specifically researched, targeted, and exploited.

Legal Consequences

Svara faces charges including aggravated identity theft, wire fraud, computer fraud, conspiracy, and false statements related to child pornography. If convicted, he could face decades in prison, with a cumulative maximum sentence of 32 years. His sentencing is scheduled for May 18. Federal authorities have urged anyone who believes they may be affected by this Snapchat hacking scheme to come forward. More than anything, this case serves as a warning. The tools used were not sophisticated exploits or zero-day vulnerabilities—they were lies, impersonation, and manipulation. As this Snapchat hacking investigation shows, the most dangerous cyber threats today often rely on human error, not broken technology.

ShinyHunters Leads Surge in Vishing Attacks to Steal SaaS Data

2 February 2026 at 11:39
credentials EUAC CUI classified secrets SMB

Several threat clusters are using vishing in extortion campaigns that include tactics that are consistent with those used by high-profile threat group ShinyHunters. They are stealing SSO and MFA credentials to access companies' environments and steal data from cloud applications, according to Mandiant researchers.

The post ShinyHunters Leads Surge in Vishing Attacks to Steal SaaS Data appeared first on Security Boulevard.

Moltbot Personal Assistant Goes Viral—And So Do Your Secrets

29 January 2026 at 11:56

Early 2026, Moltbot a new AI personal assistant went viral. GitGuardian detected 200+ leaked secrets related to it, including from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask "is this safe?"

The post Moltbot Personal Assistant Goes Viral—And So Do Your Secrets appeared first on Security Boulevard.

Custom machine kept man alive without lungs for 48 hours

29 January 2026 at 12:26

Humans can’t live without lungs. And yet for 48 hours, in a surgical suite at Northwestern University, a 33-year-old man lived with an empty cavity in his chest where his lungs used to be. He was kept alive by a custom-engineered artificial device that represented a desperate last-ditch effort by his doctors. The custom hardware solved a physiological puzzle that has made bilateral pneumonectomy, the removal of both lungs, extremely risky before now.

The artificial lung system was built by the team of Ankit Bharat, a surgeon and researcher at Northwestern. It successfully kept a critically ill patient alive long enough to enable a double lung transplant, temporarily replacing his entire pulmonary system with a synthetic surrogate. The system creates a blueprint for saving people previously considered beyond hope by transplant teams.

Melting lungs

The patient, a once-healthy 33-year-old, arrived at the hospital with Influenza B complicated by a secondary, severe infection of Pseudomonas aeruginosa, a bacterium that in this case proved resistant even to carbapenems—our antibiotics of last resort. This combination of infections triggered acute respiratory distress syndrome (ARDS), a condition where the lungs become so inflamed and fluid-filled that oxygen can no longer reach the blood.

Read full article

Comments

© Yuichiro Chino

Peter H. Duesberg, 89, Renowned Biologist Turned H.I.V. Denialist, Dies

27 January 2026 at 17:33
His pioneering work on the origins of cancer was later overshadowed by his contrarian views, notably his rejection of the established theory that H.I.V. causes AIDS.

© Roger Ressmeyer/Corbis — VCG, via Getty Images

Peter H. Duesberg in 1985, holding a tray of petri dishes containing cultured cancer cells. In the late 1960s, he discovered the first known cancer-causing gene, or oncogene.

Stratospheric internet could finally start taking off this year

27 January 2026 at 09:52

Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery. 

Even with nearly 10,000 active Starlink satellites in orbit and the OneWeb constellation of 650 satellites, solid internet coverage is not a given across vast swathes of the planet. 

One of the most prominent efforts to plug the connectivity gap was Google X’s Loon project. Launched in 2011, it aimed to deliver access using high-altitude balloons stationed above predetermined spots on Earth. But the project faced literal headwinds—the Loons kept drifting away and new ones had to be released constantly, making the venture economically unfeasible. 

Although Google shuttered the high-profile Loon in 2021, work on other kinds of high-altitude platform stations (HAPS) has continued behind the scenes. Now, several companies claim they have solved Loon’s problems with different designs—in particular, steerable airships and fixed-wing UAVs (unmanned aerial vehicles)—and are getting ready to prove the tech’s internet beaming potential starting this year, in tests above Japan and Indonesia.

Regulators, too, seem to be thinking seriously about HAPS. In mid-December, for example, the US Federal Aviation Administration released a 50-page document outlining how large numbers of HAPS could be integrated into American airspace. According to the US Census Bureau’s 2024 American Community Survey (ACS) data, some 8 million US households (4.5% of the population) still live completely offline, and HAPS proponents think the technology might get them connected more cheaply than alternatives.

Despite the optimism of the companies involved, though, some analysts remain cautious.

“The HAPS market has been really slow and challenging to develop,” says Dallas Kasaboski, a space industry analyst at the consultancy Analysis Mason. After all, Kasaboski says, the approach has struggled before: “A few companies were very interested in it, very ambitious about it, and then it just didn’t happen.”

Beaming down connections

Hovering in the thin air at altitudes above 12 miles, HAPS have a unique vantage point to beam down low-latency, high-speed connectivity directly to smartphone users in places too remote and too sparsely populated to justify the cost of laying fiber-optic cables or building ground-based cellular base stations.

“Mobile network operators have some commitment to provide coverage, but they frequently prefer to pay a fine than cover these remote areas,” says Pierre-Antoine Aubourg, chief technology officer of Aalto HAPS, a spinoff from the European aerospace manufacturer Airbus. “With HAPS, we make this remote connectivity case profitable.” 

Aalto HAPS has built a solar-powered UAV with a 25-meter wingspan that has conducted many long-duration test flights in recent years. In April 2025 the craft, called Zephyr, broke a HAPS record by staying afloat for 67 consecutive days. The first months of 2026 will be busy for the company, according to Aubourg; Zephyr will do a test run over southern Japan to trial connectivity delivery to residents of some of the country’s smallest and most poorly connected inhabited islands.

the Zephyr on the runway at sunrise
AALTO

Because of its unique geography, Japan is a perfect test bed for HAPS. Many of the country’s roughly 430 inhabited islands are remote, mountainous, and sparsely populated, making them too costly to connect with terrestrial cell towers. Aalto HAPS is partnering with Japan’s largest mobile network operators, NTT DOCOMO and the telecom satellite operator Space Compass, which want to use Zephyr as part of next-generation telecommunication infrastructure.

“Non-terrestrial networks have the potential to transform Japan’s communications ecosystem, addressing access to connectivity in hard-to-reach areas while supporting our country’s response to emergencies,” Shigehiro Hori, co-CEO of Space Compass, said in a statement

Zephyr, Aubourg explains, will function like another cell tower in the NTT DOCOMO network, only it will be located well above the planet instead of on its surface. It will beam high-speed 5G connectivity to smartphone users without the need for the specialized terminals that are usually required to receive satellite internet. “For the user on the ground, there is no difference when they switch from the terrestrial network to the HAPS network,” Aubourg says. “It’s exactly the same frequency and the same network.”

New Mexico–based Sceye, which has developed a solar-powered helium-filled airship, is also eyeing Japan for pre-commercial trials of its stratospheric connectivity service this year. The firm, which extensively tested its slick 65-meter-long vehicle in 2025, is working with the Japanese telecommunications giant SoftBank. Just like NTT DOCOMO, Softbank is betting on HAPS to take its networks to another level. 

Mikkel Frandsen, Sceye’s founder and CEO, says that his firm succeeded where Loon failed by betting on the advantages offered by the more controllable airship shape, intelligent avionics, and innovative batteries that can power an electric fan to keep the aircraft in place.

“Google’s Loon was groundbreaking, but they used a balloon form factor, and despite advanced algorithms—and the ability to change altitude to find desired wind directions and wind speeds—Loon’s system relied on favorable winds to stay over a target area, resulting in unpredictable station-seeking performance,” Frandsen says. “This required a large amount of balloons in the air to have relative certainty that one would stay over the area of operation, which was financially unviable.”

He adds that Sceye’s airship can “point into the wind” and more effectively maintain its position. 

“We have significant surface area, providing enough physical space to lift 250-plus kilograms and host solar panels and batteries,” he says, “allowing Sceye to maintain power through day-night cycles, and therefore staying over an area of operation while maintaining altitude.” 

The persistent digital divide

Satellite internet currently comes at a price tag that can be too high for people in developing countries, says Kasaboski. For example, Starlink subscriptions start at $10 per month in Africa, but millions of people in these regions are surviving on a mere $2 a day.

Frandsen and Aubourg both claim that HAPS can connect the world’s unconnected more cheaply. Because satellites in low Earth orbit circle the planet at very high speeds, they quickly disappear from a ground terminal’s view, meaning large quantities of those satellites are needed to provide continuous coverage. HAPS can hover, affording a constant view of a region, and more HAPS can be launched to meet higher demand.

“If you want to deliver connectivity with a low-Earth-orbit constellation into one place, you still need a complete constellation,” says Aubourg. “We can deliver connectivity with one aircraft to one location. And then we can tailor much more the size of the fleet according to the market coverage that we need.”

Starlink gets a lot of attention, but satellite internet has some major drawbacks, says Frandsen. A big one is that its bandwidth gets diluted once the number of users in an area grows. 

In a recent interview, Starlink cofounder Elon Musk compared the Starlink beams to a flashlight. Given the distance at which those satellites orbit the planet, the cone is wide, covering a large area. That’s okay when users are few and far between, but it can become a problem with higher densities of users.

For example, Ukrainian defense technologists have said that Starlink bandwidth can drop on the front line to a mere 10 megabits per second, compared with the peak offering of 220 Mbps when drones and ground robots are in heavy use. Users in Indonesia, which like Japan is an island nation, also began reporting problems with Starlink shortly after the service was introduced in the country in 2024. Again, bandwidth declined as the number of subscribers grew.

In fact, Frandsen says, Starlink’s performance is less than optimal once the number of users exceeds one person per square kilometer. And that can happen almost anywhere—even relatively isolated island communities can have hundreds or thousands of residents in a small area. “There is a relationship between the altitude and the population you can serve,” Frandsen says. “You can’t bring space closer to the surface of the planet. So the telco companies want to use the stratosphere so that they can get out to more rural populations than they could otherwise serve.” Starlink did not respond to our queries about these challenges. 

Cheaper and faster

Sceye and Aalto HAPS see their stratospheric vehicles as part of integrated telecom networks that include both terrestrial cell towers and satellites. But they’re far from the only game in town. 

World Mobile, a telecommunications company headquartered in London, thinks its hydrogen-powered high-altitude UAV can compete directly with satellite mega-constellations. The company acquired the HAPS developer Stratospheric Platforms last year. This year, it plans to flight-test an innovative phased array antenna, which it claims will be able to deliver bandwidth of 200 megabits per second (enough to enable ultra-HD video streaming to 500,000 users at the same time over an area of 15,000 square kilometers—equivalent to the coverage of more than 500 terrestrial cell towers, the company says). 

Last year, World Mobile also signed a partnership with the Indonesian telecom operator Protelindo to build a prototype Stratomast aircraft, with tests scheduled to begin in late 2027.

Richard Deakin, CEO of World Mobile’s HAPS division World Mobile Stratospheric, says that just nine Stratomasts could supply Scotland’s 5.5 million residents with high-speed internet connectivity at a cost of £40 million ($54 million) per year. That’s equivalent to about 60 pence (80 cents) per person per month, he says. Starlink subscriptions in the UK, of which Scotland is a part, come at £75 ($100) per month.

A troubled past 

Companies working on HAPS also extol the convenience of prompt deployments in areas struck by war or natural disasters like Hurricane Maria in Puerto Rico, after which Loon played an important role. And they say that HAPS could make it possible for smaller nations to obtain complete control over their celestial internet-beaming infrastructure rather than relying on mega-constellations controlled by larger nations, a major boon at a time of rising geopolitical tensions and crumbling political alliances. 

Analysts, however, remain cautious, projecting a HAPS market totaling a modest $1.9 billion by 2033. The satellite internet industry, on the other hand, is expected to be worth $33.44 billion by 2030, according to some estimates. 

The use of HAPS for internet delivery to remote locations has been explored since the 1990s, about as long as the concept of low-Earth-orbit mega-constellations. The seemingly more cost-effective stratospheric technology, however, lost to the space fleets thanks to the falling cost of space launches and ambitious investment by Musk’s SpaceX. 

Google wasn’t the only tech giant to explore the HAPS idea. Facebook also had a project, called Aquila, that was discontinued after it too faced technical difficulties. Although the current cohort of HAPS makers claim they have solved the challenges that killed their predecessors, Kasaboski warns that they’re playing a different game: catching up with now-established internet-beaming mega constellations. By the end of this year, it’ll be much clearer whether they stand a good chance of doing so.

China’s ‘Dr. Frankenstein’ Thinks Time Is on His Side

13 January 2026 at 18:20
He Jiankui spent three years in prison after creating gene-edited babies. Now back at work, he sees a greater opening for researchers who push boundaries.

© Chang W. Lee/The New York Times

He Jiankui, a researcher in gene editing, at his home in Beijing. He argues that his only crime was being ahead of his time in a world not yet ready for his vision.

The Scientists Making Antacids for the Sea to Help Counter Global Warming

11 January 2026 at 14:36
The world’s oceans are becoming dangerously acidic. A controversial proposal would raise the pH — by mixing chemicals into the water.

© Alexander Coggin for The New York Times

Adam Subhas of the Woods Hole Oceanographic Institution in Massachusetts.

How AI made scams more convincing in 2025

2 January 2026 at 05:16

This blog is part of a series where we highlight new or fast-evolving threats in consumer security. This one focuses on how AI is being used to design more realistic campaigns, accelerate social engineering, and how AI agents can be used to target individuals.

Most cybercriminals stick with what works. But once a new method proves effective, it spreads quickly—and new trends and types of campaigns follow.

In 2025, the rapid development of Artificial Intelligence (AI) and its use in cybercrime went hand in hand. In general, AI allows criminals to improve the scale, speed, and personalization of social engineering through realistic text, voice, and video. Victims face not only financial loss, but erosion of trust in digital communication and institutions.

Social engineering

Voice cloning

One of the main areas where AI improved was in the area of voice-cloning, which was immediately picked up by scammers. In the past, they would mostly stick to impersonating friends and relatives. In 2025, they went as far as impersonating senior US officials. The targets were predominantly current or former US federal or state government officials and their contacts.

In the course of these campaigns, cybercriminals used test messages as well as AI-generated voice messages. At the same time, they did not abandon the distressed-family angle. A woman in Florida was tricked into handing over thousands of dollars to a scammer after her daughter’s voice was AI-cloned and used in a scam.

AI agents

Agentic AI is the term used for individualized AI agents designed to carry out tasks autonomously. One such task could be to search for publicly available or stolen information about an individual and use that information to compose a very convincing phishing lure.

These agents could also be used to extort victims by matching stolen data with publicly known email addresses or social media accounts, composing messages and sustaining conversations with people who believe a human attacker has direct access to their Social Security number, physical address, credit card details, and more.

Another use we see frequently is AI-assisted vulnerability discovery. These tools are in use by both attackers and defenders. For example, Google uses a project called Big Sleep, which has found several vulnerabilities in the Chrome browser.

Social media

As mentioned in the section on AI agents, combining data posted on social media with data stolen during breaches is a common tactic. Such freely provided data is also a rich harvesting ground for romance scams, sextortion, and holiday scams.

Social media platforms are also widely used to peddle fake products, AI generated disinformation, dangerous goods,  and drop-shipped goods.

Prompt injection

And then there are the vulnerabilities in public AI platforms such as ChatGPT, Perplexity, Claude, and many others. Researchers and criminals alike are still exploring ways to bypass the safeguards intended to limit misuse.

Prompt injection is the general term for when someone inserts carefully crafted input, in the form of an ordinary conversation or data, to nudge or force an AI into doing something it wasn’t meant to do.

Malware campaigns

In some cases, attackers have used AI platforms to write and spread malware. Researchers have documented campaign where attackers leveraged Claude AI to automate the entire attack lifecycle, from initial system compromise through to ransom note generation, targeting sectors such as government, healthcare, and emergency services.

Since early 2024, OpenAI says it has disrupted more than 20 campaigns around the world that attempted to abuse its AI platform for criminal operations and deceptive campaigns.

Looking ahead

AI is amplifying the capabilities of both defenders and attackers. Security teams can use it to automate detection, spot patterns faster, and scale protection. Cybercriminals, meanwhile, are using it to sharpen social engineering, discover vulnerabilities more quickly, and build end-to-end campaigns with minimal effort.

Looking toward 2026, the biggest shift may not be technical but psychological. As AI-generated content becomes harder to distinguish from the real thing, verifying voices, messages, and identities will matter more than ever.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Beyond Compliance: How India’s DPDP Act Is Reshaping the Cyber Insurance Landscape

19 December 2025 at 00:38

DPDP Act Is Reshaping the Cyber Insurance Landscape

By Gauravdeep Singh, Head – State e-Mission Team (SeMT), Ministry of Electronics and Information Technology The Digital Personal Data Protection (DPDP) Act has fundamentally altered the risk landscape for Indian organisations. Data breaches now trigger mandatory compliance obligations regardless of their origin, transforming incidents that were once purely operational concerns into regulatory events with significant financial and legal implications.

Case Study 1: Cloud Misconfiguration in a Consumer Platform

A prominent consumer-facing platform experienced a data exposure incident when a misconfigured storage bucket on its public cloud infrastructure inadvertently made customer data publicly accessible. While no malicious actor was involved, the incident still constituted a reportable data breach under the DPDP Act framework. The organisation faced several immediate obligations:
  • Notification to affected individuals within prescribed timelines
  • Formal reporting to the Data Protection Board
  • Comprehensive internal investigation and remediation measures
  • Potential penalties for failure to implement reasonable security safeguards as mandated under the Act
Such incidents highlight a critical gap in traditional risk management approaches. The financial exposure—encompassing regulatory penalties, legal costs, remediation expenses, and reputational damage—frequently exceeds conventional cyber insurance coverage limits, particularly when compliance failures are implicated.

Case Study 2: Ransomware Attack on Healthcare and EdTech Infrastructure

A mid-sized healthcare and education technology provider fell victim to a ransomware attack that encrypted sensitive personal records. Despite successful restoration from backup systems, the organisation confronted extensive regulatory and operational obligations:
  • Forensic assessment to determine whether data confidentiality was compromised
  • Mandatory notification to regulatory authorities and affected data principals
  • Ongoing legal and compliance proceedings
The total cost extended far beyond any ransom demand. Forensic investigations, legal advisory services, public communications, regulatory compliance activities, and operational disruption collectively created substantial financial strain, costs that would have been mitigated with appropriate insurance coverage.

Case Study 3: AI-Enabled Fraud and Social Engineering

The emergence of AI-driven attack vectors has introduced new dimensions of cyber risk. Deepfake technology and sophisticated phishing campaigns now enable threat actors to impersonate senior leadership with unprecedented authenticity, compelling finance teams to authorise fraudulent fund transfers or inappropriate data disclosures. These attacks often circumvent traditional technical security controls because they exploit human trust rather than system vulnerabilities. As a result, organisations are increasingly seeking insurance coverage for social engineering and cyber fraud events, particularly those involving personal data or financial information, that fall outside conventional cybersecurity threat models.

The Evolution of Cyber Insurance in India

India DPDP Act The Indian cyber insurance market is undergoing significant transformation in response to the DPDP Act and evolving threat landscape. Modern policies now extend beyond traditional hacking incidents to address:
  • Data breaches resulting from human error or operational failures
  • Third-party vendor and SaaS provider security failures
  • Cloud service disruptions and availability incidents
  • Regulatory investigation costs and legal defense expenses
  • Incident response, crisis management, and public relations support
Organisations are reassessing their coverage adequacy as they recognise that historical policy limits of Rs. 10–20 crore may prove insufficient when regulatory penalties, legal costs, business interruption losses, and remediation expenses are aggregated under the DPDP compliance framework.

The SME and MSME Vulnerability

Small and medium enterprises represent the most vulnerable segment of the market. While many SMEs and MSMEs regularly process personal data, they frequently lack:
  • Mature information security controls and governance frameworks
  • Dedicated compliance and data protection teams
  • Financial reserves to absorb penalties, legal costs, or operational disruption
For organisations in this segment, even a relatively minor cyber incident can trigger prolonged operational shutdowns or, in severe cases, permanent closure. Despite this heightened vulnerability, cyber insurance adoption among SMEs remains disproportionately low, driven primarily by awareness gaps and perceived cost barriers.

Implications for the Cyber Insurance Ecosystem

The Indian cyber insurance market is entering a period of accelerated growth and structural evolution. Several key trends are emerging:
  • Higher policy limits becoming standard practice across industries
  • Enhanced underwriting processes emphasising compliance readiness and data governance maturity
  • Comprehensive coverage integrating legal advisory, forensic investigation, and regulatory support
  • Risk-based pricing models that reward robust data protection practices
Looking ahead, cyber insurance will increasingly be evaluated not merely as a risk-transfer mechanism, but as an indicator of an organisation's overall data protection posture and regulatory preparedness.

DPDP Act and the End of Optional Cyber Insurance

The DPDP Act has fundamentally redefined cyber risk in the Indian context. Data breaches are no longer isolated IT failures; they are regulatory events carrying substantial financial, legal, and reputational consequences. In this environment, cyber insurance is transitioning from a discretionary safeguard to a strategic imperative. Organisations that integrate cyber insurance into a comprehensive data governance and enterprise risk management strategy will be better positioned to navigate the evolving regulatory landscape. Conversely, those that remain uninsured or underinsured may discover that the cost of inadequate preparation far exceeds the investment required for robust protection. (This article reflects the author’s analysis and personal viewpoints and is intended for informational purposes only. It should not be construed as legal or regulatory advice.)

Someone Boarded a Plane at Heathrow Without a Ticket or Passport

18 December 2025 at 11:41

I’m sure there’s a story here:

Sources say the man had tailgated his way through to security screening and passed security, meaning he was not detected carrying any banned items.

The man deceived the BA check-in agent by posing as a family member who had their passports and boarding passes inspected in the usual way.

Quantum navigation could solve the military’s GPS jamming problem

16 December 2025 at 05:00

In late September, a Spanish military plane carrying the country’s defense minister to a base in Lithuania was reportedly the subject of a kind of attack—not by a rocket or anti-aircraft rounds, but by radio transmissions that jammed its GPS system. 

The flight landed safely, but it was one of thousands that have been affected by a far-reaching Russian campaign of GPS interference since the 2022 invasion of Ukraine. The growing inconvenience to air traffic and risk of a real disaster have highlighted the vulnerability of GPS and focused attention on more secure ways for planes to navigate the gauntlet of jamming and spoofing, the term for tricking a GPS receiver into thinking it’s somewhere else. 

US military contractors are rolling out new GPS satellites that use stronger, cleverer signals, and engineers are working on providing better navigation information based on other sources, like cellular transmissions and visual data. 

But another approach that’s emerging from labs is quantum navigation: exploiting the quantum nature of light and atoms to build ultra-sensitive sensors that can allow vehicles to navigate independently, without depending on satellites. As GPS interference becomes more of a problem, research on quantum navigation is leaping ahead, with many researchers and companies now rushing to test new devices and techniques. In recent months, the US’s Defense Advanced Research Projects Agency (DARPA) and its Defense Innovation Unit have announced new grants to test the technology on military vehicles and prepare for operational deployment. 

Tracking changes

Perhaps the most obvious way to navigate is to know where you started and then track where you go by recording the speed, direction, and duration of travel. But while this approach, known in the field as inertial navigation, is conceptually simple, it’s difficult to do well; tiny uncertainties in any of those measurements compound over time and lead to big errors later on. Douglas Paul, the principal investigator of the UK’s Hub for Quantum Enabled Precision, Navigation & Timing (QEPNT), says that existing specialized inertial-navigation devices might be off by 20 kilometers after 100 hours of travel. Meanwhile, the cheap sensors commonly used in smartphones produce more than twice that level of uncertainty after just one hour. 

“If you’re guiding a missile that flies for one minute, that might be good enough,” he says. “If you’re in an airliner, that’s definitely not good enough.” 

A more accurate version of inertial navigation instead uses sensors that rely on the quantum behavior of subatomic particles to more accurately measure acceleration, direction, and time.

Several companies, like the US-based Infleqtion, are developing quantum gyroscopes, which track a vehicle’s bearing, and quantum accelerometers, which can reveal how far it’s traveled. Infleqtion’s sensors are based on a technique called atom interferometry: A beam of rubidium atoms is zapped with precise laser pulses, which split the atoms into two separate paths. Later, other laser pulses recombine the atoms, and they’re measured with a detector. If the vehicle has turned or accelerated while the atoms are in motion, the two paths will be slightly out of phase in a way the detector can interpret. 

Last year the company trialed these inertial sensors on a customized plane flying at a British military testing site. In October of this year, Infleqtion ran its first real-world test of a new generation of inertial sensors that use a steady stream of atoms instead of pulses, allowing for continuous navigation and avoiding long dead times.

Infleqtion's atomic clock named Tiqker.
A view of Infleqtion’s atomic clock Tiqker.
COURTESY INFLEQTION

Infleqtion also has an atomic clock, called Tiqker, that can help determine how far a vehicle has traveled. It is a kind of optical clock that uses infrared lasers tuned to a specific frequency to excite electrons in rubidium, which then release photons at a consistent, known rate. The device “will lose one second every 2 million years or so,” says Max Perez, who oversees the project, and it fits in a standard electronics equipment rack. It has passed tests on flights in the UK, on US Army ground vehicles in New Mexico, and, in late October, on a drone submarine

“Tiqker operated happily through these conditions, which is unheard-of for previous generations of optical clocks,” says Perez. Eventually the company hopes to make the unit smaller and more rugged by switching to lasers generated by microchips. 

Magnetic fields

Vehicles deprived of satellite-based navigation are not entirely on their own; they can get useful clues from magnetic and gravitational fields that surround the planet. These fields vary slightly depending on the location, and the variations, or anomalies, are recorded in various maps. By precisely measuring the local magnetic or gravitational field and comparing those values with anomaly maps, quantum navigation systems can track the location of a vehicle. 

Allison Kealy, a navigation researcher at Swinburne University in Australia, is working on the hardware needed for this approach. Her team uses a material called nitrogen-vacancy diamond. In NV diamonds, one carbon atom in the lattice is replaced with a nitrogen atom, and one neighboring carbon atom is removed entirely. The quantum state of the electrons at the NV defect is very sensitive to magnetic fields. Carefully stimulating the electrons and watching the light they emit offers a way to precisely measure the strength of the field at the diamond’s location, making it possible to infer where it’s situated on the globe. 

Kealy says these quantum magnetometers have a few big advantages over traditional ones, including the fact that they measure the direction of the Earth’s magnetic field in addition to its strength. That additional information could make it easier to determine location. 

The technology is far from commercial deployment, but Kealy and several colleagues successfully tested their magnetometer in a set of flights in Australia late last year, and they plan to run more trials this year and next. “This is where it gets exciting, as we transition from theoretical models and controlled experiments to on-the-ground, operational systems,” she says. “This is a major step forward.” 

Delicate systems

Other teams, like Q-CTRL, an Australian quantum technology company, are focusing on using software to build robust systems from noisy quantum sensors. Quantum navigation involves taking those delicate sensors, honed in the placid conditions of a laboratory, and putting them in vehicles that make sharp turns, bounce with turbulence, and bob with waves, all of which interferes with the sensors’ functioning. Even the vehicles themselves present problems for magnetometers, especially “the fact that the airplane is made of metal, with all this wiring,” says Michael Biercuk, the CEO of Q-CTRL. “Usually there’s 100 to 1,000 times more noise than signal.” 

After Q-CTRL engineers ran trials of their magnetic navigation system in a specially outfitted Cessna last year, they used machine learning to go through the data and try to sift out the signal from all the noise. Eventually they found they could track the plane’s location up to 94 times as accurately as a strategic-grade conventional inertial navigation system could, according to Biercuk. They announced their findings in a non-peer-reviewed paper last spring. 

In August Q-CTRL received two contracts from DARPA to develop its “software-ruggedized” mag-nav product, named Ironstone Opal, for defense applications. The company is also testing the technology with commercial partners, including the defense contractors Northrop Grumman and Lockheed Martin and Airbus, an aerospace manufacturer. 

Infleqtion's atomic clock named Tiqker.
An illustration showing the placement of Q-CTRL’s Ironstone Opal in a drone.
COURTESY Q-CTRL

“Northrop Grumman is working with Q-CTRL to develop a magnetic navigation system that can withstand the physical demands of the real world,” says Michael S. Larsen, a quantum systems architect at the company. “Technology like magnetic navigation and other quantum sensors will unlock capabilities to provide guidance even in GPS-denied or -degraded environments.”

Now Q-CTRL is working on putting Ironstone Opal into a smaller, more rugged container appropriate for deployment; “Ironstone Opal’s first deployment was, and looked like, a science experiment,” says Biercuk. He anticipates delivering the first commercial units next year. 

Sensor fusion

Even as quantum navigation emerges as a legitimate alternative to satellite-based navigation, the satellites themselves are improving. Modern GPS III satellites include new civilian signals called L1C and L5, which should be more accurate and harder to jam and spoof than current signals. Both are scheduled to be fully operational later this decade. 

US and allied military users are intended to have access to far hardier GPS tools, including M-code, a new form of GPS signal that is rolling out now, and Regional Military Protection, a focused GPS beam that will be restricted to small geographic areas. The latter will start to become available when the GPS IIIF generation of satellites is in orbit, with the first scheduled to go up in 2027. A Lockheed Martin spokesperson says new GPS satellites with M-code are eight times as powerful as previous ones, while the GPS IIIF model will be 60 times as strong.

Other plans involve using navigation satellites in low Earth orbit—the zone inhabited by SpaceX’s internet-providing Starlink constellation—rather than the medium Earth orbit used by GPS. Since objects in LEO are closer to Earth, their signals are stronger, which makes them harder to jam and spoof. LEO satellites also transit the sky more quickly, which makes them harder still to spoof and helps GPS receivers get a lock on their position faster. “This really helps for signal convergence,” says Lotfi Massarweh, a satellite navigation researcher at Delft University of Technology, in the Netherlands. “They can get a good position in just a few minutes. So that is a huge leap.”

Ultimately, says Massarweh, navigation will depend not only on satellites, quantum sensors, or any other single technology, but on the combination of all of them. “You need to think always in terms of sensor fusion,” he says. 

The navigation resources that a vehicle draws on will change according to its environment—whether it’s an airliner, a submarine, or an autonomous car in an urban canyon. But quantum navigation will be one important resource. He says, “If quantum technology really delivers what we see in the literature—if it’s stable over one week rather than tens of minutes—at that point it is a complete game changer.”

This story was updated to better reflect the current status of Ironstone Opal.

How one controversial startup hopes to cool the planet

10 December 2025 at 05:00

Stardust Solutions believes that it can solve climate change—for a price.

The Israel-based geoengineering startup has said it expects  nations will soon pay it more than a billion dollars a year to launch specially equipped aircraft into the stratosphere. Once they’ve reached the necessary altitude, those planes will disperse particles engineered to reflect away enough sunlight to cool down the planet, purportedly without causing environmental side effects. 

The proprietary (and still secret) particles could counteract all the greenhouse gases the world has emitted over the last 150 years, the company stated in a 2023 pitch deck it presented to venture capital firms. In fact, it’s the “only technologically feasible solution” to climate change, the company said.

The company disclosed it raised $60 million in funding in October, marking by far the largest known funding round to date for a startup working on solar geoengineering.

Stardust is, in a sense, the embodiment of Silicon Valley’s simmering frustration with the pace of academic research on the technology. It’s a multimillion-dollar bet that a startup mindset can advance research and development that has crept along amid scientific caution and public queasiness.

But numerous researchers focused on solar geoengineering are deeply skeptical that Stardust will line up the government customers it would need to carry out a global deployment as early as 2035, the plan described in its earlier investor materials—and aghast at the suggestion that it ever expected to move that fast. They’re also highly critical of the idea that a company would take on the high-stakes task of setting the global temperature, rather than leaving it to publicly funded research programs.

“They’ve ignored every recommendation from everyone and think they can turn a profit in this field,” says Douglas MacMartin, an associate professor at Cornell University who studies solar geoengineering. “I think it’s going to backfire. Their investors are going to be dumping their money down the drain, and it will set back the field.”

The company has finally emerged from stealth mode after completing its funding round, and its CEO, Yanai Yedvab, agreed to conduct one of the company’s first extensive interviews with MIT Technology Review for this story.

Yedvab walked back those ambitious projections a little, stressing that the actual timing of any stratospheric experiments, demonstrations, or deployments will be determined by when governments decide it’s appropriate to carry them out. Stardust has stated clearly that it will move ahead with solar geoengineering only if nations pay it to proceed, and only once there are established rules and bodies guiding the use of the technology.

That decision, he says, will likely be dictated by how bad climate change becomes in the coming years.

“It could be a situation where we are at the place we are now, which is definitely not great,” he says. “But it could be much worse. We’re saying we’d better be ready.”

“It’s not for us to decide, and I’ll say humbly, it’s not for these researchers to decide,” he adds. “It’s the sense of urgency that will dictate how this will evolve.”

The building blocks

No one is questioning the scientific credentials of Stardust. The company was founded in 2023 by a trio of prominent researchers, including Yedvab, who served as deputy chief scientist at the Israeli Atomic Energy Commission. The company’s lead scientist, Eli Waxman, is the head of the department of particle physics and astrophysics at the Weizmann Institute of Science. Amyad Spector, the chief product officer, was previously a nuclear physicist at Israel’s secretive Negev Nuclear Research Center.

Stardust CEO Yanai Yedvab (right) and Chief Product Officer Amyad Spector (left) at the company’s facility in Israel.
ROBY YAHAV, STARDUST

Stardust says it employs 25 scientists, engineers, and academics. The company is based in Ness Ziona, Israel, and plans to open a US headquarters soon. 

Yedvab says the motivation for starting Stardust was simply to help develop an effective means of addressing climate change. 

“Maybe something in our experience, in the tool set that we bring, can help us in contributing to solving one of the greatest problems humanity faces,” he says.

Lowercarbon Capital, the climate-tech-focused investment firm  cofounded by the prominent tech investor Chris Sacca, led the $60 million investment round. Future Positive, Future Ventures, and Never Lift Ventures, among others, participated as well.

AWZ Ventures, a firm focused on security and intelligence technologies, co-led the company’s earlier seed round, which totaled $15 million.

Yedvab says the company will use that money to advance research, development, and testing for the three components of its system, which are also described in the pitch deck: safe particles that could be affordably manufactured; aircraft dispersion systems; and a means of tracking particles and monitoring their effects.

“Essentially, the idea is to develop all these building blocks and to upgrade them to a level that will allow us to give governments the tool set and all the required information to make decisions about whether and how to deploy this solution,” he says. 

The company is, in many ways, the opposite of Make Sunsets, the first company that came along offering to send particles into the stratosphere—for a fee—by pumping sulfur dioxide into weather balloons and hand-releasing them into the sky. Many researchers viewed it as a provocative, unscientific, and irresponsible exercise in attention-gathering. 

But Stardust is serious, and now it’s raised serious money from serious people—all of which raises the stakes for the solar geoengineering field and, some fear, increases the odds that the world will eventually put the technology to use.

“That marks a turning point in that these types of actors are not only possible, but are real,” says Shuchi Talati, executive director of the Alliance for Just Deliberation on Solar Geoengineering, a nonprofit that strives to ensure that developing nations are included in the global debate over such climate interventions. “We’re in a more dangerous era now.”

Many scientists studying solar geoengineering argue strongly that universities, governments, and transparent nonprofits should lead the work in the field, given the potential dangers and deep public concerns surrounding a tool with the power to alter the climate of the planet. 

It’s essential to carry out the research with appropriate oversight, explore the potential downsides of these approaches, and publicly publish the results “to ensure there’s no bias in the findings and no ulterior motives in pushing one way or another on deployment or not,” MacMartin says. “[It] shouldn’t be foisted upon people without proper and adequate information.”

He criticized, for instance, the company’s claims to have developed what he described as their “magic aerosol particle,” arguing that the assertion that it is perfectly safe and inert can’t be trusted without published findings. Other scientists have also disputed those scientific claims.

Plenty of other academics say solar geoengineering shouldn’t be studied at all, fearing that merely investigating it starts the world down a slippery slope toward its use and diminishes the pressures to cut greenhouse-gas emissions. In 2022, hundreds of them signed an open letter calling for a global ban on the development and use of the technology, adding the concern that there is no conceivable way for the world’s nations to pull together to establish rules or make collective decisions ensuring that it would be used in “a fair, inclusive, and effective manner.”

“Solar geoengineering is not necessary,” the authors wrote. “Neither is it desirable, ethical, or politically governable in the current context.”

The for-profit decision 

Stardust says it’s important to pursue the possibility of solar geoengineering because the dangers of climate change are accelerating faster than the world’s ability to respond to it, requiring a new “class of solution … that buys us time and protects us from overheating.”

Yedvab says he and his colleagues thought hard about the right structure for the organization, finally deciding that for-profits working in parallel with academic researchers have delivered “most of the groundbreaking technologies” in recent decades. He cited advances in genome sequencing, space exploration, and drug development, as well as the restoration of the ozone layer.

He added that a for-profit structure was also required to raise funds and attract the necessary talent.

“There is no way we could, unfortunately, raise even a small portion of this amount by philanthropic resources or grants these days,” he says.

He adds that while academics have conducted lots of basic science in solar geoengineering, they’ve done very little in terms of building the technological capacities. Their geoengineering research is also primarily focused on the potential use of sulfur dioxide, because it is known to help reduce global temperatures after volcanic eruptions blast massive amounts of it into the stratospheric. But it has well-documented downsides as well, including harm to the protective ozone layer.

“It seems natural that we need better options, and this is why we started Stardust: to develop this safe, practical, and responsible solution,” the company said in a follow-up email. “Eventually, policymakers will need to evaluate and compare these options, and we’re confident that our option will be superior over sulfuric acid primarily in terms of safety and practicability.”

Public trust can be won not by excluding private companies, but by setting up regulations and organizations to oversee this space, much as the US Food and Drug Administration does for pharmaceuticals, Yedvab says.

“There is no way this field could move forward if you don’t have this governance framework, if you don’t have external validation, if you don’t have clear regulation,” he says.

Meanwhile, the company says it intends to operate transparently, pledging to publish its findings whether they’re favorable or not.

That will include finally revealing details about the particles it has developed, Yedvab says. 

Early next year, the company and its collaborators will begin publishing data or evidence “substantiating all the claims and disclosing all the information,” he says, “so that everyone in the scientific community can actually check whether we checked all these boxes.”

In the follow-up email, the company acknowledged that solar geoengineering isn’t a “silver bullet” but said it is “the only tool that will enable us to cool the planet in the short term, as part of a larger arsenal of technologies.”

“The only way governments could be in a position to consider [solar geoengineering] is if the work has been done to research, de-risk, and engineer safe and responsible solutions—which is what we see as our role,” the company added later. “We are hopeful that research will continue not just from us, but also from academic institutions, nonprofits, and other responsible companies that may emerge in the future.”

Ambitious projections

Stardust’s earlier pitch deck stated that the company expected to conduct its first “stratospheric aerial experiments” last year, though those did not move ahead (more on that in a moment).

On another slide, the company said it expected to carry out a “large-scale demonstration” around 2030 and proceed to a “global full-scale deployment” by about 2035. It said it expected to bring in roughly $200 million and $1.5 billion in annual revenue by those periods, respectively.

Every researcher interviewed for this story was adamant that such a deployment should not happen so quickly.

Given the global but uneven and unpredictable impacts of solar geoengineering, any decision to use the technology should be reached through an inclusive, global agreement, not through the unilateral decisions of individual nations, Talati argues. 

“We won’t have any sort of international agreement by that point given where we are right now,” she says.

A global agreement, to be clear, is a big step beyond setting up rules and oversight bodies—and some believe that such an agreement on a technology so divisive could never be achieved.

There’s also still a vast amount of research that must be done to better understand the negative side effects of solar geoengineering generally and any ecological impacts of Stardust’s materials specifically, adds Holly Buck, an associate professor at the University of Buffalo and author of After Geoengineering.

“It is irresponsible to talk about deploying stratospheric aerosol injection without fundamental research about its impacts,” Buck wrote in an email.

She says the timelines are also “unrealistic” because there are profound public concerns about the technology. Her polling work found that a significant fraction of the US public opposes even research (though polling varies widely). 

Meanwhile, most academic efforts to move ahead with even small-scale outdoor experiments have sparked fierce backlash. That includes the years-long effort by researchers then at Harvard to carry out a basic equipment test for their so-called ScopeX experiment. The high-altitude balloon would have launched from a flight center in Sweden, but the test was ultimately scratched amid objections from environmentalists and Indigenous groups. 

Given this baseline of public distrust, Stardust’s for-profit proposals only threaten to further inflame public fears, Buck says.

“I find the whole proposal incredibly socially naive,” she says. “We actually could use serious research in this field, but proposals like this diminish the chances of that happening.”

Those public fears, which cross the political divide, also mean politicians will see little to no political upside to paying Stardust to move ahead, MacMartin says.

“If you don’t have the constituency for research, it seems implausible to me that you’d turn around and give money to an Israeli company to deploy it,” he says.

An added risk is that if one nation or a small coalition forges ahead without broader agreement, it could provoke geopolitical conflicts. 

“What if Russia wants it a couple of degrees warmer, and India a couple of degrees cooler?” asked Alan Robock, a professor at Rutgers University, in the Bulletin of the Atomic Scientists in 2008. “Should global climate be reset to preindustrial temperature or kept constant at today’s reading? Would it be possible to tailor the climate of each region of the planet independently without affecting the others? If we proceed with geoengineering, will we provoke future climate wars?”

Revised plans

Yedvab says the pitch deck reflected Stardust’s strategy at a “very early stage in our work,” adding that their thinking has “evolved,” partly in response to consultations with experts in the field.

He says that the company will have the technological capacity to move ahead with demonstrations and deployments on the timelines it laid out but adds, “That’s a necessary but not sufficient condition.”

“Governments will need to decide where they want to take it, if at all,” he says. “It could be a case that they will say ‘We want to move forward.’ It could be a case that they will say ‘We want to wait a few years.’”

“It’s for them to make these decisions,” he says.

Yedvab acknowledges that the company has conducted flights in the lower atmosphere to test its monitoring system, using white smoke as a simulant for its particles, as the Wall Street Journal reported last year. It’s also done indoor tests of the dispersion system and its particles in a wind tunnel set up within its facility.

But in response to criticisms like the ones above, Yedvab says the company hasn’t conducted outdoor particle experiments and won’t move forward with them until it has approval from governments. 

“Eventually, there will be a need to conduct outdoor testing,” he says. “There is no way you can validate any solution without outdoor testing.” But such testing of sunlight reflection technology, he says, “should be done only working together with government and under these supervisions.”

Generating returns  

Stardust may be willing to wait for governments to be ready to deploy its system, but there’s no guarantee that its investors will have the same patience. In accepting tens of millions in venture capital, Stardust may now face financial pressures that could “drive the timelines,” says Gernot Wagner, a climate economist at Columbia University. 

And that raises a different set of concerns.

Obliged to deliver returns, the company might feel it must strive to convince government leaders that they should pay for its services, Talati says. 

“The whole point of having companies and investors is you want your thing to be used,” she says. “There’s a massive incentive to lobby countries to use it, and that’s the whole danger of having for-profit companies here.”

She argues those financial incentives threaten to accelerate the use of solar geoengineering ahead of broader international agreements and elevate business interests above the broader public good.

Stardust has “quietly begun lobbying on Capitol Hill” and has hired the law firm Holland & Knight, according to Politico.

It has also worked with Red Duke Strategies, a consulting firm based in McLean, Virginia, to develop “strategic relationships and communications that promote understanding and enable scientific testing,” according to a case study on the company’s  website. 

“The company needed to secure both buy-in and support from the United States government and other influential stakeholders to move forward,” Red Duke states. “This effort demanded a well-connected and authoritative partner who could introduce Stardust to a group of experts able to research, validate, deploy, and regulate its SRM technology.”

Red Duke didn’t respond to an inquiry from MIT Technology Review. Stardust says its work with the consulting firm was not a government lobbying effort.

Yedvab acknowledges that the company is meeting with government leaders in the US, Europe, its own region, and the Global South. But he stresses that it’s not asking any country to contribute funding or to sign off on deployments at this stage. Instead, it’s making the case for nations to begin crafting policies to regulate solar geoengineering.

“When we speak to policymakers—and we speak to policymakers; we don’t hide it—essentially, what we tell them is ‘Listen, there is a solution,’” he says. “‘It’s not decades away—it’s a few years away. And it’s your role as policymakers to set the rules of this field.’”

“Any solution needs checks and balances,” he says. “This is how we see the checks and balances.”

He says the best-case scenario is still a rollout of clean energy technologies that accelerates rapidly enough to drive down emissions and curb climate change.

“We are perfectly fine with building an option that will sit on the shelf,” he says. “We’ll go and do something else. We have a great team and are confident that we can find also other problems to work with.”

He says the company’s investors are aware of and comfortable with that possibility, supportive of the principles that will guide Stardust’s work, and willing to wait for regulations and government contracts.

Lowercarbon Capital didn’t respond to an inquiry from MIT Technology Review.

‘Sentiment of hope’

Others have certainly imagined the alternative scenario Yedvab raises: that nations will increasingly support the idea of geoengineering in the face of mounting climate catastrophes. 

In Kim Stanley Robinson’s 2020 novel, The Ministry for the Future, India unilaterally forges ahead with solar geoengineering following a heat wave that kills millions of people. 

Wagner sketched a variation on that scenario in his 2021 book, Geoengineering: The Gamble, speculating that a small coalition of nations might kick-start a rapid research and deployment program as an emergency response to escalating humanitarian crises. In his version, the Philippines offers to serve as the launch site after a series of super-cyclones batter the island nation, forcing millions from their homes. 

It’s impossible to know today how the world will react if one nation or a few go it alone, or whether nations could come to agreement on where the global temperature should be set. 

But the lure of solar geoengineering could become increasingly enticing as more and more nations endure mass suffering, starvation, displacement, and death.

“We understand that probably it will not be perfect,” Yedvab says. “We understand all the obstacles, but there is this sentiment of hope, or cautious hope, that we have a way out of this dark corridor we are currently in.”

“I think that this sentiment of hope is something that gives us a lot of energy to move on forward,” he adds.

When Your Calendar Becomes the Compromise

6 November 2025 at 13:42

A new meeting on your calendar or a new attack vector?

It starts innocently enough. A new meeting appears in your Google calendar and the subject seems ordinary, perhaps even urgent: “Security Update Briefing,” “Your Account Verification Meeting,” or “Important Notice Regarding Benefits.” You assume you missed this invitation in your overloaded email inbox, and click “Yes” to accept.

Unfortunately, calendar invites have become an overlooked delivery mechanism for social engineering and phishing campaigns. Attackers are increasingly abusing the .ics file format, a universally trusted, text-based standard to embed malicious links, redirect victims to fake meeting pages, or seed events directly into users’ calendars without interaction. 

Because calendar files often bypass traditional email and attachment defenses, they offer a low-friction attack path into corporate environments. 

Defenders should treat .ics files as active content, tighten client defaults, and raise awareness that even legitimate-looking calendar invites can carry hidden risk.

The underestimated threat of .ics files

The iCalendar (.ics) format is one of those technologies we all rely on without thinking. It’s text-based, universally supported, and designed for interoperability between Outlook, Google Calendar, Apple, and countless other clients.

Each invite contains a structured list of fields like SUMMARY, LOCATION, DESCRIPTION, and ATTACH. Within these, attackers have found an opportunity: they can embed URLs, malicious redirects, or even base64-encoded content. The result is a file that appears completely legitimate to a calendar client, yet quietly delivers the attacker’s message, link, or payload.

Because calendar files are plain text, they easily slip through traditional security controls. Most email gateways and endpoint filters don’t treat .ics files with the same scrutiny as executables or macros. And since users expect to receive meeting invites, often from outside their organization, it’s an ideal format for social engineering.

How threat actors abuse the invite

Over the past year, researchers have observed a rise in campaigns abusing calendar invites to phish credentials, deliver malware, or trick users into joining fake meetings. These attacks often look mundane but rely on subtle manipulation:

  • The lure: A professional-looking meeting name and sender, sometimes spoofed from a legitimate organization.

  • The link: A URL hidden in the DESCRIPTION or LOCATION field, often pointing to a fake login page or document-sharing site.

  • The timing: Invites scheduled within minutes, creating urgency (“Your access expires in 15 minutes — join now”).

  • The automation: Calendar clients that automatically add external invites, ensuring the trap appears directly in the user’s daily schedule.

Cal1.png

Example of where some of the malicious components would reside in the .ics file

It’s clever, low-effort social engineering leveraging trust in a system built for collaboration.

The “invisible click” problem

The real danger of malicious calendar invites isn’t just the link inside,  it’s the automatic delivery mechanism. In certain configurations, Outlook and Google Calendar will automatically process .ics attachments and create tentative events, even if the user never opens or even receives the email. That means the malicious link is now part of the user’s trusted interface with their calendar.

This bypasses the usual cognitive warning signs. The email might look suspicious, but the event reminder popping up later? That feels like part of your day. It’s phishing that moves in quietly and waits.

Why traditional defenses miss it

Security tooling has historically focused on attachments that execute code or scripts. By contrast, .ics files are plain text and standards-based, so they don’t inherently appear dangerous. Many detection engines ignore or minimally parse them.

Attackers exploit that gap. They rely on the fact that few organizations monitor for BEGIN:VCALENDAR content or inspect calendar metadata for embedded URLs. Once delivered, the file can bypass filters, land in the user’s calendar, and lead to a high-confidence click.

What defenders can do now

Defending against calendar-based attacks begins with recognizing that these are not edge cases anymore. They’re a natural evolution of phishing  where user convenience becomes the delivery mechanism.

Here are a few pragmatic steps every organization should consider:

  1. Treat .ics files like active content. Configure email filters and attachment scanners to inspect calendar files for URLs, base64-encoded data, or ATTACH fields.

  2. Review calendar client defaults. Disable automatic addition of external events when possible, or flag external organizers with clear warnings.

  3. Sanitize incoming invites. Content disarm and reconstruction (CDR) tools can strip out or neutralize dangerous links embedded in calendar fields.

  4. Raise awareness among users. Train employees to verify unexpected invites — especially those urging immediate action or containing meeting links they didn’t anticipate. Employees can also follow the helpful advice in this Google Support article.

  5. Use strong identity protection. Multi-factor authentication and conditional access policies mitigate the impact if a phishing link successfully steals credentials.

These steps don’t eliminate the threat, but they significantly increase friction for attackers and their malware.

A quiet evolution in social engineering campaigns

Malicious calendar invites represent a subtle yet telling shift in attacker behavior: blending into legitimate business processes rather than breaking them. In the same way that invoice-themed phishing emails once exploited trust in accounting workflows, .ics abuse leverages the quiet reliability of collaboration tools.

As organizations continue to integrate calendars with chat, cloud storage, and video platforms, the attack surface will only expand. Links inside invites will lead to files in shared drives, authentication requests, and embedded meeting credentials. These are all opportunities for exploitation.

Rethinking trust in everyday workflows

Defenders often focus on the extraordinary like zero days, ransomware binaries, and new exploits. Yet the most effective attacks remain the simplest: exploiting human trust in ordinary digital habits. A calendar invite feels harmless and that’s exactly why it works.

The next time an unexpected meeting appears in your calendar, it might be more than just a double-booking. It could be a reminder that security isn’t only about blocking malware, but about questioning what we assume to be safe.

Account Takeover Scams Surge as FBI Reports Over $262 Million in Losses

26 November 2025 at 00:34

Account Takeover fraud

The Account Takeover fraud threat is accelerating across the United States, prompting the Federal Bureau of Investigation (FBI) to issue a new alert warning individuals, businesses, and organizations of all sizes to stay vigilant. According to the FBI Internet Crime Complaint Center (IC3), more than 5,100 complaints related to ATO fraud have been filed since January 2025, with reported losses exceeding $262 million. The bureau warns that cyber criminals are increasingly impersonating financial institutions to steal money or sensitive information. As the annual Black Friday sale draws millions of shoppers online, the FBI notes that the surge in digital purchases creates an ideal environment for Account Takeover fraud. With consumers frequently visiting unfamiliar retail websites and acting quickly to secure limited-time deals, cyber criminals deploy fake customer support calls, phishing pages, and fraudulent ads disguised as payment or discount portals. The increased online activity during Black Friday makes it easier for attackers to blend in and harder for victims to notice red flags, making the shopping season a lucrative window for ATO scams.

How Account Takeover Fraud Works

In an ATO scheme, cyber criminals gain unauthorized access to online financial, payroll, or health savings accounts. Their goal is simple: steal funds or gather personal data that can be reused for additional fraudulent activities. The FBI notes that these attacks often start with impersonation, either of a financial institution’s staff, customer support teams, or even the institution’s official website. To carry out their schemes, criminals rely heavily on social engineering and phishing websites designed to look identical to legitimate portals. These tactics create a false sense of trust, encouraging account owners to unknowingly hand over their login credentials.

Social Engineering Tactics Increase in Frequency

The FBI highlights that most ATO cases begin with social engineering, where cyber criminals manipulate victims into sharing sensitive information such as passwords, multi-factor authentication (MFA) codes, or one-time passcodes (OTP). Common techniques include:
  • Fraudulent text messages, emails, or calls claiming unusual activity or unauthorized charges. Victims are often directed to click on phishing links or speak to fake customer support representatives.
  • Attackers posing as bank employees or technical support agents who convince victims to share login details under the guise of preventing fraudulent transactions.
  • Scenarios where cyber criminals claim the victim’s identity was used to make unlawful purchases—sometimes involving firearms, and escalate the scam by introducing another impersonator posing as law enforcement.
Once armed with stolen credentials, criminals reset account passwords and gain full control, locking legitimate users out of their own accounts.

Phishing Websites and SEO Poisoning Drive More Losses

Another growing trend is the use of sophisticated phishing domains and websites that perfectly mimic authentic financial institution portals. Victims believe they are logging into their bank or payroll system, but instead, they are handing their details directly to attackers. The FBI also warns about SEO poisoning, a method in which cyber criminals purchase search engine ads or manipulate search rankings to make fraudulent sites appear legitimate. When victims search for their bank online, these deceptive ads redirect them to phishing sites that capture their login information. Once attackers secure access, they rapidly transfer funds to criminal-controlled accounts—many linked to cryptocurrency wallets—making transactions difficult to trace or recover.

How to Stay Protected Against ATO Fraud

The FBI urges customers and businesses to take proactive measures to defend against ATO fraud attempts:
  • Limit personal information shared publicly, especially on social media.
  • Monitor financial accounts regularly for missing deposits, unauthorized withdrawals, or suspicious wire transfers.
  • Use unique, complex passwords and enable MFA on all accounts.
  • Bookmark financial websites and avoid clicking on search engine ads or unsolicited links.
  • Treat unexpected calls, emails, or texts claiming to be from a bank with skepticism.

What To Do If You Experience an Account Takeover

Victims of ATO fraud are advised to act quickly:
  1. Contact your financial institution immediately to request recalls or reversals, and report the incident to IC3.gov.
  2. Reset all compromised credentials, including any accounts using the same passwords.
  3. File a detailed complaint at IC3.gov with all relevant information, such as impersonated institutions, phishing links, emails, or phone numbers used.
  4. Notify the impersonated company so it can warn others and request fraudulent sites be taken down.
  5. Stay informed through updated alerts and advisories published on IC3.gov.

Android malware steals your card details and PIN to make instant ATM withdrawals

6 November 2025 at 11:48

The Polish Computer Emergency Response Team (CERT Polska) analyzed a new Android-based malware that uses NFC technology to perform unauthorized ATM cash withdrawals and drain victims’ bank accounts.

Researchers found that the malware, called NGate, lets attackers withdraw cash from ATMs (Automated Teller Machines, or cash machines) using banking data exfiltrated from victims’ phones—without ever physically stealing the cards.

NFC is a wireless technology that allows devices such as smartphones, payment cards, and terminals to communicate when they’re very close together. So, instead of stealing your bank card, the attackers capture NFC (Near Field Communication) activity on a mobile phone infected with the NGate malware and forward that transaction data to devices at ATMs. In NGate’s case the stolen data is sent over the network to the attackers’ servers rather than being relayed purely by radio.

NFC comes in a few “flavors.” Some produce a static code—for example, the card that opens my apartment building door. That kind of signal can easily be copied to a device like my “Flipper Zero” so I can use that to open the door. But sophisticated contactless payment cards (like your Visa or Mastercard debit and credit cards) use dynamic codes. Each time you use the NFC, your card’s chip generates a unique, one-time code (often called a cryptogram or token) that cannot be reused and is different every time.

So, that’s what makes the NGate malware more sophisticated. It doesn’t simply grab a signal from your card. The phone must be infected, and the victim must be tricked into performing a tap-to-pay or card-verification action and entering their PIN. When that happens, the app captures all the necessary NFC transaction data exchanged — not just the card number, but the fresh one-time codes and other details generated in that moment.

The malware then instantly sends all that NFC data, including the PIN, to the attacker’s device. Because the codes are freshly generated and valid only for a short time, the attacker uses them immediately to imitate your card at an ATM; the accomplice at the ATM presents the captured data using a card-emulating device such as a phone, smartwatch, or custom hardware.

But, as you can imagine, being ready at an ATM when the data comes in takes planning—and social engineering.

First, attackers need to plant the malware on the victim’s device. Typically, they send phishing emails or SMS messages to potential victims. These often claim there is a security or technical issue with their bank account, trying to induce worry or urgency. Sometimes, they follow up with a phone call, pretending to be from the bank. These messages or calls direct victims to download a fake “banking” app from a non-official source, such as a direct link instead of Google Play.

Once installed, the app app asks for permissions and leads victims through fake “card verification” steps. The goal is to get victims to act quickly and trustingly—while an accomplice waits at an ATM to cash out.

How to stay safe

NGate only works if your phone is infected and you’re tricked into initiating a tap-to-pay action on the fake banking app and entering your PIN. So the best way to stay safe from this malware is keep your phone protected and stay vigilant to social engineering:

  • Stick to trusted sources. Download apps only from Google Play, Apple’s App Store, or the official provider. Your bank will never ask you to use another source.
  • Protect your devices. Use an up-to-date real-time anti-malware solution like Malwarebytes for Android, which already detects this malware.
  • Do not engage with unsolicited callers. If someone claims to be from your bank, tell them you’ll call them back at the number you have on file.
  • Ignore suspicious texts. Do not respond to or act upon unsolicited messages, no matter how harmless or urgent they seem.

Malwarebytes for Android detects these banking Trojans as Android/Trojan.Spy.NGate.C; Android/Trojan.Agent.SIB01022b454eH140; Android/Trojan.Agent.SIB01c84b1237H62; Android/Trojan.Spy.Generic.AUR9552b53bH2756 and Android/Trojan.Banker.AURf26adb59C19.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Cybercriminals Targeting Payroll Sites

4 November 2025 at 07:05

Microsoft is warning of a scam involving online payroll systems. Criminals use social engineering to steal people’s credentials, and then divert direct deposits into accounts that they control. Sometimes they do other things to make it harder for the victim to realize what is happening.

I feel like this kind of thing is happening everywhere, with everything. As we move more of our personal and professional lives online, we enable criminals to subvert the very systems we rely on.

Under the engineering hood: Why Malwarebytes chose WordPress as its CMS

17 October 2025 at 04:10

It might surprise some that a security company would choose WordPress as the backbone of its digital content operations. After all, WordPress is often associated with open-source plugins, community themes, and a wide range of deployment practices—some stronger than others. But that perception overlooks what modern WordPress can deliver when it’s architected, operated, and governed with discipline. In our Digital Experience Platform (DXP) at Malwarebytes, WordPress serves as the content layer—an editorial hub that feeds multiple customer experiences.

The reason is pragmatic and security-forward. WordPress offers transparency (open code and ecosystem), control (self-hosted in our environment, with strict governance), and maturity (a seasoned core with an established security model). Combined with a decoupled architecture, strong identity and access controls, rigorous supply chain management, and a hardened infrastructure, WordPress becomes an ideal content engine for an enterprise-grade, security-first DXP within an enterprise-grade MarTech stack.

DXP vision and the role of WordPress

When we say DXP, we mean the orchestration layer that brings together content, personalization, analytics, experimentation, commerce, support experiences, and more. It’s not a single product; it’s the way we coordinate systems to deliver cohesive customer journeys across web, mobile, and product surfaces.

In that model, WordPress is our content authoring hub. Editors draft, review, and publish content once; APIs then power multiple front-ends—websites built with Next.js/React, mobile applications, and support portals. This headless pattern decouples the authoring experience from delivery.

Why decouple?

By delivering both static and server-side rendered (SSR) pages directly from the edge, we meet aggressive latency goals and excel in Core Web Vitals scores on a global scale. This approach ensures content is as close as possible to end users, providing consistently fast load times regardless of location. Our architecture isolates site performance from backend processes, meaning bursts of traffic or complex deployments don’t degrade the visitor experience.

Security isolation is equally foundational to our platform design. The public-facing runtime never exposes the WordPress admin interface or control endpoints—instead, these administrative components reside securely behind private networking, protected by robust access controls and authentication. This segmentation shields both business-critical operations and sensitive data, lowering the attack surface and reducing risk without impeding editors or developers.

This architecture also boosts development velocity. Front-end engineers can iterate rapidly, independently releasing new features or improvements without being bottlenecked by backend deployments. At the same time, content editors retain full publishing agility via the headless CMS, able to launch and update site content at will. This parallel, decoupled workflow ensures that technical and editorial teams each operate at their highest efficiency, supporting an environment of continuous innovation and timely content delivery.

How speed helps security

Rapid and reliable deployments are a cornerstone of our security posture, empowering us to respond quickly to new threats and vulnerabilities. By streamlining and automating our release processes, we can efficiently ship patches and mitigations as soon as issues arise, minimizing the window of exposure. Equally important, our deployment pipelines are built to support safe rollbacks, allowing us to confidently revert any changes that introduce instability or unexpected behavior—maintaining operational continuity no matter how urgent the circumstances.

Shortening our development and deployment cycle is not just about speed—it’s one of the most effective security controls we employ. Frequent, predictable deploys mean our systems are always running the latest protections and bug fixes, dramatically reducing the risks associated with outdated code or configurations. This agility ensures we stay ahead of evolving threats, support innovation without sacrificing safety, and adapt to changing requirements with minimal disruption, making security a continuous, integrated aspect of our delivery workflow.

Why WordPress aligns with security-first

Open-source transparency matters. With WordPress, we can inspect every line of core and plugin code, run our own audits, and make informed decisions about the attack surface. The community’s response to security issues adds resilience through coordinated disclosures, rapid patches, and widely disseminated advisories.

The core platform is mature and stable. The WordPress security team has established processes for responsible disclosure and a consistent patch cadence. Operating close to core (and avoiding heavy core modifications) enables us to adopt updates quickly.

Finally, talent availability accelerates secure outcomes. A large pool of WordPress developers and security practitioners means faster remediation, effective code reviews, and a healthy ecosystem of best practices and tooling.

Architecture that reduces risks

Headless/decoupled architecture

Our public website leverages the powerful combination of a Content Delivery Network (CDN) and a Web Application Firewall (WAF) to deliver a seamless and secure user experience. By distributing static content across global edge locations, the CDN ensures lightning-fast load times while also enabling server-side rendering at the edge for dynamic content. This hybrid approach allows us to serve both static and server-rendered pages efficiently, providing relevant content with minimal latency. Positioned behind the CDN, the WAF offers an added layer of security by blocking malicious traffic and safeguarding our site from threats, ensuring that both performance and protection are at the forefront of our web infrastructure.

To further enhance security and streamline workflows, we utilize single sign-on (SSO) with multi-factor authentication (MFA) for accessing all administrative interfaces and developer endpoints. The WordPress admin area, GraphQL and REST APIs, as well as build hooks, are only accessible through this robust SSO with MFA, ensuring that only authorized team members can reach sensitive controls and data. Access is strictly segmented, treating the admin plane as an internal-only application and fully separating it from the public-facing site. This architecture minimizes risk, protects critical infrastructure, and supports efficient, secure collaboration among our administrative and development teams.

Network and edge security

Our Web Application Firewall (WAF) works in tandem with advanced bot management to protect our site from a wide range of online threats. The WAF actively filters malicious payloads and prevents exploitation attempts, while the bot management system blocks known bad actors and suspicious automated traffic. Together, they help enforce rate limits—ensuring fair usage and preventing abuse that could impact site performance or security. This layered approach allows us to maintain a reliable, secure environment for all our users while shielding our resources from sophisticated cyber threats.

To further secure our infrastructure, we have robust DDoS mitigation controls in place, designed to identify and absorb large-scale volumetric attacks before they reach our application. Coupled with customizable geo-blocking and ASN (Autonomous System Number) policies, we can restrict or filter access from high-risk regions and networks known for hostile activity. This proactive combination not only helps protect against both widespread and targeted attacks, but also ensures the continued availability and performance of our services for legitimate users around the globe.

We enforce modern transport security standards across our entire platform by mandating TLS 1.3 for all connections. This ensures data transmitted between users and our site is encrypted using the latest, most secure protocol available. In addition, HTTP Strict Transport Security (HSTS) is enabled, compelling browsers to interact with our site only via secure HTTPS connections. Together, TLS 1.3 and HSTS provide strong guarantees of data integrity, confidentiality, and protection against common interception or downgrade attacks, giving our users peace of mind with every interaction.

Service isolation and least privilege

Our security framework is built on the principle of least-privilege access, ensuring that databases, object storage, and service accounts are tightly controlled. Each system and user is granted only the permissions essential for their specific role—nothing more. This minimizes the potential impact of accidental or malicious activity, as access is segmented and strictly limited across all layers of our architecture. By aligning permissions closely with functional requirements, we significantly reduce the risk of data exposure or unauthorized operations, reinforcing the integrity and confidentiality of our platform.

Hardening at the application layer

Secure configuration

In our production WordPress environment, we implement a series of stringent measures to protect both the core application and user data. File editing through the wp-admin interface is completely disabled, eliminating a common attack vector and reducing the risk of unauthorized code changes. We enforce the use of strong, unique salts and keys, enhancing the integrity and security of authentication cookies and stored data. Additionally, the core filesystem is kept strictly read-only in production, preventing alterations to critical files and ensuring that even in the event of a compromise, attackers cannot modify system-level code or inject persistent threats.

To further reduce the platform’s attack surface, we restrict XML-RPC functionality—often abused for brute-force attacks—and limit exposed REST API endpoints strictly to those required by our headless WordPress clients. User enumeration patterns, which attackers may exploit to gather account names, are actively blocked, thereby safeguarding user identities. On the front end, we enforce robust security headers, including a finely scoped Content Security Policy (CSP) to mitigate XSS threats, strict X-Frame-Options and Frame-Ancestors to prevent clickjacking, X-Content-Type-Options to block MIME-type attacks, and a privacy-friendly Referrer-Policy to minimize information leakage. Together, these layered controls ensure our site remains resilient against a broad spectrum of web threats.

Auth and session security

We integrate Single Sign-On (SSO) through industry-standard protocols such as SAML and OIDC, streamlining secure access for our teams while reducing the risks associated with password proliferation. Automated user provisioning and deprovisioning are managed via SCIM, ensuring that access is immediately granted to new team members and promptly revoked when it’s no longer needed. MFA is mandatory for all privileged users, significantly strengthening the security of critical accounts and administrative functions, and defending against credential-based attacks.

Access within our environment is granted based on granular, role- and capability-based policies. Custom roles are carefully tailored so that editors, contributors, and admins receive only the permissions essential to their responsibilities, minimizing exposure and preventing privilege creep. We further secure administrative access by enforcing short-lived sessions, reducing the window of opportunity for session hijacking or misuse. This approach ensures that even if an administrative session is compromised, the potential for abuse is tightly constrained, keeping our site and its data safe.

Data handling

Security is at the forefront of our development practices, with a strong emphasis on protecting both our site and its users from application-level threats. We enforce the use of prepared statements for all database queries to defend against SQL injection, mandate thorough output escaping to prevent cross-site scripting (XSS), and ensure rigorous input sanitization in every layer of custom code and approved plugins. For protection against cross-site request forgery (CSRF), we implement nonces, providing an additional safeguard to validate user actions and prevent unauthorized commands. This multifaceted approach applies to every custom solution and trusted extension, reinforcing the reliability and trustworthiness of our platform.

Data privacy and compliance round out our security strategy. We are committed to minimizing the storage of personally identifiable information (PII), classifying data sensitivity, and applying data retention policies that align with both regulatory requirements and customer expectations. Consent management is thoughtfully integrated into both our publishing workflow and the front-end user experience, so we can uphold privacy standards without sacrificing usability. This ensures users remain informed and in control of their data—supporting compliance with privacy laws and building trust through transparency and respect for user choices.

Plugin and supply chain governance

Controlled ecosystem

Our approach to plugin management is deliberately conservative, maintaining a strict allowlist to ensure only vetted and essential plugins are present within our environment. We prioritize the use of “must-use” (mu-) plugins for enforcing global policies and delivering critical functionality, as these plugins are always active and centrally managed. This strategy prevents unauthorized or unnecessary code from entering our system, supports consistency across environments, and enables us to embed security controls directly into our platform’s foundational layers.

Before any plugin or theme is deployed to production, it undergoes a comprehensive code review process to assess security, performance, and compatibility. We are proactive in curbing plugin sprawl, regularly auditing our stack and removing redundant or unsupported components to minimize complexity and reduce our attack surface. By keeping our codebase lean and disciplined, we not only defend against potential vulnerabilities found in third-party additions but also streamline maintenance and updates, ensuring the long-term stability and security of our production environment.

Dependency management

We take a comprehensive approach to dependency management and software supply chain integrity by generating Software Bill of Materials (SBOMs) for both PHP and JavaScript codebases. SBOMs allow us to track all direct and transitive dependencies, as well as their associated licenses, ensuring greater visibility and control over the components that make up our application. Dependencies are always pinned and locked to specific, approved versions, reducing the risk of introducing vulnerabilities through unintentional upgrades or changes. Automated tools like Dependabot continuously monitor for updates and propose them, but nothing reaches production unless it successfully passes through our continuous integration (CI) security gates.

Our CI/CD pipeline is fortified with robust security controls at every stage. Every update, whether a dependency or code change, triggers automated Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify potential vulnerabilities both before and during runtime. We employ secret scanning to prevent accidental exposure of credentials and keys, and every build is evaluated for license compliance and regulatory conformance. This layered approach ensures that our development processes are secure by default, continually verifying software quality, integrity, and compliance before anything is deployed to production.

Vulnerability intelligence and patching

We actively monitor CVE feeds and WordPress-focused security advisories, such as WPScan, to stay ahead of emerging vulnerabilities and threats. By keeping a close eye on both general and platform-specific intelligence sources, we’re able to rapidly identify potential risks relevant to our infrastructure. Upon detection, vulnerabilities are triaged and addressed according to well-defined Service Level Agreements (SLAs) based on severity—ensuring that critical issues receive immediate attention and routine patches are managed efficiently. This structured, proactive posture helps us mitigate risk and maintain the ongoing security and stability of our environment.

In the rare event that a critical vulnerability threatens operational security or integrity, we are prepared with fast rollback plans that allow us to swiftly revert to a secure state. These procedures are designed to be executed with minimal disruption, ensuring urgent patches can be applied without causing extended downtime for users or administrators. By integrating rapid response capabilities into our workflows, we’re able to act decisively and minimize exposure, all while maintaining service availability and reliability at the highest standard.

Infrastructure security operations

Secrets and data

We enforce strict secret management practices by using a centralized vault or cloud-native secret store to handle all sensitive credentials, API keys, and configuration secrets. No secrets are ever embedded in source code or stored within deployment images, reducing the risk of accidental exposure. Secret rotation is scheduled regularly as part of our operational cadence, ensuring that credentials remain fresh and limiting the window of opportunity for misuse even if a secret were somehow compromised.

All data is secured with encryption both at rest and in transit, leveraging strong cryptographic controls across storage and networking layers. Where supported, our databases rely on IAM-based authentication instead of static credentials, further minimizing the risk associated with traditional username-password pairs. This approach not only enhances security but also streamlines access control and auditability, underpinning our commitment to robust, modern data protection practices throughout the stack.

Backups and disaster recovery

Our disaster recovery strategy rests on maintaining versioned, immutable backups that cannot be altered or deleted, providing a reliable safeguard against data loss, corruption, or ransomware attacks. These backups are created on a regular schedule and include not only application data, but also content, media assets, and configuration files. We conduct periodic restore drills to validate that our backups are effective and to ensure our team is prepared to execute recovery procedures smoothly. Explicit Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) are defined, routinely tested, and adjusted as needed to meet the demands of our operations and regulatory obligations.

Data recovery playbooks are meticulously maintained and encompass every critical aspect of our environment, from core content and media to infrastructure-as-code templates that can quickly and predictably rebuild our systems. These playbooks provide step-by-step guidance for recovering data and restoring services, whether in response to accidental deletion, hardware failure, or a targeted attack. By rigorously documenting and testing these processes, we ensure a high degree of resilience and confidence in our ability to restore normal operations with minimal disruption, safeguarding both our assets and the experience of our users.

Observability and response

We maintain a comprehensive observability stack with centralized, structured logging that aggregates data from all key layers—Nginx, PHP-FPM, WordPress, and supporting services. This logging is enriched with real-time metrics and distributed traces, giving us end-to-end visibility into application performance and user activity across our digital experience platform (DXP). All logs are funneled into a Security Information and Event Management (SIEM) system, which acts as the nerve center for detecting and investigating potential threats. Hosts and containers are further protected by Endpoint Detection and Response (EDR) solutions, providing continuous monitoring and the ability to quickly isolate and remediate suspicious behavior.

To enhance detection and incident response, we employ automated anomaly detection and maintain detailed runbooks, dramatically reducing our mean time to detect (MTTD) and mean time to respond (MTTR) to issues. Our security posture is continually tested and validated through regular penetration tests and an active bug bounty program that focus on the entire surface of our DXP, not just on isolated components. This holistic approach ensures we proactively identify vulnerabilities, address weaknesses before they can be exploited, and ultimately maintain a resilient, trustworthy platform for our users and customers.

Certifications Obtained

When it comes to building or selecting hosting for your organization’s sensitive data and mission-critical applications, certifications matter—a lot. Obtaining FedRAMP Moderate certified ensures compliance with rigorous federal security standards, making it a necessity for government-related workloads and a great standard for any organization to abide. Similarly, a SOC 2 Type 1 certification demonstrates that a hosting provider has established robust systems and controls to protect data and ensure privacy, fostering client trust and accountability.

GovRAMP Moderate is critical for U.S. government contractors working with state and local government workloads, ensuring additional layers of compliance and security. If your data processing touches on European clients or users, GDPR and the Data Privacy Framework offer reassurance that personal data is handled and processed lawfully, transparently, and securely. Equally important is the Microsoft SSPA, a must-have for vendors providing services to Microsoft or handling its data. Lastly, WCAG 2.0 AA compliance ensures that your hosted applications and websites are accessible to users and employees with disabilities, strengthening your commitment to inclusivity and expanding your reach. By prioritizing these certifications, organizations not only safeguard compliance and security, but also demonstrate a dedication to transparency, privacy, and accessibility in today’s digital landscape.

Editorial workflow governance

Workflow controls

Every administrative and content-related event is thoroughly audit-logged, capturing a detailed trail of actions for review and oversight. These logs are fully exportable, supporting compliance with regulatory requirements and internal governance policies. By maintaining comprehensive and accessible audit records, we provide the transparency necessary to facilitate investigations, enforce accountability, and demonstrate adherence to best practices and legal obligations—ensuring peace of mind for our organization and stakeholders alike.

Secure content operations

We prioritize security awareness by providing editors with ongoing training on critical topics, such as phishing recognition, safe link practices, and our governance policies for embedded scripts and third-party widgets. This continual education helps staff identify and avoid social engineering attacks, understand the risks associated with external content, and adhere to protocols that maintain the integrity and security of our web platform. By empowering editors with the knowledge to make secure decisions, we reduce the likelihood of errors that could compromise the site or expose sensitive information.

To further protect user interactions, especially on forms, we deploy layered anti-spam defenses, implement bot challenges like CAPTCHAs, and set server-side rate limits to prevent abuse. All form inputs are validated on the server, ensuring robust protection even if client-side checks are bypassed or disabled. This disciplined approach to input handling and abuse prevention ensures our forms remain a secure channel for legitimate user engagement while blocking malicious actors and automated attacks.

Reliable and secure performance

Caching strategy

Our performance strategy centers on comprehensive caching and efficient data handling to deliver a fast, reliable experience for both users and administrators. Edge and page-level caching shield our origin servers by intercepting and serving frequent requests directly at the edge, dramatically reducing the number of dynamic requests that reach the core infrastructure. Object caching solutions like Redis, coupled with thoughtfully optimized queries, keep the admin interface responsive and ensure APIs remain quick even under load. We routinely profile database queries and set strict performance budgets for the slowest paths, preventing regressions that could degrade performance or escalate into broader availability issues. This layered approach ensures our platform stays speedy, stable, and scalable as demands grow.

Build pipeline

Every code change in our workflow is subjected to automated testing, with comprehensive suites that verify functionality, performance, and security. Security gates are tightly integrated into the CI/CD pipeline, ensuring that no changes are merged if any issues or vulnerabilities are detected. Our deployment processes are fully automated and repeatable, significantly reducing the potential for human error and guaranteeing that releases are consistent, predictable, and recoverable.

By managing our infrastructure as code, we further ensure that all environments—from development to production—are consistent, auditable, and easily reproducible. This approach not only accelerates the provisioning of resources and the rollout of updates, but also strengthens compliance and traceability, providing a solid foundation for scalability, reliability, and continuous improvement.

UX and SEO

We finely tune our security headers and Content Security Policies (CSPs) to deliver robust protection without disrupting the user experience, ensuring that all site functionality remains seamless and accessible. Our commitment to performance extends to advanced image optimization, responsive asset delivery, and strict adherence to accessibility standards, enabling our content to load quickly and be usable by everyone. By consistently delivering fast, accessible pages, we not only enhance user engagement but also enable rapid, safe deployment cycles—minimizing potential attack windows through swift rollouts and efficient rollbacks, and maintaining both security and usability at the core of our platform.

Alternatives considered

Proprietary Digital Experience Platforms (DXPs) present a compelling all-in-one suite of features that can streamline operations for many organizations. However, their advantages often come with trade-offs: these platforms tend to be resource intensive, both in terms of infrastructure and licensing fees, and may lack the granular transparency required for deep security audits or targeted customizations. The inherent complexity and tightly-coupled nature of these solutions can slow the pace of change—making it challenging to adapt or patch emergent threats rapidly, which is itself a significant security and business risk in dynamic environments.

Headless-only SaaS CMSes, on the other hand, are designed for flexibility and API excellence, offering developers modern tooling and a frictionless integration experience. Despite these strengths, organizations may encounter challenges such as vendor lock-in, which can limit strategic choices and agility over time. Control over patching and updates is usually in the hands of the SaaS provider, potentially creating gaps between issue discovery and remediation. Further, these platforms may present hurdles in regions with strict data residency or compliance requirements, making them less suitable for regulated industries or global enterprises with nuanced jurisdictional needs.

Systems like Drupal or fully-custom CMS architectures can undoubtably satisfy enterprise requirements for scale, extensibility, and security. However, in our evaluation, team expertise, the maturity and momentum of the adjacent tooling ecosystem, and a clear view of total cost of ownership all ultimately favored the adoption of WordPress. WordPress’s balance of flexibility, a wealth of existing integrations, well-understood operational paradigms, and strong community support enables us to deliver on our goals efficiently while ensuring we maintain the adaptability, security, and cost-effectiveness our organization requires.

WordPress provides the best mix of transparency, control, ecosystem breadth, and speed—when paired with our security architecture and operating model.

Lessons learned and best practices

  • Start headless and isolate the admin plane from day one.
  • Enforce SSO and MFA, least privilege roles, and formal change approval.
  • Treat plugins as third-party code: audit, monitor, and patch under SLAs.
  • Invest in observability and rehearse incident response regularly.
  • Keep WordPress core close to vanilla; extend through vetted plugins and mu-plugins, not core forks.

Security is not a property of a tool; it’s the outcome of architecture, governance, and culture. With a decoupled design, rigorous controls, and a disciplined operational posture, WordPress is a strong foundation for the content layer of an enterprise DXP—combining the openness and speed teams want with the security and control the business requires of its MarTech stack.

❌