Normal view

Received before yesterday

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

8 December 2025 at 16:26

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

AI browsers may be innovative, but they’re “too risky for general adoption by most organizations,” Gartner warned in a recent advisory to clients. The 13-page document, by Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts, cautions that AI browsers’ ability to autonomously navigate the web and conduct transactions “can bypass traditional controls and create new risks like sensitive data leakage, erroneous agentic transactions, and abuse of credentials.” Default AI browser settings that prioritize user experience could also jeopardize security, they said. “Sensitive user data — such as active web content, browsing history, and open tabs — is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed,” the analysts said. “Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks identified in this research, and other potential risks that are yet to be discovered, given this is a very nascent technology,” they cautioned.

AI Browsers’ Agentic Capabilities Could Introduce Security Risks: Analysts

The researchers largely ignored risks posed by AI browsers’ built-in AI sidebars, noting that LLM-powered search and summarization functions “will always be susceptible to indirect prompt injection attacks, given that current LLMs are inherently vulnerable to such attacks. Therefore, the cybersecurity risks associated with an AI browser’s built-in AI sidebar are not the primary focus of this research.” Still, they noted that use of AI sidebars could result in sensitive data leakage. Their focus was more on the risks posed by AI browsers’ agentic and autonomous transaction capabilities, which could introduce new security risks, such as “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” AI browsers could also leak sensitive data that users are currently viewing to their cloud-based service back end, they noted.

Analysts Focus on Perplexity Comet

An AI browser’s agentic transaction capability “is a new capability that differentiates AI browsers from third-party conversational AI sidebars and basic script-based browser automation,” the analysts said. Not all AI browsers support agentic transactions, they said, but two prominent ones that do are Perplexity Comet and OpenAI’s ChatGPT Atlas. The analysts said they’ve performed “a limited number of tests using Perplexity Comet,” so that AI browser was their primary focus, but they noted that “ChatGPT Atlas and other AI browsers work in a similar fashion, and the cybersecurity considerations are also similar.” Comet’s documentation states that the browser “may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” “This means sensitive data the user is viewing on Comet might be sent to Perplexity’s cloud-based AI service, creating a sensitive data leakage risk,” the analysts said. Users likely would view more sensitive data in a browser than they would typically enter in a GenAI prompt, they said. Even if an AI browser is approved, users must be educated that “anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions,” the Gartner analysts said. Employees might also be tempted to use AI browsers to automate tasks, which could result in “erroneous agentic transactions against internal resources as a result of the LLM’s inaccurate reasoning or output content.”

AI Browser Recommendations

Gartner said employees should be blocked from accessing, downloading and installing AI browsers through network and endpoint security controls. “Organizations with low risk tolerance must block AI browser installations, while those with higher-risk tolerance can experiment with tightly controlled, low-risk automation use cases, ensuring robust guardrails and minimal sensitive data exposure,” they said. For pilot use cases, they recommended disabling Comet’s “AI data retention” setting so that Perplexity can’t use employee searches to improve their AI models. Users should also be instructed to periodically perform the “delete all memories” function in Comet to minimize the risk of sensitive data leakage.  

Code Formatting Tools Share Secrets by the Thousands: Researchers

25 November 2025 at 14:36

Code Formatting Tools Share Secrets by the Thousands: Researchers

Platforms that developers use to format their input unintentionally share “thousands” of secrets, according to new research. Researchers from watchTowr captured a dataset of more than 80,000 saved pieces of JSON from code formatting tools JSONFormatter and CodeBeautify and parsed the dataset to discover “thousands of secrets” such as Active Directory and AWS credentials, authentication and API keys, and more. In typical watchTowr snark, the researchers noted, “it went exactly as badly as you might expect.”

Code Formatting Tools Create Shareable Links

In a post titled, “Stop Putting Your Passwords Into Random Websites,” the researchers noted that users of the code formatting tools can create “a semi-permanent, shareable link to whatever you just formatted.” “[I]t is fairly apparent that the word ‘SAVE’ and being given shareable link was not enough to help most users understand that, indeed yes, the content is saved and the URL is shareable - enabling anyone to recover your data when armed with the URL,” the researchers wrote. Those links follow common, intuitive formats, they said, and JSONformatter and CodeBeautify also have “Recent Links” pages that allow a random user to browse all saved content and associated links, along with the titles, descriptions, and dates. “This makes extraction trivial - because we can behave like a real user using legitimate functionality,” the researchers said. “For every provided link on a Recent Links page, we extracted the id value, and requested the contents from the /service/getDataFromID endpoint to transform it into the raw content we’re really after.”

Data Shared by Code Formatting Tools

Among the sensitive data found by the researchers were credentials for Docker Hub, JFrog, Grafana and Amazon RDS for a “Data-lake-as-a-service” provider. A cybersecurity company “had actually pasted a bunch of encrypted credentials for a very sensitive configuration file ... to this random website on the Internet.” A financial services company had uploaded sensitive “know your customer” (KYC) data. A consultancy leaked GitHub tokens, hardcoded credentials, and URLs pointed at delivery-related files on GitHub. In the process of uploading an entire configuration file for a tool, “a GitHub token was disclosed that, based on the configuration file, we infer (guess) had permissions to read/write to files and folders on the main consultancy organization’s account.” An MSSP employee uploaded an onboarding email “complete with Active Directory credentials ... they also included a second set: credentials for the MSSP’s largest, most heavily advertised client - a U.S. bank.” A ”major financial exchange” leaked production AWS credentials “directly associated with Splunk SOAR automation at a major international stock exchange.” “[W]e realised we’d found a Splunk SOAR playbook export,” the researchers said. “Embedded in that export were credentials to an S3 bucket containing detection logic and automation logs - essentially the brain powering parts of an incident-response pipeline. “This was not your average organization, but a truly tier-0 target in-scope of the most motivated and determined threat actors, who would absolutely capitalize on being able to leverage any ability to blind or damage security automation. We promptly disclosed them to the affected stock exchange for remediation.”

Researchers Set Up Test Credentials

To make sure that they weren’t the only ones accessing the data, watchTowr set up test credentials with a 24-hour expiry. “[I]f the credentials were used after the 24-hour expiry, it would indicate that someone had stored the upload from the ‘Recent Links’ page before expiry and used it after it had technically expired,” they said. Sure enough, someone started poking around the test datasets a day after the link had expired and the “saved” content was removed. watchTowr told The Cyber Express that if a user chooses to “save” their content, it remains accessible for the duration they configured. "And because most users never set a short — or any — expiry period, that data often sat exposed far longer than they realized," watchTowr said. "Once the configured window passed, the links did technically expire and should no longer have been reachable. But the core issue is that the vast majority of users left content saved indefinitely, creating long-tail exposure that attackers could easily abuse." The researchers concluded: “We’re not alone - someone else is already scraping these sources for credentials, and actively testing them.”

Former Security Company Official Pleads Guilty to Stealing Trade Secrets to Sell to Russian Buyer

29 October 2025 at 15:48

Former Security Company Official Pleads Guilty to Stealing Trade Secrets to Sell to Russian Buyer

A former cybersecurity company official charged with stealing trade secrets to sell them to a Russian buyer pleaded guilty to two counts of theft of trade secrets in U.S. District Court today, the U.S. Department of Justice announced. Peter Williams, 39, an Australian national, pleaded guilty to the charges “in connection with selling his employer’s trade secrets to a Russian cyber-tools broker,” the Justice Department said in a press release. The Justice Department said Williams stole “national-security focused software that included at least eight sensitive and protected cyber-exploit components” over a three-year period from the U.S. defense contractor where he worked. The Justice Department didn’t name the company where Williams worked, but reports have said Williams is a former director and general manager at L3Harris Trenchant, which does vulnerability and security work for government clients. “Those components were meant to be sold exclusively to the U.S. government and select allies,” the Justice Department said. “Williams sold the trade secrets to a Russian cyber-tools broker that publicly advertises itself as a reseller of cyber exploits to various customers, including the Russian government.” Each of the charges carries a statutory maximum of 10 years in prison and a fine of up to $250,000, the Justice Department says, and Williams also must pay $1.3 million in restitution.

U.S. Places Value of Stolen Trade Secrets at $35 Million

The U.S. places the value of the stolen trade secrets at $35 million, according to statements from officials. “Williams placed greed over freedom and democracy by stealing and reselling $35 million of cyber trade secrets from a U.S. cleared defense contractor to a Russian Government supplier,” Assistant Director Roman Rozhavsky of the FBI’s Counterintelligence Division said in a statement. “By doing so, he gave Russian cyber actors an advantage in their massive campaign to victimize U.S. citizens and businesses. This plea sends a clear message that the FBI and our partners will defend the homeland and bring to justice anyone who helps our adversaries jeopardize U.S. national security. According to the facts admitted in connection with the guilty plea, the Justice Department said that from approximately 2022 through 2025, “Williams improperly used his access to the defense contractor’s secure network to steal the cyber exploit components that constituted the trade secrets.” The government says he resold those components “in exchange for the promise of millions of dollars in cryptocurrency. To effectuate these sales, Williams entered into multiple written contracts with the Russian broker, which involved payment for the initial sale of the components, and additional periodic payments for follow-on support. Williams transferred the eight components and trade secrets to the Russian broker through encrypted means.” Williams reportedly worked for the Australian Signals Directorate before L3Harris Trenchant.

Trenchant’s Secretive Security Business

Trenchant was created following the acquisitions of Azimuth Security and Linchpin Labs by defense contractor L3Harris Technologies. According to a company web page, Trenchant’s solutions include vulnerability and exploit research, APIs for intelligence operations, “device and access capabilities,” and computer network operations (CNO) products. TechCrunch put that in plainer terms, saying Trenchant “develops spyware, exploits, and zero-days — security vulnerabilities in software that are unknown to its maker. Trenchant sells its surveillance tech to government customers in Australia, Canada, New Zealand, the United States, and the United Kingdom, the so-called Five Eyes intelligence alliance.”

False Reports of Gmail Data Breach Alarm Internet

29 October 2025 at 13:36

False Reports of Gmail Data Breach Alarm Internet

Breathless news stories about a Gmail data breach began to appear online after media outlets misinterpreted a report about Gmail passwords stolen by infostealers. Urgent headlines like “Urgent alert issued to anyone who uses Gmail after 183 million passwords leaked” created some panic among Google account holders, necessitating a response from Google and a security researcher who had posted the infostealer logs that started the panic. “Reports of a “Gmail security breach impacting millions of users” are false,” Google said in a post on X. “Gmail’s defenses are strong, and users remain protected. “The inaccurate reports are stemming from a misunderstanding of infostealer databases, which routinely compile various credential theft activity occurring across the web," Google added. "It’s not reflective of a new attack aimed at any one person, tool, or platform.” The researcher, Troy Hunt of HaveIBeenPwned, said in his own X post that “This story has suddenly gained *way* more traction in recent hours, and something I thought was obvious needs clarifying: this *is not* a Gmail leak, it simply has the credentials of victims infected with malware, and Gmail is the dominant email provider.”

Gmail Data Breach Stories Appeared After Infostealer Data Published

The news stories began to appear after HaveIBeenPwned published an infostealer data set containing 183 million unique email addresses, the websites they were entered into, and the passwords used. Hunt wrote about the data set in a separate blog post, and stories misunderstanding the nature of infostealer malware took over from there. Gmail may have been the most common email address type in the data set, but hardly the only one, as Hunt noted: “There is every imaginable type of email address in this corpus: Outlook, Yahoo, corporate, government, military and yes, Gmail. This is typical of a corpus of data like this and there is nothing Google specific about it.” Leaks of all manner of account credentials appear in infostealer databases, and Gmail’s wide usage simply makes it one of the more common email credentials stolen by the malware. Credentials involving Gmail addresses appear in Cyble’s “Leaked Credentials” threat intelligence database more than 6 billion times, but many may be duplicates because stolen credentials frequently appear on more than one dark web marketplace or forum.

Protecting Your Gmail Account

Google said that Gmail users “can protect themselves from credential theft by turning on 2-step verification and adopting passkeys as a stronger and safer alternative to passwords, and resetting passwords when they are found in large batches like this. “Gmail takes action when we spot large batches of open credentials, helping users reset passwords and resecure accounts,” the company added. Using complex, unique passwords and resetting them often is another email security step to take. As Hunt noted, “The primary risk is for people who continue to use those credentials on *any* websites, and the mitigation is a password manager and 2FA.”

When Security Is a Matter of Life and Death: The UK Afghan Data Leak

28 October 2025 at 15:15

UK Afghan Data Leak Linked to 49 Deaths

A new study that looked at 231 people exposed by a 2022 UK data leak of Afghans seeking resettlement after the Taliban takeover found that 49 had friends or colleagues killed in Afghanistan. The UK Afghan data leak report, by the charity Refugee Legal Support in consultation with two academics, looked at the damage done by the Ministry of Defence (MoD) data leak of 18,000 people who had applied for asylum. The report was submitted to a House of Commons Defence Committee inquiry into the data breach.

UK Afghan Data Leak Exposed 87% to Risk and Threats

The survey focused on 231 respondents who said they had been told directly by the Ministry of Defence that their data had been exposed in the leak, which was the result of an inadvertent emailing of a spreadsheet by a soldier. Of the 231 affected Afghans, 200, or 87%, “reported personal risks and/or threats to family members,” the report said, and 207 (89%) “reported impacts on their own physical and/or mental health and the same number (207) reported negative impacts on their family’s physical and/or mental health.” Some of the responses detailed in the report are harrowing. One respondent said, “My father was brutally beaten to the point that his toenails were forcibly removed, and my parents remain under constant and serious threat. My family and I continue to face intimidation, repeated house searches, and ongoing danger to our safety.” “I live under constant fear for my life and the safety of my family due to repeated raids, threats from the Taliban and local intelligence groups, and the risk of forced marriage for my daughter,” said another respondent. “The ongoing stress, anxiety, and fear for my family’s well-being have severely impacted my emotional and physical well-being.” One respondent who had relocated to the UK said fears from the breach remain a constant torment for family members who remain in Afghanistan. “Whether it's legal advice, mental health resources, or help accelerating family reunification, anything that can ease this burden would mean the world to me,” the person said.

UK Advice Deemed Inadequate

The report also found that the advice given to the affected Afghans in the wake of the breach was largely inadequate. The report described “a profound mismatch between the MoD’s security advice” – which focused on things like restricting use of social media accounts and advising the use of VPNS – “and the severity of reported risks and threats, which included direct threats, violence, and displacement.” One respondent said, “The security advice provided by the Ministry of Defence was very general and limited. They only advised me not to answer calls from unknown numbers and to secure my emails. These instructions were insufficient given the serious threats and risks I faced, including my house being searched, my brothers being summoned by intelligence services, and direct threats to our lives. Such general advice did not provide any practical help to protect my situation.” The report also found “no evidence that the Ministry of Defence offered local risk management or follow-up with individuals outside of the UK” who were affected by the data breach and were not offered resettlement. The report called for expedited review of remaining resettlement cases, including affected family members. “As both the quantitative and qualitative data from our survey shows, the data breach has had devastating consequences for many individuals and families,” the Refugee Legal Support report said. “The UK Government must act decisively to protect those affected, restore trust, and ensure that such a failure never happens again; or that if it does, those placed at risk will not also be left alone in the dark.”

Lumma Stealer Slowed by Doxxing Campaign

21 October 2025 at 13:33

Lumma Stealer slowed by doxxing campaign

The prolific threat actors behind the Lumma Stealer malware have been slowed by an underground doxxing campaign in recent months. Coordinated law enforcement action earlier this year didn’t do much to slow down the infostealer’s spread, but a recent doxxing campaign appears to have had an impact, according to researchers at Trend Micro. “In September 2025, we noted a striking decline in new command and control infrastructure activity associated with Lummastealer ... as well as a significant reduction in the number of endpoints targeted by this notorious malware,” threat analyst Junestherry Dela Cruz wrote in a recent post. Fueling the drop has been an underground exposure campaign targeting a key administrator, developer and other members of the group, which Trend tracks as “Water Kurita.”

Lumma Stealer Doxxing Campaign Began in August

The Lumma Stealer doxxing campaign began in late August and continued into October, and on September 17, Lumma Stealer’s Telegram accounts were also compromised. “Allegedly driven by competitors, this campaign has unveiled personal and operational details of several supposed core members, leading to significant changes in Lummastealer’s infrastructure and communications,” Dela Cruz wrote. “This development is pivotal, marking a substantial shake-up in one of the most prominent information stealer malware operations of the year. ... The exposure of operator identities and infrastructure details, regardless of their accuracy, could have lasting repercussions on Lummastealer’s viability, customer trust, and the broader underground ecosystem.” The disclosures included highly sensitive details of five alleged Lumma Stealer operators, such as passport numbers, bank account information, email addresses, and links to online and social media profiles, and were leaked on a website called "Lumma Rats." While the campaign may have come from a rival, Dela Cruz said “the campaign’s consistency and depth suggest insider knowledge or access to compromised accounts and databases.” “The exposure campaign was accompanied by threats, accusations of betrayal within the cybercriminal community, and claims that the Lumma Stealer team had prioritized profit over the operational security of their clients,” Dela Cruz wrote. While the researcher noted that the accuracy of the doxed information hasn’t been verified, the accompanying decline in Lumma Stealer activity suggests that the group “has been severely affected—whether through loss of key personnel, erosion of trust, or fear of further exposure.”

Vidar, StealC Gain from Lumma Stealer’s Decline

Lumma Stealer’s decline has been a boon for rival infostealers like Vidar and StealC, Dela Cruz noted, “with many users reporting migrations to these platforms due to Lumma Stealer’s instability and loss of support.” Lumma’s decline has also hit pay-per-install (PPI) services like Amadey that are widely used to deliver infostealer payloads, and rival malware developers have stepped up their marketing efforts, “fueling rapid innovation and intensifying competition among MaaS [Malware as a Service] providers, raising the likelihood of new, stealthier infostealer variants entering the market,” Dela Cruz said. According to Cyble dark web data, Vidar and Redline are the infostealers most rivaling Lumma in volume on dark web marketplaces selling stolen credentials, with StealC, Acreed, Risepro, Rhadamanthys and Metastealer among other stealer logs commonly seen on the dark web. As for Lumma Stealer, Dela Cruz noted that being a top cybercrime group isn’t exactly a secure - pardon the pun - position to be in, as RansomHub found out earlier this year. “[B]eing number one means facing scrutiny and attacks from both defenders and competitors alike,” the researcher noted.
❌