High-severity vulnerabilities affect a wide range of Asus router models

Hardware manufacturer Asus has released updates patching multiple critical vulnerabilities that allow hackers to remotely take control of a range of router models with no authentication or interaction required of end users.

The most critical vulnerability, tracked as CVE-2024-3080, is an authentication bypass flaw that allows remote attackers to log in to a device without authentication. The vulnerability, according to the Taiwan Computer Emergency Response Team / Coordination Center (TWCERT/CC), carries a severity rating of 9.8 out of 10. Asus said the vulnerability affects the following routers:

Model name and support page:

XT8 and XT8_V2: https://www.asus.com/uk/supportonly/asus%20zenwifi%20ax%20(xt8)/helpdesk_bios/
RT-AX88U: https://www.asus.com/supportonly/RT-AX88U/helpdesk_bios/
RT-AX58U: https://www.asus.com/supportonly/RT-AX58U/helpdesk_bios/
RT-AX57: https://www.asus.com/networking-iot-servers/wifi-routers/asus-wifi-routers/rt-ax57/helpdesk_bios
RT-AC86U: https://www.asus.com/supportonly/RT-AC86U/helpdesk_bios/
RT-AC68U: https://www.asus.com/supportonly/RT-AC68U/helpdesk_bios/

A favorite haven for hackers

A second vulnerability tracked as CVE-2024-3079 affects the same router models. It stems from a buffer overflow flaw and allows remote hackers who have already obtained administrative access to an affected router to execute commands.

Proton is taking its privacy-first apps to a nonprofit foundation model

Proton, the security-minded email and productivity suite, is moving to a nonprofit foundation model, but it doesn't want you to think about it the way you think about other notable privacy and web foundations.

"We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”)," Proton CEO Andy Yen wrote in a blog post announcing the transition. "Instead, Proton must have a profitable and healthy business at its core."

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will "make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits." Among other members of the Foundation's board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Ransomware attackers quickly weaponize PHP vulnerability with 9.8 severity rating

Ransomware criminals have quickly weaponized an easy-to-exploit vulnerability in the PHP programming language that executes malicious code on web servers, security researchers said.

As of Thursday, Internet scans performed by security firm Censys had detected 1,000 servers infected by a ransomware strain known as TellYouThePass, down from 1,800 detected on Monday. The servers, primarily located in China, no longer display their usual content; instead, many list the site’s file directory, which shows all files have been given a .locked extension, indicating they have been encrypted. An accompanying ransom note demands roughly $6,500 in exchange for the decryption key.

When opportunity knocks

The vulnerability, tracked as CVE-2024-4577 and carrying a severity rating of 9.8 out of 10, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to convert user-supplied input into characters that pass malicious commands to the main PHP application. Exploits effectively bypass the fix for CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.
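
PHP itself is the thing to patch, but administrators can also hunt for exploitation attempts in web server logs. Below is a minimal Python sketch of that idea; the log format, the ".php" path check, and the %AD heuristic are assumptions modeled on the public proof-of-concept exploits, not an official detection rule.

    import re

    # Flag the URL-encoded soft hyphen (%AD) that Windows' "Best Fit" mapping
    # converts to a regular hyphen, letting attackers smuggle extra
    # command-line arguments to php-cgi.
    SOFT_HYPHEN = re.compile(r"%AD", re.IGNORECASE)

    def flag_best_fit_probes(access_log_lines):
        """Yield request lines aimed at PHP that contain the %AD byte."""
        for line in access_log_lines:
            if ".php" in line and SOFT_HYPHEN.search(line):
                yield line

    # A request shaped like the public proof-of-concept exploits:
    sample = ["GET /index.php?%ADd+allow_url_include%3D1 HTTP/1.1"]
    print(list(flag_best_fit_probes(sample)))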

Retired engineer discovers 55-year-old bug in Lunar Lander computer game code

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game's physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come.

The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module's descent onto the Moon's surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel.
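
To give a flavor of those mechanics, here is a toy Python sketch of a single ten-second turn of the descent; the constants and simplified physics are illustrative assumptions, not Storer's original FOCAL code.

    # Toy model of one Lunar Lander turn (illustrative constants, not the
    # original game's physics).
    LUNAR_GRAVITY = 1.62       # m/s^2
    EXHAUST_VELOCITY = 2500.0  # m/s, assumed engine efficiency
    TURN = 10.0                # a new burn decision every 10 seconds

    def descend(altitude, velocity, mass, burn_rate):
        """Advance one turn. Negative velocity means falling; burn_rate is kg/s."""
        thrust_accel = EXHAUST_VELOCITY * burn_rate / mass
        net_accel = thrust_accel - LUNAR_GRAVITY
        altitude += velocity * TURN + 0.5 * net_accel * TURN ** 2
        velocity += net_accel * TURN
        mass -= burn_rate * TURN
        return altitude, velocity, mass

    # Burn 10 kg/s for one turn, starting 1,000 m up and falling at 50 m/s:
    print(descend(1000.0, -50.0, 16500.0, 10.0))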

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.

“Simulation of keyboard activity” leads to firing of Wells Fargo employees

Last month, Wells Fargo terminated over a dozen bank employees following an investigation into claims of faking work activity on their computers, according to a Bloomberg report.

A Financial Industry Regulatory Authority (FINRA) search conducted by Ars confirmed that the fired members of the firm's wealth and investment management division were "discharged after review of allegations involving simulation of keyboard activity creating impression of active work."

A rise in remote work during the COVID-19 pandemic accelerated the adoption of remote worker surveillance techniques, especially those using software installed on machines that keeps track of activity and reports back to corporate management. It's worth noting that the Bloomberg report says the FINRA filing does not specify whether the fired Wells Fargo employees were simulating activity at home or in an office.

Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

On Monday, Apple announced it would be integrating OpenAI's ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. The deal paves the way for future third-party AI model integrations, but given Google's multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT's placement on its devices compensation enough.

"Apple isn’t paying OpenAI as part of the partnership," writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. "Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments."

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT's capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

Turkish student creates custom AI device for cheating university exam, gets arrested

On Saturday, Turkish police arrested and detained a prospective university student accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person's eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a "router" (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

New Stable Diffusion 3 release excels at AI-generated body horror

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. (credit: HorneyMetalBeing)

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]," details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seem to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

One of the major sellers of detailed driver behavioral data is shutting down

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize the rich telematics data regularly sent from cars back to their manufacturers. But The New York Times' Kashmir Hill reported a concrete example: drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.

China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says

Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on FortiGate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, designed specifically for the underlying FortiOS operating system, could permanently reside on devices even through reboots and firmware updates. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

Apple and OpenAI currently have the most misunderstood partnership in tech

On Monday, Apple premiered "Apple Intelligence" during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we've seen confusion on social media about why Apple didn't develop a cutting-edge GPT-4-like chatbot internally. Despite Apple's year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple's lack of innovation.

"This is really strange. Surely Apple could train a very good competing LLM if they wanted? They've had a year," wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misconceptions about it—saying things like, "It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!"

Hackers steal “significant volume” of data from hundreds of Snowflake customers

As many as 165 customers of cloud storage provider Snowflake have been compromised by a group that obtained login credentials through information-stealing malware, researchers said Monday.

On Friday, LendingTree subsidiary QuoteWizard confirmed it was among the customers notified by Snowflake that it was affected in the incident. LendingTree spokesperson Megan Greuling said the company is in the process of determining whether data stored on Snowflake has been stolen.

“That investigation is ongoing,” she wrote in an email. “As of this time, it does not appear that consumer financial account information was impacted, nor information of the parent entity, Lending Tree.”

Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes email summarization, image and emoji generation, and the ability for Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says that Apple Intelligence features will be widely available later this year and will be available as a beta test for developers this summer.

The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."

Nasty bug with very simple exploit hits PHP just in time for the weekend

A critical vulnerability in the PHP programming language can be trivially exploited to execute malicious code on Windows devices, security researchers warned as they urged those affected to take action before the weekend starts.

Within 24 hours of the vulnerability and accompanying patch being published, researchers from the nonprofit security organization Shadowserver reported Internet scans designed to identify servers that are susceptible to attacks. That—combined with (1) the ease of exploitation, (2) the availability of proof-of-concept attack code, (3) the severity of remotely executing code on vulnerable machines, and (4) the widely used XAMPP platform being vulnerable by default—has prompted security practitioners to urge admins to check whether their PHP servers are affected before starting the weekend.

When “Best Fit” isn't

“A nasty bug with a very simple exploit—perfect for a Friday afternoon,” researchers with security firm WatchTowr wrote.

VMware customers may stay, but Broadcom could face backlash “for years to come”

After acquiring VMware, Broadcom swiftly enacted widespread changes that resulted in strong public backlash. A new survey of 300 director-level IT workers at North American companies that are VMware customers provides insight into the customer reaction to Broadcom's overhaul.

The survey released Thursday doesn't provide feedback from every VMware customer, but it's the first time we've seen responses from IT decision-makers working for companies paying for VMware products. It echoes concerns expressed at the announcement of some of Broadcom's more controversial changes to VMware, like the end of perpetual licenses and growing costs.

CloudBolt Software commissioned Wakefield Research, a market research agency, to run the study from May 9 through May 23. The "CloudBolt Industry Insights Reality Report: VMware Acquisition Aftermath" includes responses from workers at 150 companies with fewer than 1,000 workers and 150 companies with more than 1,000 workers. Survey respondents were invited via email and took the survey online, with the report authors writing that results are subject to sampling variation of ±5.7 percentage points at a 95 percent confidence level.
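
That ±5.7-point figure matches the textbook worst-case margin of error for a sample of 300 at a 95 percent confidence level, which is easy to verify:

    import math

    n = 300   # survey respondents
    z = 1.96  # z-score for a 95 percent confidence level
    p = 0.5   # worst-case proportion, which maximizes the margin
    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"±{margin:.1%}")  # ±5.7%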

7,000 LockBit decryption keys now in the hands of the FBI, offering victims hope

The FBI is urging victims of one of the most prolific ransomware groups to come forward after agents recovered thousands of decryption keys that may allow the recovery of data that has remained inaccessible for months or years.

The revelation, made Wednesday by a top FBI official, comes three months after an international roster of law enforcement agencies seized servers and other infrastructure used by LockBit, a ransomware syndicate that authorities say has extorted more than $1 billion from 7,000 victims around the world. Authorities said at the time that they took control of 1,000 decryption keys, 4,000 accounts, and 34 servers and froze 200 cryptocurrency accounts associated with the operation.

In a speech before a cybersecurity conference in Boston, FBI Cyber Assistant Director Bryan Vorndran said Wednesday that agents have also recovered an asset that will be of intense interest to thousands of LockBit victims—the decryption keys that could allow them to unlock data that's been held for ransom by LockBit associates.

DuckDuckGo offers “anonymous” access to AI chatbots through new service

On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can readily output inaccurate information, the site lets users test different mid-range LLMs without having to install anything or sign up for an account.

DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

Russian agents deploy AI-produced Tom Cruise narrator to tar Summer Olympics

A visual from the fake documentary Olympics Has Fallen, produced by Russia-affiliated influence actor Storm-1679. (credit: Microsoft)

Last year, a feature-length documentary purportedly produced by Netflix began circulating on Telegram. Titled "Olympics Has Fallen" and narrated by a voice with a striking similarity to that of actor Tom Cruise, it sharply criticized the leadership of the International Olympic Committee. The slickly produced film, claiming five-star reviews from The New York Times, Washington Post, and BBC, was quickly amplified on social media. Among those seemingly endorsing the documentary were celebrities on the platform Cameo.

A recently published report by Microsoft (PDF) said the film was not a documentary, that it had received no such reviews, and that the narrator's voice was an AI-produced deepfake of Cruise. It also said the endorsements on Cameo were faked. The Microsoft Threat Intelligence report went on to say that the fraudulent documentary and endorsements were only one of many elaborate hoaxes created by agents of the Russian government in a yearlong influence operation intended to discredit the International Olympic Committee (IOC) and deter participation and attendance at the Paris Olympics starting next month.

Microsoft's report details several other examples of the Kremlin's ongoing influence operation.

Ex-OpenAI staff call for “right to warn” about AI risks without retaliation

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled "A Right to Warn about Advanced Artificial Intelligence," has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from "further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."

They also assert that AI companies possess substantial non-public information about their systems' capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.

London hospitals declare emergency following ransomware attack

A ransomware attack that crippled a London-based medical testing and diagnostics provider has led several major hospitals in the city to declare a critical incident emergency and cancel non-emergency surgeries and pathology appointments, it was widely reported Tuesday.

The attack was detected Monday against Synnovis, a supplier of blood tests, swabs, bowel tests, and other hospital services in six London boroughs. The company said the attack "affected all Synnovis IT systems, resulting in interruptions to many of our pathology services." It gave no estimate of when its systems would be restored and provided no details about the attack or who was behind it.

Major impact

The outage has led hospitals, including Guy's and St Thomas' and King's College Hospital Trusts, to cancel operations and procedures involving blood transfusions. The cancellations include transplant surgeries, which require blood transfusions.

Zoom CEO envisions AI deepfakes attending meetings in your place

Zoom CEO Eric Yuan has a vision for the future of work: sending your AI-powered digital twin to attend meetings on your behalf. In an interview with The Verge's Nilay Patel published Monday, Yuan shared his plans for Zoom to become an "AI-first company," using AI to automate tasks and reduce the need for human involvement in day-to-day work.

"Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process," Yuan said in the interview. "We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs."

LLMs are large language models—text-predicting AI models that power AI assistants like ChatGPT and Microsoft Copilot. They can output very convincing human-like text based on probabilities, but they are far from being able to replicate human reasoning. Still, Yuan suggests that instead of relying on a generic LLM to impersonate you, in the future, people will train custom LLMs to simulate each person.

Ticketmaster hacked in what’s believed to be a spree hitting Snowflake customers

Cloud storage provider Snowflake said that accounts belonging to multiple customers have been hacked after threat actors obtained credentials through info-stealing malware or by purchasing them on online crime forums.

Ticketmaster parent Live Nation—which disclosed Friday that hackers gained access to data it stored through an unnamed third-party provider—told TechCrunch the provider was Snowflake. The live-event ticket broker said it identified the hack on May 20, and a week later, a “criminal threat actor offered what it alleged to be Company user data for sale via the dark web.”

Ticketmaster is one of six Snowflake customers to be hit in the hacking campaign, said independent security researcher Kevin Beaumont, citing conversations with people inside the affected companies. The Australian Signals Directorate said Saturday it knew of "successful compromises of several companies utilizing Snowflake environments." Researchers with security firm Hudson Rock said in a now-deleted post that Santander, Spain's biggest bank, was also hacked in the campaign. The researchers cited online text conversations with the threat actor. Last month, Santander disclosed a data breach affecting customers in Chile, Spain, and Uruguay.

Nvidia jumps ahead of itself and reveals next-gen “Rubin” AI chips in keynote tease

On Sunday, Nvidia CEO Jensen Huang reached beyond Blackwell and revealed the company's next-generation AI-accelerating GPU platform during his keynote at Computex 2024 in Taiwan. Huang also detailed plans for an annual tick-tock-style upgrade cycle of its AI acceleration platforms, mentioning an upcoming Blackwell Ultra chip slated for 2025 and a subsequent platform called "Rubin" set for 2026.

Nvidia's data center GPUs currently power a large majority of cloud-based AI models, such as ChatGPT, in both development (training) and deployment (inference) phases, and investors are watching the company closely, expecting it to keep that run going.

During the keynote, Huang seemed somewhat hesitant to make the Rubin announcement, perhaps wary of invoking the so-called Osborne effect, whereby a company's premature announcement of the next iteration of a tech product eats into the current iteration's sales. "This is the very first time that this next click has been made," Huang said, holding up his presentation remote just before the Rubin announcement. "And I'm not sure yet whether I'm going to regret this or not."

Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."

Google’s AI Overview is flawed by design, and a new company blog post hints at why

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature with a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it's admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.

Federal agency warns critical Linux vulnerability being actively exploited

The US Cybersecurity and Infrastructure Security Agency has added a critical security bug in Linux to its list of vulnerabilities known to be actively exploited in the wild.

The vulnerability, tracked as CVE-2024-1086 and carrying a severity rating of 7.8 out of a possible 10, allows people who have already gained a foothold inside an affected system to escalate their system privileges. It’s the result of a use-after-free error, a class of vulnerability that occurs in software written in the C and C++ languages when a process continues to access a memory location after it has been freed or deallocated. Use-after-free vulnerabilities can result in remote code execution or privilege escalation.

The vulnerability, which affects Linux kernel versions 5.14 through 6.6, resides in nf_tables, a component of the kernel's Netfilter framework, which facilitates a variety of network operations, including packet filtering, network address [and port] translation (NA[P]T), packet logging, userspace packet queueing, and other packet mangling. It was patched in January, but as the CISA advisory indicates, some production systems have yet to install the fix. At the time this Ars post went live, there were no known details about the active exploitation.

Tech giants form AI group to counter Nvidia with new interconnect standard

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.

Law enforcement operation takes aim at an often-overlooked cybercrime linchpin

An international cast of law enforcement agencies has struck a blow at a cybercrime linchpin that’s as obscure as it is instrumental in the mass-infection of devices: so-called droppers, the sneaky software that’s used to install ransomware, spyware, and all manner of other malware.

Europol said Wednesday it made four arrests, took down 100 servers, and seized 2,000 domain names that were facilitating six of the best-known droppers. Officials also added eight fugitives linked to the enterprises to Europe’s Most Wanted list. The droppers named by Europol are IcedID, SystemBC, Pikabot, Smokeloader, Bumblebee, and Trickbot.

Droppers provide two specialized functions. First, they use encryption, code obfuscation, and similar techniques to cloak malicious code inside a packer or other form of container. These containers are then distributed through email attachments and malicious websites, or bundled alongside legitimate software promoted through malicious web ads. Second, the malware droppers serve as specialized botnets that facilitate the installation of additional malware.

Mystery malware destroys 600,000 routers from a single ISP during 72-hour span

One day last October, subscribers to an ISP known as Windstream began flooding message boards with reports that their routers had suddenly stopped working and remained unresponsive to reboots and all other attempts to revive them.

“The routers now just sit there with a steady red light on the front,” one user wrote, referring to the ActionTec T3200 router models Windstream provided to both them and a next door neighbor. “They won't even respond to a RESET.”

In the messages—which appeared over a few days beginning on October 25—many Windstream users blamed the ISP for the mass bricking. They said it was the result of the company pushing updates that poisoned the devices. Windstream’s Kinetic broadband service has about 1.6 million subscribers in 18 states, including Iowa, Alabama, Arkansas, Georgia, and Kentucky. For many customers, Kinetic provides an essential link to the outside world.

OpenAI board first learned about ChatGPT from Twitter, according to former member

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.

Researchers crack 11-year-old password, recover $3 million in bitcoin

Two years ago, when “Michael,” an owner of cryptocurrency, contacted Joe Grand to help recover access to about $2 million worth of bitcoin that he stored in encrypted format on his computer, Grand turned him down.

Michael, who is based in Europe and asked to remain anonymous, stored the cryptocurrency in a password-protected digital wallet. He generated a 20-character password with the RoboForm password manager but, worried that someone would hack his computer and obtain it, kept the password in a file encrypted with a tool called TrueCrypt rather than in RoboForm itself. At some point, that file got corrupted, and Michael lost access to the password he had generated to secure his 43.6 BTC (worth a total of about 4,000 euros, or $5,300, in 2013).

“At [that] time, I was really paranoid with my security,” he laughs.
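
The full article goes on to explain the crack. As was widely reported, RoboForm versions of that era derived generated passwords from the system clock, so knowing roughly when a password was created shrinks the search space enormously. The Python below is a toy illustration of that weakness, not RoboForm's actual algorithm:

    import random
    import string
    import time

    # Illustration only: any generator whose output is a pure function of
    # the clock can be brute-forced by replaying candidate timestamps.
    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

    def password_for_timestamp(ts, length=20):
        rng = random.Random(ts)  # seed derived entirely from the clock
        return "".join(rng.choice(ALPHABET) for _ in range(length))

    # An attacker who knows the generation window only has to try its seconds
    # (the window below is an assumed example):
    start = int(time.mktime((2013, 4, 1, 0, 0, 0, 0, 0, -1)))
    for ts in range(start, start + 60):
        print(password_for_timestamp(ts))  # one candidate per second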

US sanctions operators of “free VPN” that routed crime traffic through user PCs

The US Treasury Department has sanctioned three Chinese nationals for their involvement in a VPN-powered botnet with more than 19 million residential IP addresses they rented out to cybercriminals to obfuscate their illegal activities, including COVID-19 aid scams and bomb threats.

The criminal enterprise, the Treasury Department said Tuesday, was a residential proxy service known as 911 S5. Such services provide a bank of IP addresses belonging to everyday home users for customers to route Internet connections through. When accessing a website or other Internet service, the connection appears to originate with the home user.

In 2022, researchers at the University of Sherbrooke profiled 911[.]re, a service that appears to be an earlier version of 911 S5. At the time, its infrastructure comprised 120,000 residential IP addresses. This pool was created using one of two free VPNs—MaskVPN and DewVPN—marketed to end users. Besides acting as a legitimate VPN, the software also operated as a botnet that covertly turned users’ devices into a proxy server. The complex structure was designed with the intent of making the botnet hard to reverse engineer.
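
To make the mechanic concrete, here is a short Python sketch of how a customer of such a service would route traffic through a residential proxy; the SOCKS5 address is a documentation-range placeholder, not a real 911 S5 endpoint, and the requests library needs its optional SOCKS extra (pip install requests[socks]).

    import requests

    # Route the request through a (hypothetical) residential proxy. The
    # destination site sees the home user's IP, not the real client's.
    proxies = {
        "http": "socks5://user:pass@203.0.113.7:1080",
        "https": "socks5://user:pass@203.0.113.7:1080",
    }
    response = requests.get("https://example.com/", proxies=proxies, timeout=10)
    print(response.status_code)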

OpenAI training its next major AI model, forms new safety committee

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to two weeks of public setbacks for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

Newly discovered ransomware uses BitLocker to encrypt victim data

A previously unknown piece of ransomware, dubbed ShrinkLocker, encrypts victim data using the BitLocker feature built into the Windows operating system.

BitLocker is a full-volume encryptor that debuted in 2007 with the release of Windows Vista. Users employ it to encrypt entire hard drives to prevent people from reading or modifying data in the event they get physical access to the disk. Starting with the rollout of Windows 10, BitLocker by default has used the 128-bit and 256-bit XTS-AES encryption algorithm, giving the feature extra protection from attacks that rely on manipulating cipher text to cause predictable changes in plain text.
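
For a sense of what XTS mode looks like in practice, here is a minimal sketch using the third-party Python "cryptography" package; the key handling and per-sector tweak below are illustrative assumptions, not BitLocker's actual key-management scheme.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                # XTS-AES-256: two 256-bit keys concatenated
    tweak = (0).to_bytes(16, "little")  # per-sector tweak value; sector 0 here
    sector = b"A" * 512                 # one 512-byte disk sector of plaintext

    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    ciphertext = encryptor.update(sector) + encryptor.finalize()
    print(len(ciphertext))              # 512: XTS preserves the sector size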

Recently, researchers from security firm Kaspersky found a threat actor using BitLocker to encrypt data on systems located in Mexico, Indonesia, and Jordan. The researchers named the new ransomware ShrinkLocker, both for its use of BitLocker and because it shrinks the size of each non-boot partition by 100 MB and splits the newly unallocated space into new primary partitions of the same size.

Google’s “AI Overview” can give false, misleading, and dangerous answers

If you use Google regularly, you may have noticed the company's new AI Overviews providing summarized answers to some of your questions in recent days. If you use social media regularly, you may have come across many examples of those AI Overviews being hilariously or even dangerously wrong.

Factual errors can pop up in existing LLM chatbots as well, of course. But the potential damage that can be caused by AI inaccuracy gets multiplied when those errors appear atop the ultra-valuable web real estate of the Google search results page.

"The examples we've seen are generally very uncommon queries and aren’t representative of most people’s experiences," a Google spokesperson told Ars. "The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web."

Crooks plant backdoor in software used by courtrooms around the world

A software maker serving more than 10,000 courtrooms throughout the world hosted an application update containing a hidden backdoor that maintained persistent communication with a malicious website, researchers reported Thursday, in the latest example of a supply-chain attack.

The software, known as the JAVS Viewer 8, is a component of the JAVS Suite 8, an application package courtrooms use to record, play back, and manage audio and video from proceedings. Its maker, Louisville, Kentucky-based Justice AV Solutions, says its products are used in more than 10,000 courtrooms throughout the US and 11 other countries. The company has been in business for 35 years.

JAVS Viewer users at high risk

Researchers from security firm Rapid7 reported that a version of the JAVS Viewer 8 available for download on javs.com contained a backdoor that gave an unknown threat actor persistent access to infected devices. The malicious download, planted inside an executable file that installs the JAVS Viewer version 8.3.7, was available no later than April 1, when a post on X (formerly Twitter) reported it. It’s unclear when the backdoored version was removed from the company’s download page. JAVS representatives didn’t immediately respond to questions sent by email.

Bing outage shows just how little competition Google search really has

Bing, Microsoft's search engine platform, went down early this morning. That meant searches from Microsoft's Edge browser didn't work for users who had yet to change their default provider. It also meant that services relying on Bing's search API—Microsoft's own Copilot, ChatGPT search, Yahoo, Ecosia, and DuckDuckGo—similarly failed.

Services were largely restored by the start of the working morning on the East Coast, but the timing feels apt, concerning, or some combination of the two. Google, the consistently dominant search platform, just last week announced and debuted AI Overviews as a default addition to all searches. If you don't want an AI response but still want to use Google, you can hunt down the new "Web" option in a menu, or you can, per Ernie Smith, tack "&udm=14" onto your search or use Smith's own "Konami code" shortcut page.
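
The "&udm=14" trick is simply a URL parameter that forces the plain "Web" results tab. A quick sketch of building such a URL:

    from urllib.parse import urlencode

    # Build a Google search URL that skips AI Overviews by forcing the
    # plain "Web" results tab via udm=14.
    query = {"q": "how to change a bike tire", "udm": "14"}
    print("https://www.google.com/search?" + urlencode(query))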

If dismay about AI's hallucinations, power draw, or pizza recipes concerns you—along with perhaps broader Google issues involving privacy, tracking, news, SEO, or monopoly power—most of your other major options were brought down by a single API outage this morning. Moving past that kind of single point of vulnerability will take some work, both by the industry and by you, the person wondering if there's a real alternative.

A root-server at the Internet’s core lost touch with its peers. We still don’t know why.

For more than four days, a server at the very core of the Internet’s domain name system was out of sync with its 12 root server peers due to an unexplained glitch that could have caused stability and security problems worldwide. This server, maintained by Internet carrier Cogent Communications, is one of the 13 root servers that provision the Internet’s root zone, which sits at the top of the hierarchical distributed database known as the domain name system, or DNS.

Here's a simplified recap of the way the domain name system works and how root servers fit in:

When someone enters wikipedia.org in their browser, the servers handling the request first must translate the human-friendly domain name into an IP address. This is where the domain name system comes in. The first step in the DNS process is the browser querying the stub resolver in the local operating system. The stub resolver forwards the query to a recursive resolver, which may be provided by the user's ISP or by a public service such as 1.1.1.1 or 8.8.8.8 from Cloudflare and Google, respectively.
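
Here is a minimal sketch of that stub-to-recursive-resolver step using the third-party dnspython package (an assumption; install it with pip install dnspython):

    import dns.resolver

    # Ask a public recursive resolver directly, as a stub resolver would.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.1"]
    answer = resolver.resolve("wikipedia.org", "A")
    for record in answer:
        print(record.address)  # the IP address the browser would connect to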

EmTech Digital 2024: A thoughtful look at AI’s pros and cons with minimal hype

Nathan Benaich of Air Street Capital delivers the opening presentation on the state of AI at EmTech Digital 2024 on May 22, 2024. (credit: Benj Edwards)

CAMBRIDGE, Massachusetts—On Wednesday, AI enthusiasts and experts gathered to hear a series of presentations about the state of AI at EmTech Digital 2024 on the Massachusetts Institute of Technology's campus. The event was hosted by the publication MIT Technology Review. The overall consensus is that generative AI is still in its very early stages—with policy, regulations, and social norms still being established—and its growth is likely to continue into the future.

I was there to check the event out. MIT is the birthplace of many tech innovations—including the first action-oriented computer video game—so it felt fitting to hear talks about the latest tech craze in the same building that hosts MIT's Media Lab on its sprawling and lush campus.

EmTech's speakers included AI researchers, policy experts, critics, and company spokespeople. A corporate feel pervaded the event due to strategic sponsorships, but it was handled in a low-key way that matches the level-headed tech coverage coming out of MIT Technology Review. After each presentation, MIT Technology Review staff—such as Editor-in-Chief Mat Honan and Senior Reporter Melissa Heikkilä—did a brief sit-down interview with the speaker, pushing back on some points and emphasizing others. Then the speaker took a few audience questions if time allowed.
