
Tesla investors sue Elon Musk for diverting carmaker’s resources to xAI

14 June 2024 at 13:11
A large Tesla logo (credit: Getty Images | SOPA Images)

A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk's diversion of resources to his xAI venture. The alleged diversion includes hiring AI employees away from Tesla, rerouting microchips from Tesla to X (formerly Twitter) and xAI, and "xAI's use of Tesla's data to develop xAI's own software/hardware, all without compensation to Tesla," the lawsuit said.

The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk's equity stake in xAI to Tesla.

"Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not," the lawsuit says.


Microsoft delays Recall again, won’t debut it with new Copilot+ PCs after all

13 June 2024 at 22:40
Recall is part of Microsoft's Copilot+ PC program. (credit: Microsoft)

Microsoft will be delaying its controversial Recall feature again, according to an updated blog post by Windows and Devices VP Pavan Davuluri. And when the feature does return "in the coming weeks," Davuluri writes, it will be as a preview available to PCs in the Windows Insider Program, the same public testing and validation pipeline that all other Windows features usually go through before being released to the general populace.

Recall is a new Windows 11 AI feature that will be available on PCs that meet the company's requirements for its "Copilot+ PC" program. Copilot+ PCs need at least 16GB of RAM, 256GB of storage, and a neural processing unit (NPU) capable of at least 40 trillion operations per second (TOPS). The first (and for a few months, only) PCs that will meet this requirement are all using Qualcomm's Snapdragon X Plus and X Elite Arm chips, with compatible Intel and AMD processors following later this year. Copilot+ PCs ship with other generative AI features, too, but Recall's widely publicized security problems have sucked most of the oxygen out of the room so far.

The Windows Insider preview of Recall will still require a PC that meets the Copilot+ requirements, though third-party scripts may be able to turn on Recall for PCs without the necessary hardware. We'll know more when Recall makes its reappearance.


This photo got 3rd in an AI art contest—then its human photographer came forward

13 June 2024 at 18:34
To be fair, I wouldn't put it past an AI model to forget the flamingo's head. (credit: Miles Astray)

A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.

The 1839 Awards launched last year as a way to "honor photography as an art form," with a panel of experienced judges who work with photos at The New York Times, Christie's, and Getty Images, among others. The contest rules sought to give AI images their own category, separating the work of increasingly impressive image generators from "those who use the camera as their artistic medium," as the 1839 Awards site puts it.

For the non-AI categories, the 1839 Awards rules note that they "reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files." Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.


Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

13 June 2024 at 13:20
The OpenAI and Apple logos together. (credit: OpenAI / Apple / Benj Edwards)

On Monday, Apple announced it would be integrating OpenAI's ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google's multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT's placement on its devices as compensation enough.

"Apple isn’t paying OpenAI as part of the partnership," writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. "Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments."

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT's capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.


Cop busted for unauthorized use of Clearview AI facial recognition resigns

13 June 2024 at 12:16
(credit: Francesco Carta fotografo | Moment)

An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear "misuse" of Clearview AI's controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as "the world's largest facial recognition network." The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.


Wyoming mayoral candidate wants to govern by AI bot

By: WIRED
13 June 2024 at 10:01
Digital chatbot icon on a futuristic tech background. (credit: dakuq via Getty)

Victor Miller is running for mayor of Cheyenne, Wyoming, with an unusual campaign promise: If elected, he will not be calling the shots—an AI bot will. VIC, the Virtual Integrated Citizen, is a ChatGPT-based chatbot that Miller created. And Miller says the bot has better ideas—and a better grasp of the law—than many people currently serving in government.

“I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,” he says. According to Miller, VIC will make the decisions, and Miller will be its “meat puppet,” attending meetings, signing documents, and otherwise doing the corporeal job of running the city.

But whether VIC—and Victor—will be allowed to run at all is still an open question.


Turkish student creates custom AI device for cheating university exam, gets arrested

12 June 2024 at 16:52
A photo illustration of what a shirt-button camera could look like. (credit: Aurich Lawson | Getty Images)

On Saturday, Turkish police arrested and detained a prospective university student accused of devising an elaborate scheme that used AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person's eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a "router" (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.


New Stable Diffusion 3 release excels at AI-generated body horror

12 June 2024 at 15:26
An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. (credit: HorneyMetalBeing)

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wildly anatomically incorrect visual abominations with ease.

A Reddit thread titled "Is this release supposed to be a joke? [SD3-2B]" details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seem to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.


One of the major sellers of detailed driver behavioral data is shutting down

12 June 2024 at 13:57
Interior of a car with different aspects highlighted, as if by a camera or AI. (credit: Getty Images)

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize the rich telematics data regularly sent from cars back to their manufacturers. But a concrete example of this was reported by The New York Times' Kashmir Hill, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.


Elon Musk drops claims that OpenAI abandoned mission

11 June 2024 at 17:18
(credit: JC Olivera / Stringer | WireImage)

While Musk has spent much of today loudly criticizing the Apple/OpenAI deal, he also sought to drop his lawsuit against OpenAI, a court filing showed.

In the filing, Musk's lawyer, Morgan Chu, notified the Superior Court of California in San Francisco of Musk's request for dismissal of his entire complaint without prejudice.

There are currently no further details as to why Musk decided to drop the suit.


Elon Musk is livid about new OpenAI/Apple deal

11 June 2024 at 16:50
(credit: Anadolu / Contributor | Anadolu)

Elon Musk is so opposed to Apple's plan to integrate OpenAI's ChatGPT with device operating systems that he's seemingly spreading misconceptions while heavily criticizing the partnership.

On X (formerly Twitter), Musk has been criticizing alleged privacy and security risks since the plan was announced Monday at Apple's annual Worldwide Developers Conference.

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies," Musk posted on X. "That is an unacceptable security violation." In another post responding to Apple CEO Tim Cook, Musk wrote, "Don't want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies."


Apple and OpenAI currently have the most misunderstood partnership in tech

11 June 2024 at 13:29
A man talks into a smartphone. He isn't using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered "Apple Intelligence" during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we've seen confusion on social media about why Apple didn't develop a cutting-edge GPT-4-like chatbot internally. Despite Apple's year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple's lack of innovation.

"This is really strange. Surely Apple could train a very good competing LLM if they wanted? They've had a year," wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misconceptions about it—saying things like, "It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!"


Adobe to update vague AI terms after users threaten to cancel subscriptions

11 June 2024 at 13:06
(credit: bennymarty | iStock Editorial / Getty Images Plus)

Adobe has promised to update its terms of service to make it "abundantly clear" that the company will "never" train generative AI on creators' content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to its terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on it. The pop-up blocked access to Adobe apps—and creatives' in-progress projects—until users agreed to the new terms.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a "yearly subscription comes with a significant discount."


AI trained on photos from kids’ entire childhood without their consent

10 June 2024 at 18:37
(credit: RicardoImagen | E+)

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
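Audits like Han's operate on the dataset's metadata rather than on the images themselves. Below is a minimal sketch of that kind of scan, assuming a locally downloaded Parquet shard of LAION-style metadata; the file name is hypothetical, and URL and TEXT follow the column names in LAION's published metadata schema:

```python
# Minimal sketch of auditing a LAION-style metadata shard for links to
# Brazilian hosts. Assumes a local Parquet file with the published LAION
# columns URL and TEXT; "part-00000.parquet" is a hypothetical file name.
import pandas as pd

df = pd.read_parquet("part-00000.parquet", columns=["URL", "TEXT"])

# Flag rows whose image URL is served from a .br domain—a crude proxy for
# Brazilian sources, similar in spirit to HRW's targeted sampling.
hits = df[df["URL"].str.contains(r"https?://[^/]+\.br(?:/|$)", na=False)]

print(f"{len(hits)} of {len(df)} rows point at .br hosts")
print(hits.head(10).to_string(index=False))
```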


Apple integrates ChatGPT into Siri, iOS, and macOS

10 June 2024 at 15:29
The AIs are learning to cooperate! Siri talks to ChatGPT. (credit: Apple)

Reports of Apple signing a deal with OpenAI are true: ChatGPT is coming to your Apple gear.

First up is Siri, which can tap into ChatGPT to answer voice questions. If Siri thinks ChatGPT can help answer your question, you'll get a pop-up permission box asking if you want to send your question to the chatbot. The response will come back in a window indicating that the information came from an outside source. This is the same way Siri treats a search engine (namely, Google), so it will be interesting to see exactly where Siri draws the line between ChatGPT and a search engine. In Apple's lone example, the input asked Siri to "help me plan a five-course meal" given certain ingredient limitations. That sort of ultra-specific request is something you can't make of a traditional search engine.

Siri can also send photos to ChatGPT. In Apple's example, the user snapped a picture of a wooden deck and asked Siri about decorating options. It sounds like the standard generative AI summary features will be here, too, with Apple SVP of Software Engineering Craig Federighi mentioning that "you can also ask questions about your documents, presentations, or PDFs."


Apple unveils “Apple Intelligence” AI features for iOS, iPadOS, and macOS

10 June 2024 at 15:15
(credit: Apple)

On Monday, Apple debuted "Apple Intelligence," a new suite of free AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. These features are achieved through a combination of on-device and cloud processing, with a strong emphasis on privacy. Apple says Apple Intelligence features will roll out widely later this year, with a developer beta this summer.

The announcements came during a livestream WWDC keynote and a simultaneous event attended by the press on Apple's campus in Cupertino, California. In an introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but the introduction of large language models (LLMs) presents new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple's approach.

At last year's WWDC, Apple avoided using the term "AI" completely, instead preferring terms like "machine learning" as Apple's way of avoiding buzzy hype while integrating applications of AI into apps in useful ways. This year, Apple figured out a new way to largely avoid the abbreviation "AI" by coining "Apple Intelligence," a catchall branding term that refers to a broad group of machine learning, LLM, and image generation technologies. By our count, the term "AI" was used sparingly in the keynote—most notably near the end of the presentation when Apple executive Craig Federighi said, "It's AI for the rest of us."


Apple’s AI promise: “Your data is never stored or made accessible to Apple”

10 June 2024 at 15:05
Apple Senior VP of Software Engineering Craig Federighi announces "Private Cloud Compute" at WWDC 2024. (credit: Apple)

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new "Apple Intelligence" system it's integrating into its products will use a new "Private Cloud Compute" to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

"You should not have to hand over all the details of your life to be warehoused and analyzed in someone's AI cloud," Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls "a brand new standard for privacy and AI" is achieved through on-device processing. Federighi said "many" of Apple's generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.


iOS 18 adds Apple Intelligence, customizations, and makes Android SMS nicer

10 June 2024 at 13:47
Hands manipulating the Control Center on an iPhone. (credit: Apple)

The biggest feature in iOS 18, the one that affects the most people, was a single item in a comma-stuffed sentence by Apple software boss Craig Federighi: "Support for RCS."

As we noted when Apple announced its support for "RCS Universal Profile," a kind of minimum viable cross-device rich messaging, iPhone users getting RCS means SMS chains with Android users "will be slightly less awful." SMS messages will soon have read receipts, higher-quality media sending, and typing indicators, along with better security. And RCS messages can go over Wi-Fi when you don't have a cellular signal. Apple is certainly downplaying a major cross-platform compatibility upgrade, but it's a notable quality-of-life boost.

Prioritized notifications through Apple Intelligence.

Apple Intelligence, the new Siri, and the iPhone

iOS 18 is one of the major beneficiaries of Apple's AI rollout, dubbed "Apple Intelligence." Apple Intelligence promises to help iPhone users create and understand language and images, with the proper context from your phone's apps: photos, calendar, email, messages, and more.


Microsoft pulls release preview build of Windows 11 24H2 after Recall controversy

10 June 2024 at 11:27
The Recall feature provides a timeline of screenshots and a searchable database of text, thoroughly tracking everything about a person's PC usage. (credit: Microsoft)

On Friday, Microsoft announced major changes to its upcoming Recall feature after overwhelming criticism from security researchers, the press, and its users. Microsoft is turning Recall off by default when users set up PCs that are compatible with the feature, and it's adding additional authentication and encryption that will make it harder to access another user's Recall data on the same PC.

It's likely not a coincidence that Microsoft also quietly pulled the build of the Windows 11 24H2 update that it had been testing in its Release Preview channel for Windows Insiders. It's not unheard of for Microsoft to stop distributing a beta build of Windows after releasing it, but the Release Preview channel is typically the last stop for a Windows update before a wider release.

Microsoft hasn't provided a specific rationale for pulling the update; the blog post says the pause is "temporary" and the rollout will be resumed "in the coming weeks." Windows Insider Senior Program Manager Brandon LeBlanc posted on social media that the team was "working to get it rolling out again shortly."


Report: New “Apple Intelligence” AI features will be opt-in by default

7 June 2024 at 13:47
(credit: Apple)

Apple's Worldwide Developers Conference kicks off on Monday, and as usual, the company is expected to detail most of the big new features in this year's updates to iOS, iPadOS, macOS, and all of Apple's other operating systems.

The general consensus is that Apple plans to use this year's updates to integrate generative AI into its products for the first time. Bloomberg's Mark Gurman has a few implementation details that show how Apple's approach will differ somewhat from Microsoft's or Google's.

Gurman says that the "Apple Intelligence" features will include an OpenAI-powered chatbot, but it will otherwise focus on "features with broad appeal" rather than "whiz-bang technology like image and video generation." These include summaries for webpages, meetings, and missed notifications; a revamped version of Siri that can control apps in a more granular way; Voice Memos transcription; image enhancement features in the Photos app; suggested replies to text messages; automated sorting of emails; and the ability to "create custom emoji characters on the fly that represent phrases or words as they're being typed."


Tesla may be in trouble, but other EVs are selling just fine

7 June 2024 at 11:06
Generic electric car charging on a city street. (credit: Getty Images/3alexd)

Have electric vehicles been overhyped? A casual observer might have come to that conclusion after almost a year of stories in the media about EVs languishing on lots and letters to the White House asking for a national electrification mandate to be watered down or rolled back. EVs were even a pain point during last year's auto worker industrial action. But a look at the sales data paints a different picture, one where Tesla's outsize role in the market has had a distorting effect.

"EVs are the future. Our numbers bear that out. Current challenges will be overcome by the industry and government, and EVs will regain momentum and will ultimately dominate the automotive market," said Martin Cardell, head of global mobility solutions at consultancy firm EY.

Public perception hasn't been helped by recent memories of supply shortages and pandemic price gouging, but the chorus of concerns about EV sales became noticeably louder toward the end of last year and the beginning of 2024. EV sales in 2023 grew by 47 percent year on year, but the first three months of this year failed to show such massive growth. In fact, sales in Q1 2024 were up only 2.6 percent over the same period in 2023.


Outcry from big AI firms over California AI “kill switch” bill

7 June 2024 at 09:27
A finger poised over an electrical switch. (credit: Hajohoos via Getty)

Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to a strict safety framework including creating a “kill switch” to turn off their powerful AI models, in a growing battle over regulatory control of the cutting-edge technology.

The California Legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, including the three largest AI startups—OpenAI, Anthropic, and Cohere—as well as large language models run by Big Tech companies such as Meta.

The bill, passed by the state's Senate last month and set for a vote in the State Assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with "a hazardous capability," such as creating biological or nuclear weapons or aiding cyberattacks.


Meta uses “dark patterns” to thwart AI opt-outs in EU, complaint says

6 June 2024 at 17:25
(credit: Boris Zhitkov | Moment)

The European Center for Digital Rights, known as Noyb, has filed complaints in 11 European countries to halt Meta's plan to start training vaguely defined new AI technologies on European Union-based Facebook and Instagram users' personal posts and pictures.

Meta's AI training data will also be collected from third parties and from using Meta's generative AI features and interacting with pages, the company has said. Additionally, Meta plans to collect information about people who aren't on Facebook or Instagram but are featured in users' posts or photos. The only exception from AI training is made for private messages sent between "friends and family," which will not be processed, Meta's blog said, but private messages sent to businesses and Meta are fair game. And any data collected for AI training could be shared with third parties.

"Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta's new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future 'artificial intelligence technology,'" Noyb alleged in a press release.


US agencies to probe AI dominance of Nvidia, Microsoft, and OpenAI

6 June 2024 at 14:34
A large Nvidia logo at a conference hall. (credit: Getty Images | NurPhoto)

The US Justice Department and Federal Trade Commission reportedly plan investigations into whether Nvidia, Microsoft, and OpenAI are snuffing out competition in artificial intelligence technology.

The agencies struck a deal on how to divide up the investigations, The New York Times reported yesterday. Under this deal, the Justice Department will take the lead role in investigating Nvidia's behavior while the FTC will take the lead in investigating Microsoft and OpenAI.

The agencies' agreement "allows them to proceed with antitrust investigations into the dominant roles that Microsoft, OpenAI, and Nvidia play in the artificial intelligence industry, in the strongest sign of how regulatory scrutiny into the powerful technology has escalated," the NYT wrote.


DuckDuckGo offers “anonymous” access to AI chatbots through new service

6 June 2024 at 12:39
DuckDuckGo's AI Chat promotional image. (credit: DuckDuckGo)

On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT, all while attempting to preserve privacy and anonymity. Although the AI models involved can readily output inaccurate information, the site lets users test different mid-range LLMs without installing anything or signing up for an account.

DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.


Can a technology called RAG keep AI models from making stuff up?

6 June 2024 at 07:00
(credit: Aurich Lawson | Getty Images)

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”): a statistical gap-filling behavior that emerges when AI language models are asked to reproduce knowledge that wasn’t present in their training data. They generate plausible-sounding text that tends toward accuracy when the training data is solid but otherwise may be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.
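The core RAG loop is simple enough to sketch: retrieve documents relevant to the query, then hand them to the model inside the prompt so it answers from supplied text instead of guessing. Here is a minimal, illustrative version; the toy corpus, the word-overlap retriever (standing in for a real vector database), and the stubbed generate() call are all assumptions, not any particular product's API:

```python
# Minimal RAG sketch: retrieve relevant documents, then prepend them to the
# prompt so the model answers from supplied text rather than from memory.
# The corpus, scoring, and generate() stub are illustrative only.

DOCS = [
    "Air Canada's chatbot invented a refund policy that did not exist.",
    "The TYT is Turkey's national university entrance exam.",
    "LAION-5B contains 5.85 billion image-text pairs scraped from the web.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (vector-DB stand-in)."""
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Stub for an LLM call (any chat-completion API would go here)."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, "
        f"say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(rag_answer("What is the TYT exam?"))
```

The instruction to answer only from the supplied context is what gives RAG its anti-confabulation bite: the model is steered toward retrieved facts instead of its own statistical gap-filling.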


Top news app caught sharing “entirely false” AI-generated news

5 June 2024 at 16:57
(credit: gmast3r | iStock / Getty Images Plus)

After the most downloaded local news app in the US, NewsBreak, shared an AI-generated story about a fake New Jersey shooting last Christmas Eve, New Jersey police had to post a statement online to reassure troubled citizens that the story was "entirely false," Reuters reported.

"Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the cops' Facebook post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers."

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.


Ex-OpenAI staff call for “right to warn” about AI risks without retaliation

4 June 2024 at 17:52
Illustration of businesspeople with red blank speech bubble standing in line. (credit: Getty Images)

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled "A Right to Warn about Advanced Artificial Intelligence," has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from "further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."

They also assert that AI companies possess substantial non-public information about their systems' capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.


Zoom CEO envisions AI deepfakes attending meetings in your place

4 June 2024 at 15:23
Woman discussing work on video call with team members at office. (credit: Getty Images)

Zoom CEO Eric Yuan has a vision for the future of work: sending your AI-powered digital twin to attend meetings on your behalf. In an interview with The Verge's Nilay Patel published Monday, Yuan shared his plans for Zoom to become an "AI-first company," using AI to automate tasks and reduce the need for human involvement in day-to-day work.

"Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process," Yuan said in the interview. "We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs."

LLMs are large language models—text-predicting AI models that power AI assistants like ChatGPT and Microsoft Copilot. They can output very convincing human-like text based on probabilities, but they are far from being able to replicate human reasoning. Still, Yuan suggests that instead of relying on a generic LLM to impersonate you, in the future, people will train custom LLMs to simulate each person.
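That "based on probabilities" description is literal: at every step, an LLM assigns a probability to each token in its vocabulary and the next word is drawn from that distribution. A minimal sketch of next-token prediction, using Hugging Face's transformers library and the small GPT-2 model; this is purely illustrative, since Zoom hasn't said what models a digital twin would use:

```python
# Minimal sketch of next-token prediction with GPT-2 via Hugging Face
# transformers. Illustrative only; not Zoom's (unannounced) digital-twin stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The meeting is scheduled for"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab)

# Probability distribution over the ~50k-token vocabulary for the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
values, indices = probs.topk(5)
for p, idx in zip(values.tolist(), indices.tolist()):
    print(f"{tok.decode([idx])!r:>12}  p={p:.3f}")
```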


What kind of bug would make machine learning suddenly 40% worse at NetHack?

4 June 2024 at 14:52
Moon rendered in ASCII text. (credit: Aurich Lawson)

Members of the Legendary Computer Bugs Tribunal, honored guests, if I may have your attention? I would, humbly, submit a new contender for your esteemed judgment. You may or may not find it novel, you may even deign to call it a "bug," but I assure you, you will find it entertaining.

Consider NetHack. It is one of the all-time roguelike games, and I mean that in the more strict sense of that term. The content is procedurally generated, deaths are permanent, and the only thing you keep from game to game is your skill and knowledge. I do understand that the only thing two roguelike fans can agree on is how wrong the third roguelike fan is in their definition of roguelike, but, please, let us move on.

NetHack is great for machine learning…

Being a difficult game full of consequential choices and random challenges, as well as a "single-agent" game that can be generated and played at lightning speed on modern computers, NetHack is great for those working in machine learning—or imitation learning, actually, as detailed in Jens Tuyls' paper on how compute scaling affects single-agent game learning. Using Tuyls' model of expert NetHack behavior, Bartłomiej Cupiał and Maciej Wołczyk trained a neural network to play and improve itself using reinforcement learning.
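For the unfamiliar, the reinforcement learning loop boils down to: act, observe a reward, and nudge the policy toward actions that paid off. Below is a self-contained REINFORCE sketch on a toy two-armed bandit; this is purely illustrative of that loop, not the actual work, which trains deep networks in the NetHack Learning Environment:

```python
# Self-contained REINFORCE sketch on a two-armed bandit, illustrating the
# act/reward/update loop. The real NetHack work uses deep policies and the
# NetHack Learning Environment, not this toy.
import math
import random

logits = [0.0, 0.0]        # one preference score per arm
true_payout = [0.3, 0.7]   # arm 1 pays off more; the agent must discover this
lr = 0.1

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

for step in range(2000):
    probs = softmax(logits)
    arm = random.choices([0, 1], weights=probs)[0]
    reward = 1.0 if random.random() < true_payout[arm] else 0.0

    # Policy-gradient update: raise the log-probability of the chosen arm
    # in proportion to the reward received.
    for a in range(2):
        grad = (1.0 if a == arm else 0.0) - probs[a]
        logits[a] += lr * reward * grad

print("learned preference for the better arm:", softmax(logits)[1])
```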


Google’s AI Overviews misunderstand why people use Google

4 June 2024 at 13:31
Robot hand holding a glue bottle over a pizza and tomatoes. (credit: Aurich Lawson | Getty Images)

Last month, we looked into some of the most incorrect, dangerous, and downright weird answers generated by Google's new AI Overviews feature. Since then, Google has offered a partial apology/explanation for generating those kinds of results and has reportedly rolled back the feature's rollout for at least some types of queries.

But the more I've thought about that rollout, the more I've begun to question the wisdom of Google's AI-powered search results altogether. Even when the system doesn't give obviously wrong results, condensing search results into a neat, compact, AI-generated summary seems like a fundamental misunderstanding of how people use Google in the first place.

Reliability and relevance

When people type a question into the Google search bar, they only sometimes want the kind of basic reference information that can be found on a Wikipedia page or corporate website (or even a Google information snippet). Often, they're looking for subjective information where there is no one "right" answer: "What are the best Mexican restaurants in Santa Fe?" or "What should I do with my kids on a rainy day?" or "How can I prevent cheese from sliding off my pizza?"


Windows Recall demands an extraordinary level of trust that Microsoft hasn’t earned

4 June 2024 at 13:15
The Recall feature as it currently exists in Windows 11 24H2 preview builds. (credit: Andrew Cunningham)

Microsoft’s Windows 11 Copilot+ PCs come with quite a few new AI and machine learning-driven features, but the tentpole is Recall. Described by Microsoft as a comprehensive record of everything you do on your PC, the feature is pitched as a way to help users remember where they’ve been and to provide Windows extra contextual information that can help it better understand requests from and meet the needs of individual users.

This, as many users in infosec communities on social media immediately pointed out, sounds like a potential security nightmare. That’s doubly true because Microsoft says that by default, Recall’s screenshots take no pains to redact sensitive information, from usernames and passwords to health care information to NSFW site visits. By default, on a PC with 256GB of storage, Recall can store a couple dozen gigabytes of data across three months of PC usage, a huge amount of personal data.

The line between “potential security nightmare” and “actual security nightmare” is at least partly about the implementation, and Microsoft has been saying things that are at least superficially reassuring. Copilot+ PCs are required to have a fast neural processing unit (NPU) so that processing can be performed locally rather than sending data to the cloud; local snapshots are protected at rest by Windows’ disk encryption technologies, which are generally on by default if you’ve signed into a Microsoft account; neither Microsoft nor other users on the PC are supposed to be able to access any particular user’s Recall snapshots; and users can choose to exclude specific apps or (in most browsers) individual websites from Recall’s snapshots.


Nvidia emails: Elon Musk diverting Tesla GPUs to his other companies

4 June 2024 at 12:07
A row of server racks. Tesla will have to rely on its Dojo supercomputer for a while longer after CEO Elon Musk diverted 12,000 Nvidia GPUs to X instead. (credit: Tesla)

Elon Musk is yet again being accused of diverting Tesla resources to his other companies. This time, it's high-end H100 GPU clusters from Nvidia. CNBC's Lora Kolodny reports that while Tesla ordered these pricey computers, emails from Nvidia staff show that Musk instead redirected 12,000 GPUs to be delivered to his social media company X.

It's almost unheard of for a profitable automaker to pivot its business into another sector, but that appears to be the plan at Tesla, as Musk continues to say that the electric car company is destined to be an AI and robotics firm instead.

Does Tesla make cars or AI?

That explains why Musk told investors in April that Tesla had spent $1 billion on GPUs in the first three months of this year, almost as much as it spent on R&D, despite being desperate for new models to add to what is now an old and very limited product lineup that is suffering rapidly declining sales in the US and China.


Nvidia jumps ahead of itself and reveals next-gen “Rubin” AI chips in keynote tease

3 June 2024 at 13:13
Nvidia CEO Jensen Huang delivers his keynote speech ahead of Computex 2024 in Taipei on June 2, 2024. (credit: SAM YEH/AFP via Getty Images)

On Sunday, Nvidia CEO Jensen Huang reached beyond Blackwell and revealed the company's next-generation AI-accelerating GPU platform during his keynote at Computex 2024 in Taiwan. Huang also detailed plans for an annual tick-tock-style upgrade cycle of its AI acceleration platforms, mentioning an upcoming Blackwell Ultra chip slated for 2025 and a subsequent platform called "Rubin" set for 2026.

Nvidia's data center GPUs currently power a large majority of cloud-based AI models, such as ChatGPT, in both development (training) and deployment (inference) phases, and investors are keeping a close watch on the company, with expectations to keep that run going.

During the keynote, Huang seemed somewhat hesitant to make the Rubin announcement, perhaps wary of invoking the so-called Osborne effect, whereby a company's premature announcement of the next iteration of a tech product eats into the current iteration's sales. "This is the very first time that this next click has been made," Huang said, holding up his presentation remote just before the Rubin announcement. "And I'm not sure yet whether I'm going to regret this or not."


No physics? No problem. AI weather forecasting is already making huge strides.

3 June 2024 at 07:00
AI weather models are arriving just in time for the 2024 Atlantic hurricane season. (credit: Aurich Lawson | Getty Images)

Much like the invigorating passage of a strong cold front, major changes are afoot in the weather forecasting community. And the end game is nothing short of revolutionary: an entirely new way to forecast weather based on artificial intelligence that can run on a desktop computer.

Today's artificial intelligence systems require one resource more than any other to operate—data. For example, large language models such as ChatGPT voraciously consume data to improve answers to queries. The more data, and the higher its quality, the better the training and the sharper the results.

However, there is a finite limit to quality data, even on the Internet. These large language models have hoovered up so much data that they're being sued widely for copyright infringement. And as they're running out of data, the operators of these AI models are turning to ideas such as synthetic data to keep feeding the beast and produce ever more capable results for users.


For the second time in two years, AMD blows up its laptop CPU numbering system

2 June 2024 at 23:00
AMD's Ryzen AI 300 series is a new chip and a new naming scheme. (credit: AMD)

Less than two years ago, AMD announced that it was overhauling its numbering scheme for laptop processors. Each digit in its four-digit CPU model numbers picked up a new meaning, which, with the help of a detailed reference sheet, promised to inform buyers of exactly what it was they were buying.

One potential issue with this, as we pointed out at the time, was that this allowed AMD to change over the first and most important of those four digits every single year that it decided to re-release a processor, regardless of whether that chip actually included substantive improvements or not. Thus a “Ryzen 7730U” from 2023 would look two generations newer than a Ryzen 5800U from 2021, despite being essentially identical.

AMD is partially correcting this today by abandoning the self-described “decoder ring” naming system and resetting it to something more conventional.


AMD intros Ryzen AI 300 chips with Zen 5, better GPU, and hugely improved NPU

2 June 2024 at 23:00
AMD's Ryzen AI 300 series is its next-gen laptop platform, and the first to support Copilot+ PC features. (credit: AMD)

AMD’s next-generation laptop processors are coming later this year, joining new Ryzen 9000 desktop processors and ushering in yet another revamp to the way AMD does laptop CPU model numbers.

But the big thing the company wants to push is the new chips’ performance in generative AI and machine-learning workloads—it’s putting “Ryzen AI” right in the name and emphasizing the presence of an improved neural processing unit (NPU) that meets and exceeds Microsoft’s performance requirements for Copilot+ PCs. The new Ryzen AI 300-series, codenamed Strix Point, succeeds the Ryzen 8040 chips from earlier this year, which were themselves a relatively mild refresh for the Ryzen 7040 processors less than a year before.

AMD promises performance of up to 50 trillion operations per second (TOPS) with its new third-generation NPU, a significant boost from the 10 to 16 TOPS offered by Ryzen 7000 and 8000 processors with NPUs. This would make it faster than the 45 TOPS offered by the Qualcomm Snapdragon X Elite and X Plus in the first wave of Copilot+ compatible PCs, and also Intel’s projected performance for its next-generation Core Ultra chips, codenamed Lunar Lake. All exceed Microsoft’s Copilot+ requirement of 40 TOPS, which enables some Windows 11 features that aren’t normally available on typical PCs. Copilot+ PCs can do more AI processing locally on device rather than relying on the cloud, potentially improving performance and giving users more privacy.


Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

31 May 2024 at 17:56
A man covered in newspaper. (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."


Google’s AI Overview is flawed by design, and a new company blog post hints at why

31 May 2024 at 15:47
The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it's admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.


Russia and China are using OpenAI tools to spread disinformation

31 May 2024 at 09:47
OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed that operations linked to Russia, China, Iran, and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.


Report: Apple and OpenAI have signed a deal to partner on AI

30 May 2024 at 17:39
OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

Bloomberg previously reported that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within OpenAI.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.


Tech giants form AI group to counter Nvidia with new interconnect standard

30 May 2024 at 16:42
Abstract image of a data center with flowchart. (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
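A few lines of PyTorch show both halves of that claim: the matrix multiplications that run on each GPU, and the device-to-device copy that must cross an interconnect such as NVLink (or PCIe) whenever accelerators share results. A minimal sketch, assuming a machine with two CUDA GPUs; the device indices are illustrative:

```python
# Minimal sketch of why interconnect bandwidth matters: matmuls run on each
# GPU's compute units, but sharing results forces a device-to-device copy
# over whatever link (NVLink, PCIe, ...) connects them. Assumes two CUDA GPUs.
import torch

a = torch.randn(4096, 4096, device="cuda:0")
b = torch.randn(4096, 4096, device="cuda:0")
c = torch.matmul(a, b)          # runs on GPU 0

# Moving the result to GPU 1 (e.g., to continue a model sharded across
# devices) traverses the GPU-to-GPU interconnect.
c_on_gpu1 = c.to("cuda:1")

d = torch.randn(4096, 4096, device="cuda:1")
e = torch.matmul(c_on_gpu1, d)  # the next layer's work happens on GPU 1
print(e.shape)
```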

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.

Read 5 remaining paragraphs | Comments

OpenAI board first learned about ChatGPT from Twitter, according to former member

29 May 2024 at 11:54
Helen Toner, former OpenAI board member, speaks onstage during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023.

Enlarge / Helen Toner, former OpenAI board member, speaks during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023. (credit: Getty Images)

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its surprise, massive popularity set OpenAI on a new trajectory, shifting the company's focus from AI research lab to consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.

Read 8 remaining paragraphs | Comments

Nvidia denies pirate e-book sites are “shadow libraries” to shut down lawsuit

28 May 2024 at 15:09
Nvidia denies pirate e-book sites are “shadow libraries” to shut down lawsuit

Enlarge (credit: Westend61)

Some of the most infamous so-called shadow libraries have increasingly faced legal pressure to either stop pirating books or risk being shut down or driven to the dark web. Among the biggest targets are Z-Library, which the US Department of Justice has charged with criminal copyright infringement, and Library Genesis (Libgen), which was sued by textbook publishers last fall for allegedly distributing digital copies of copyrighted works "on a massive scale in willful violation" of copyright laws.

But now these shadow libraries and others accused of spurning copyrights have seemingly found an unlikely defender in Nvidia, the AI chipmaker among those profiting most from the recent AI boom.

Nvidia seemed to defend the shadow libraries as a valid source of information online when responding to a lawsuit from book authors over the list of data repositories that were scraped to create the Books3 dataset used to train Nvidia's AI platform NeMo.

Read 12 remaining paragraphs | Comments

OpenAI training its next major AI model, forms new safety committee

28 May 2024 at 12:05
A man rolling a boulder up a hill.

Enlarge (credit: Getty Images)

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also follows two weeks of public setbacks for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

Read 5 remaining paragraphs | Comments

Google Search’s “udm=14” trick lets you kill AI search for good

24 May 2024 at 13:54
The now-normal "AI" results versus the old-school "Web" results.

Enlarge / The now-normal "AI" results versus the old-school "Web" results. (credit: Ron Amadeo / Google)

If you're tired of Google's AI Overview extracting all value from the web while also telling people to eat glue or run with scissors, you can turn it off—sort of. Google has been telling people its AI box at the top of search results is the future, and you can't turn it off, but that ignores how Google search works: A lot of options are powered by URL parameters. That means you can turn off AI search with this one simple trick! (Sorry.)

Our method for killing AI search is defaulting to the new "web" search filter, which Google recently launched as a way to search the web without Google's alpha-quality AI junk. It's actually pretty nice, showing only the traditional 10 blue links, giving you a clean (well, other than the ads), uncluttered results page that looks like it's from 2011. Sadly, Google's UI doesn't have a way to make "web" search the default, and switching to it means digging through the "more" options drop-down after you do a search, so it's a few clicks deep.
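
In the meantime, one workaround is to build the "Web"-filtered URL yourself using the udm=14 parameter from the headline. Below is a minimal sketch; the only assumption beyond what's described here is that Google's standard q parameter carries the search query:

    from urllib.parse import urlencode

    def web_search_url(query: str) -> str:
        # udm=14 selects the plain "Web" results filter,
        # skipping the AI Overview box entirely.
        return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

    print(web_search_url("best pizza recipes"))
    # https://www.google.com/search?q=best+pizza+recipes&udm=14

Most desktop browsers also let you register a custom search engine with a %s placeholder for the query, so a URL of that shape can quietly make "Web" your de facto default.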

Check out the URL after you do a search, and you'll see a mile-long URL full of esoteric tracking and mode information. We'll put each search result URL parameter on a new line so the URL is somewhat readable:

Read 6 remaining paragraphs | Comments

OpenAI backpedals on scandalous tactic to silence former employees

24 May 2024 at 11:32
OpenAI CEO Sam Altman.

Enlarge / OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Former and current OpenAI employees received a memo this week that the AI company hopes will end the most embarrassing scandal Sam Altman has ever faced as OpenAI's CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement agreement that departing employees, since at least 2019, had been pressured to sign within a week of termination or risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism of OpenAI's work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

Read 22 remaining paragraphs | Comments

Google’s “AI Overview” can give false, misleading, and dangerous answers

24 May 2024 at 07:00
This is fine.

Enlarge / This is fine. (credit: Getty Images)

If you use Google regularly, you may have noticed the company's new AI Overviews providing summarized answers to some of your questions in recent days. If you use social media regularly, you may have come across many examples of those AI Overviews being hilariously or even dangerously wrong.

Factual errors can pop up in existing LLM chatbots as well, of course. But the potential damage that can be caused by AI inaccuracy gets multiplied when those errors appear atop the ultra-valuable web real estate of the Google search results page.

"The examples we've seen are generally very uncommon queries and aren’t representative of most people’s experiences," a Google spokesperson told Ars. "The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web."

Read 18 remaining paragraphs | Comments

Bing outage shows just how little competition Google search really has

23 May 2024 at 16:01
Google logo on a phone in front of a Bing logo in the background

Enlarge (credit: Getty Images)

Bing, Microsoft's search engine platform, went down very early this morning. That meant searches in Microsoft's Edge browser failed for users who hadn't changed its default search provider. It also meant that services relying on Bing's search API—Microsoft's own Copilot, ChatGPT search, Yahoo, Ecosia, and DuckDuckGo—failed as well.

Services were largely restored by the start of Eastern working hours, but the timing feels apt, concerning, or some combination of the two. Google, the consistently dominant search platform, just last week announced and debuted AI Overviews as a default addition to all searches. If you don't want an AI response but still want to use Google, you can hunt down the new "Web" option in a menu, or you can, per Ernie Smith, tack "&udm=14" onto your search or use Smith's own "Konami code" shortcut page.

If AI's hallucinations, power draw, or pizza-recipe suggestions concern you—along with perhaps broader Google issues involving privacy, tracking, news, SEO, or monopoly power—most of your other major options were brought down by a single API outage this morning. Moving past that kind of single point of vulnerability will take some work, both by the industry and by you, the person wondering if there's a real alternative.
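
That single point of vulnerability is easy to picture in code. Here's a minimal, hypothetical sketch—the provider functions are invented for illustration and don't correspond to any real service's API—of the difference between a metasearch frontend that dies with its upstream and one that fails over:

    # Hypothetical sketch: a metasearch frontend with a fallback provider.
    # Neither function models a real API; they stand in for a primary
    # upstream (e.g., Bing's search API) and an independent fallback index.

    def search_primary(query: str) -> list[str]:
        raise ConnectionError("upstream search API outage")  # simulate today

    def search_fallback(query: str) -> list[str]:
        return [f"fallback result for {query!r}"]

    def search(query: str) -> list[str]:
        try:
            return search_primary(query)
        except ConnectionError:
            # Without this branch, the frontend goes down with its upstream,
            # which is roughly what happened across Bing-backed services.
            return search_fallback(query)

    print(search("why is bing down"))
    # ["fallback result for 'why is bing down'"]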

Read 11 remaining paragraphs | Comments
