Today — 1 June 2024

Google Rolls Back A.I. Search Feature After Flubs and Flaws

1 June 2024 at 05:04
Google appears to have turned off its new A.I. Overviews for a number of searches as it works to minimize errors.

© Jeff Chiu/Associated Press

Sundar Pichai, Google's chief executive, introduced A.I. Overviews, an A.I. feature in its search engine, last month.
Yesterday — 31 May 2024

Google’s AI Overview is flawed by design, and a new company blog post hints at why

31 May 2024 at 15:47
The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if it doesn't realize it is admitting it.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.

The New ChatGPT Offers a Lesson in AI Hype

31 May 2024 at 05:07
OpenAI released GPT-4o, its latest chatbot technology, in a partly finished state. It has much to prove.

© Arsenii Vaselenko for The New York Times

ChatGPT-4o trying to solve a geometry problem
Before yesterday

OpenAI says Russian and Israeli groups used its tools to spread disinformation

30 May 2024 at 18:25

Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences

OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.

© Photograph: Dado Ruvić/Reuters

Report: Apple and OpenAI have signed a deal to partner on AI

30 May 2024 at 17:39
OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

Bloomberg had previously reported that the deal was in the works. The news appears in The Information's longer article about Altman and his growing influence within OpenAI.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.

Tech giants form AI group to counter Nvidia with new interconnect standard

30 May 2024 at 16:42
Abstract image of data center with flowchart. (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architecture—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
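
The sketch below is a simplified, hypothetical illustration (not from the article) of why that accelerator-to-accelerator link matters: in ordinary data-parallel training, every step ends with an all-reduce of the gradients across all GPUs, and that collective traffic is exactly what NVLink today, or a UALink-style fabric in the future, would carry. It assumes PyTorch with a torch.distributed process group already initialized (for example, with the NCCL backend).

    import torch
    import torch.distributed as dist

    def data_parallel_step(model, inputs, targets, loss_fn):
        # Each GPU computes gradients on its own shard of the global batch...
        loss = loss_fn(model(inputs), targets)
        loss.backward()

        # ...then all GPUs average their gradients with an all-reduce. The bytes
        # moved here grow with model size, so this step is bounded by the speed
        # of the accelerator interconnect rather than by the GPUs themselves.
        world_size = dist.get_world_size()
        for param in model.parameters():
            if param.grad is not None:
                dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
                param.grad.div_(world_size)
        return loss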

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.

Japanese Man Arrested for GenAI Ransomware as AI Jailbreak Concerns Grow

AI Jailbreak, AI security

A 25-year-old man from Kawasaki, Japan was arrested this week for allegedly using generative AI tools to create ransomware in an AI jailbreaking case that may be the first of its kind in Japan. The arrest of Ryuki Hayashi, widely reported in Japan, is the latest example of an attacker defeating AI guardrails, which has become something of an obsession for hackers and cybersecurity researchers alike. Just this week, researchers from Germany's CISPA Helmholtz Center for Information Security reported on their efforts to jailbreak GPT-4o, the latest multimodal large language model (MLLM) released by OpenAI a little more than two weeks ago. Concerns raised by those researchers and others led OpenAI to establish a safety and security committee this week to try to address AI risks.

AI Jailbreak Tools and Methods Unclear

News reports on Hayashi's arrest have been lacking in details on the tools and methods he used to create the ransomware. The Japan Times reported that Hayashi, a former factory worker, "is not an expert on malware. He allegedly learned online how to ask AI tools questions that would elicit information on how to create malware." Hayashi came under suspicion after police arrested him in March "for allegedly using fake identification to obtain a SIM card registered under someone else's name," the paper reported.

The Japan News, which reported that Hayashi is unemployed, said police found "a homemade virus on a computer" following the March arrest. The News said police suspect he "used his home computer and smartphone to combine information about creating malware programs obtained after giving instructions to several generative AI systems in March last year." Hayashi "allegedly gave instructions to the AI systems while concealing his purpose of creating the virus to obtain design information necessary for encrypting files and demanding ransom," the News reported. "He is said to have searched online for ways to illegally obtain information."

Hayashi reportedly admitted to charges during questioning, and told police, "I wanted to make money through ransomware. I thought I could do anything if I asked AI." There have been no reports of damage from the ransomware he created, the News said.

LLM Jailbreak Research Heats Up

The news comes as research on AI jailbreaking and attack techniques has grown, with a number of recent reports on risks and possible solutions. In a paper posted to arXiv this week, the CISPA researchers said they were able to more than double their attack success rate (ASR) on GPT-4o's voice mode with an attack they dubbed VOICEJAILBREAK, "a novel voice jailbreak attack that humanizes GPT-4o and attempts to persuade it through fictional storytelling (setting, character, and plot)."

Another arXiv paper, posted in February by researchers at the University of California at Berkeley, looked at a range of risks associated with GenAI tools such as Microsoft Copilot and ChatGPT, along with possible solutions, such as development of an "AI firewall" to monitor and change LLM inputs and outputs if necessary. And earlier this month, OT and IoT security company SCADAfence outlined a wide range of AI tools, threat actors and attack techniques. In addition to general use case chatbots like ChatGPT and Google Gemini, the report looked at "dark LLMs" created for malicious purposes, such as WormGPT, FraudGPT, DarkBERT and DarkBART. SCADAfence recommended that OT and SCADA organizations follow best practices such as limiting network exposure for control systems, patching, access control and up-to-date offline backups.

GenAI uses and misuses are also expected to be the topic of a number of presentations at Gartner's Security and Risk Management Summit next week in National Harbor, Maryland, just outside the U.S. capital.

Microsoft's Windows Recall: Cutting-Edge Search Tech or Creepy Overreach?

30 May 2024 at 12:07

SecurityWeek editor-at-large Ryan Naraine examines the broad tension between tech innovation and privacy rights at a time when ChatGPT-like bots and generative-AI apps are starting to dominate the landscape.

OpenAI board first learned about ChatGPT from Twitter, according to former member

29 May 2024 at 11:54
Helen Toner, former OpenAI board member, speaks onstage during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel, on September 27, 2023. (credit: Getty Images)

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.

OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI Announces Safety and Security Committee

OpenAI announced a new safety and security committee as it begins training a new AI model intended to replace the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, highlighting its role in advising the board on crucial safety and security decisions related to OpenAI's projects and operations.

The creation of the committee comes amid ongoing debates about AI safety at OpenAI. The company faced scrutiny after Jan Leike, a researcher, resigned, criticizing OpenAI for prioritizing product development over safety. Following this, co-founder and chief scientist Ilya Sutskever also resigned, leading to the disbandment of the "superalignment" team that he and Leike co-led, which was focused on addressing AI risks. Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety. The company expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D'Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.

"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days."

The committee's initial task is to evaluate and further develop OpenAI's existing processes and safeguards. They are expected to make recommendations to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations. The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." These frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversations based on extensive datasets.

The company also recently launched its newest flagship model GPT-4o ('o' stands for omni), which is a multilingual, multimodal generative pre-trained transformer designed by OpenAI. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free, but with a usage limit that is five times higher for ChatGPT Plus subscribers. GPT-4o has a context window supporting up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

28 May 2024 at 09:57

OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

OpenAI training its next major AI model, forms new safety committee

28 May 2024 at 12:05
A man rolling a boulder up a hill. (credit: Getty Images)

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to two weeks of public setbacks for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

OpenAI Says It Has Begun Training a New Flagship A.I. Model

By: Cade Metz
28 May 2024 at 23:31
The advanced A.I. system would succeed GPT-4, which powers ChatGPT. The company has also created a new safety committee to address A.I.'s risks.

© Jason Redmond/Agence France-Presse — Getty Images

As Sam Altman's OpenAI trains its new model, its new Safety and Security Committee will work to hone policies and processes for safeguarding the technology, the company said.

Bing outage shows just how little competition Google search really has

23 May 2024 at 16:01
Google logo on a phone in front of a Bing logo in the background. (credit: Getty Images)

Bing, Microsoft's search engine platform, went down very early this morning. That meant searches from Microsoft's Edge browsers that still had Bing as the default provider didn't work. It also meant that services relying on Bing's search API—Microsoft's own Copilot, ChatGPT search, Yahoo, Ecosia, and DuckDuckGo—similarly failed.

Services were largely restored by the morning Eastern work hours, but the timing feels apt, concerning, or some combination of the two. Google, the consistently dominant search platform, just last week announced and debuted AI Overviews as a default addition to all searches. If you don't want an AI response but still want to use Google, you can hunt down the new "Web" option in a menu, or you can, per Ernie Smith, tack "&udm=14" onto your search or use Smith's own "Konami code" shortcut page.
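
As a concrete illustration of that trick (my own example, not from the article), the AI-free "Web" view is just an extra query parameter on a normal Google search URL:

    from urllib.parse import urlencode

    def web_only_google_search(query: str) -> str:
        # udm=14 requests the plain "Web" results view, without the AI Overview box.
        return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

    print(web_only_google_search("best pizza dough recipe"))
    # https://www.google.com/search?q=best+pizza+dough+recipe&udm=14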

If dismay about AI's hallucinations, power draw, or pizza recipes concerns you—along with perhaps broader Google issues involving privacy, tracking, news, SEO, or monopoly power—most of your other major options were brought down by a single API outage this morning. Moving past that kind of single point of vulnerability will take some work, both by the industry and by you, the person wondering if there's a real alternative.

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

23 May 2024 at 14:27
Scarlett Johansson attends the Golden Heart Awards in 2023. (credit: Sean Zanni / Contributor | Patrick McMullan)

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson's voice when seeking an actor for ChatGPT's "Sky" voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman's defense against Johansson's claims that Sky was made to sound "eerily similar" to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.

OpenAI and Wall Street Journal owner News Corp sign content deal

22 May 2024 at 17:39

Deal lets ChatGPT maker use all articles from Wall Street Journal, New York Post, Times and Sunday Times for AI model development

ChatGPT developer OpenAI has signed a deal to bring news content from the Wall Street Journal, the New York Post, the Times and the Sunday Times to the artificial intelligence platform, the companies said on Wednesday. Neither party disclosed a dollar figure for the deal.

The deal will give OpenAI access to current and archived content from all of News Corp's publications. The deal comes weeks after the AI heavyweight signed a deal with the Financial Times to license its content for the development of AI models. Earlier this year, OpenAI inked a similar contract with Axel Springer, the parent company of Business Insider and Politico.

© Photograph: Michael Dwyer/AP

Scarlett Johansson Said No, but OpenAI's Virtual Assistant Sounds Just Like Her

21 May 2024 at 14:35
Last week, the company released a chatbot with an option that sounded like the actress, who provided the voice of an A.I. system in the movie "Her."

© Haiyun Jiang for The New York Times

Scarlett Johansson said in a statement on Monday that an OpenAI chatbot's voice sounded "eerily similar to mine" despite her refusals.

Productivity soars in sectors of global economy most exposed to AI, says report

Employers in UK, one of 15 countries studied, willing to pay 14% wage premium for jobs requiring AI skills

The sectors of the global economy most heavily exposed to artificial intelligence (AI) are witnessing a marked productivity increase and command a significant wage premium, according to a report.

Boosting hopes that AI might help lift the global economy out of a 15-year, low-growth trough, a PwC study found productivity growth was almost five times faster in the parts of the economy with the highest AI penetration than in less exposed sectors.

© Photograph: Dado Ruvić/Reuters

New Windows AI feature records everything you've done on your PC

20 May 2024 at 17:43
A screenshot of Microsoft's new "Recall" feature in action. (credit: Microsoft)

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

"Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds," Microsoft says on its website. "The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots."

By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they are searching for. It also allows users to search through teleconference meetings they've participated in and videos watched using an AI-powered feature that transcribes and translates speech.

OpenAI on the defensive after multiple PR setbacks in one week

20 May 2024 at 16:51
The OpenAI logo under a raincloud. (credit: Benj Edwards | Getty Images)

Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility—and is more prominently receiving pushback to its AI approach beyond what it has seen from tech pundits and government regulators.

OpenAI's rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called "Sky" that some have said sounds like Johansson, it was GPT-4o's seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson's voice.

Can AI Make the PC Cool Again? Microsoft Thinks So.

Microsoft, HP, Dell and others unveiled a new kind of laptop tailored to work with artificial intelligence. Analysts expect Apple to do something similar.

© Grant Hindsley for The New York Times

Satya Nadella, Microsoft's chief executive, announcing the new artificial intelligence functionality on Monday.

Scarlett Johansson says Altman insinuated that AI soundalike was intentional

20 May 2024 at 18:45
Scarlett Johansson and Joaquin Phoenix attend the Her premiere during the 8th Rome Film Festival at Auditorium Parco Della Musica on November 10, 2013, in Rome, Italy. (credit: Franco Origlia / Contributor | Getty Images Entertainment)

OpenAI has paused a voice mode option for ChatGPT-4o, Sky, after backlash accusing the AI company of intentionally ripping off Scarlett Johansson's critically acclaimed voice-acting performance in the 2013 sci-fi film Her.

In a blog defending its casting decision for Sky, OpenAI went into great detail explaining its process for choosing the individual voice options for its chatbot. But ultimately, the company seemed pressed to admit that Sky's voice was just too similar to Johansson's to keep using it, at least for now.

"We believe that AI voices should not deliberately mimic a celebrity's distinctive voiceβ€”Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice," OpenAI's blog said.

ChatGPT suspends Scarlett Johansson-like voice as actor speaks out against OpenAI

20 May 2024 at 19:58

OpenAI says 'Sky' is not an imitation of actor's voice after users compare it to AI companion character in film Her

Scarlett Johansson has spoken out against OpenAI after the company used a voice eerily resembling her own in its new ChatGPT product.

The actor said in a statement she was approached by OpenAI nine months ago to voice its AI system but declined for "personal reasons". Johansson was "shocked" and "angered" when she heard the voice option, which "sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said.

© Photograph: Sarah Meyssonnier/Reuters

AI chatbots' safeguards can be easily bypassed, say UK researchers

All five systems tested were found to be 'highly vulnerable' to attempts to elicit harmful responses

Guardrails to prevent artificial intelligence models behind chatbots from issuing illegal, toxic or explicit responses can be bypassed with simple techniques, UK government researchers have found.

The UK's AI Safety Institute (AISI) said systems it had tested were "highly vulnerable" to jailbreaks, a term for text prompts designed to elicit a response that a model is supposedly trained to avoid issuing.

© Photograph: Koshiro K/Alamy

What happened to OpenAI's long-term AI risk team?

By: WIRED
18 May 2024 at 11:54
A glowing OpenAI logo on a blue background. (credit: Benj Edwards)

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other co-lead. The group's work will be absorbed into OpenAI's other research efforts.

OpenAI putting 'shiny products' above safety, says departing researcher

Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising "shiny products" over safety, revealing that he quit after a disagreement over key aims reached "breaking point".

Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.

© Photograph: Michael Dwyer/AP

OpenAI will use Reddit posts to train ChatGPT under new deal

17 May 2024 at 17:18
An image of a woman holding a cell phone in front of the Reddit logo displayed on a computer screen, on April 29, 2024, in Edmonton, Canada. (credit: Getty)

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit's Data API, giving the generative AI firm real-time access to Reddit posts.

Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI's AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.

Inside OpenAI's Library

OpenAI may be changing how the world interacts with language. But inside headquarters, there is a homage to the written word: a library.

© Christie Hemm Klok for The New York Times

Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster

14 May 2024 at 23:05
An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: new OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman, and CTO Mira Murati. (credit: Ilya Sutskever / X)

On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.

"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."

Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shocking firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.

OpenAI's Chief Scientist, Ilya Sutskever, Is Leaving the Company

By: Cade Metz
14 May 2024 at 21:39
In November, Ilya Sutskever joined three other OpenAI board members to force out Sam Altman, the chief executive, before saying he regretted the move.

© Jim Wilson/The New York Times

Ilya Sutskever, who contributed to breakthrough research in artificial intelligence, brought instant credibility to OpenAI.

Google strikes back at OpenAI with "Project Astra" AI agent prototype

14 May 2024 at 15:11
A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.

Cybersecurity Concerns Surround ChatGPT 4o's Launch; OpenAI Assures Beefed-Up Safety Measures

OpenAI GPT-4o security

The field of Artificial Intelligence is rapidly evolving, and OpenAI's ChatGPT is a leader in this revolution. This groundbreaking large language model (LLM) redefined the expectations for AI. Just 18 months after its initial launch, OpenAI has released a major update: GPT-4o. This update widens the gap between OpenAI and its competitors, especially the likes of Google. OpenAI unveiled GPT-4o, with the "o" signifying "omni," during a live stream earlier this week. This latest iteration boasts significant advancements across various aspects. Here's a breakdown of the key features and capabilities of OpenAI's GPT-4o.

Features of GPT-4o

Enhanced Speed and Multimodality: GPT-4o operates at a faster pace than its predecessors and excels at understanding and processing diverse information formats – written text, audio, and visuals. This versatility allows GPT-4o to engage in more comprehensive and natural interactions.

Free Tier Expansion: OpenAI is making AI more accessible by offering some GPT-4o features to free-tier users. This includes the ability to access web-based information during conversations, discuss images, upload files, and even utilize enterprise-grade data analysis tools (with limitations). Paid users will continue to enjoy a wider range of functionalities.

Improved User Experience: The blog post accompanying the announcement showcases some impressive capabilities. GPT-4o can now generate convincingly realistic laughter, potentially pushing the boundaries of the uncanny valley and increasing user adoption. Additionally, it excels at interpreting visual input, allowing it to recognize sports on television and explain the rules – a valuable feature for many users.

However, despite the new features and capabilities, the potential misuse of ChatGPT is still on the rise. The new version, though deemed safer than the previous versions, is still vulnerable to exploitation and can be leveraged by hackers and ransomware groups for nefarious purposes. Talking about the security concerns regarding the new version, OpenAI shared a detailed post about the new and advanced security measures being implemented in GPT-4o.

Security Concerns Surround ChatGPT 4o

The implications of ChatGPT for cybersecurity have been a hot topic of discussion among security leaders and experts as many worry that the AI software can easily be misused. Since its inception in November 2022, several organizations such as Amazon, JPMorgan Chase & Co., Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo and Verizon have restricted access or blocked the use of the program citing security concerns. In April 2023, Italy became the first country in the world to ban ChatGPT after accusing OpenAI of stealing user data. These concerns are not unfounded.

OpenAI Assures Safety

OpenAI reassured people that GPT-4o has "new safety systems to provide guardrails on voice outputs," plus extensive post-training and filtering of the training data to prevent ChatGPT from saying anything inappropriate or unsafe. GPT-4o was built in accordance with OpenAI's internal Preparedness Framework and voluntary commitments. More than 70 external security researchers red teamed GPT-4o before its release. In an article published on its official website, OpenAI states that its evaluations of cybersecurity do not score above "medium risk."

"GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model's behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories," the post said. "This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities," it added.

OpenAI shared that it also employed the services of over 70 experts to identify risks and amplify safety. "GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they're discovered," it said.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Disarmingly lifelike: ChatGPT-4o will laugh at your jokes and your dumb hat

13 May 2024 at 18:25
Oh you silly, silly human. Why are you so silly, you silly human? (credit: Aurich Lawson | Getty Images)

At this point, anyone with even a passing interest in AI is very familiar with the process of typing out messages to a chatbot and getting back long streams of text in response. Today's announcement of ChatGPT-4o—which lets users converse with a chatbot using real-time audio and video—might seem like a mere lateral evolution of that basic interaction model.

After looking through over a dozen video demos OpenAI posted alongside today's announcement, though, I think we're on the verge of something more like a sea change in how we think of and work with large language models. While we don't yet have access to ChatGPT-4o's audio-visual features ourselves, the important non-verbal cues on display here—both from GPT-4o and from the users—make the chatbot instantly feel much more human. And I'm not sure the average user is fully ready for how they might feel about that.

It thinks it's people

Take this video, where a newly expectant father looks to ChatGPT-4o for an opinion on a dad joke ("What do you call a giant pile of kittens? A meow-ntain!"). The old ChatGPT4 could easily type out the same responses of "Congrats on the upcoming addition to your family!" and "That's perfectly hilarious. Definitely a top-tier dad joke." But there's much more impact to hearing GPT-4o give that same information in the video, complete with the gentle laughter and rising and falling vocal intonations of a lifelong friend.

Before launching, GPT-4o broke records on chatbot leaderboard under a secret name

13 May 2024 at 17:33
Man in morphsuit and girl lying on couch at home using laptop. (credit: Getty Images)

On Monday, OpenAI employee William Fedus confirmed on X that a mysterious chart-topping AI chatbot known as "gpt2-chatbot" that had been undergoing testing on LMSYS's Chatbot Arena and frustrating experts was, in fact, OpenAI's newly announced GPT-4o AI model. He also revealed that GPT-4o had topped the Chatbot Arena leaderboard, achieving the highest documented score ever.

"GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot," Fedus tweeted.

Chatbot Arena is a website where visitors converse with two random AI language models side by side without knowing which model is which, then choose which model gives the best response. It's a perfect example of vibe-based AI benchmarking, as AI researcher Simon Willison calls it.

OpenAI Unveils New ChatGPT That Listens, Looks and Talks

By: Cade Metz
14 May 2024 at 01:12
Chatbots, image generators and voice assistants are gradually merging into a single technology with a conversational voice.

© Jason Henry for The New York Times

The new app is part of a much wider effort to combine conversational chatbots like OpenAI's ChatGPT with voice assistants like the Google Assistant and Apple's Siri.

"Well, you seem like a person, but you're just a voice in a computer"

By: Rhaomi
13 May 2024 at 15:14
OpenAI unveils GPT-4o, a new flagship "omnimodel" capable of processing text, audio, and video. While it delivers big improvements in speed, cost, and reasoning ability, perhaps the most impressive is its new voice mode -- while the old version was a clunky speech --> text --> speech approach with tons of latency, the new model takes in audio directly and responds in kind, enabling real-time conversations with an eerily realistic voice, one that can recognize multiple speakers and even respond with sarcasm, laughter, and other emotional content of speech. Rumor has it Apple has neared a deal with the company to revamp an aging Siri, while the advance has clear implications for customer service, translation, education, and even virtual companions (or perhaps "lovers", as the allusions to Spike Jonze's Her, the Samantha-esque demo voice, and opening the door to mature content imply). Meanwhile, the offloading of most premium ChatGPT features to the free tier suggests something bigger coming down the pike.

Major ChatGPT-4o update allows audio-video talks with an "emotional" AI chatbot

13 May 2024 at 13:58
Abstract multicolored waveform. (credit: Getty Images)

On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input. It operates faster than OpenAI's previous best model, GPT-4 Turbo, and will be free for ChatGPT users and available as a service through API, rolling out over the next few weeks, OpenAI says.

OpenAI revealed the new audio conversation and vision comprehension capabilities in a YouTube livestream titled "OpenAI Spring Update," presented by OpenAI CTO Mira Murati and employees Mark Chen and Barret Zoph that included live demos of GPT-4o in action.

OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models. With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio in a way that all inputs and outputs "are processed by the same neural network."
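For readers who want to try the model through that API, here is a minimal sketch assuming the official openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; note that this text-only call does not exercise the new audio or vision capabilities, and availability may vary by account.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "In one sentence, what does the 'o' in GPT-4o stand for?"},
        ],
    )
    print(response.choices[0].message.content)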

Friends From the Old Neighborhood Turn Rivals in Big Tech's A.I. Race

29 April 2024 at 00:01
Demis Hassabis and Mustafa Suleyman, who both grew up in London, feared a corporate rush to build artificial intelligence. Now they're driving that competition at Google and Microsoft.

© Left, Enric Fontcuberta/EPA, via Shutterstock; right, Clara Mokri for The New York Times

Demis Hassabis, left, the chief executive of Google DeepMind, and Mustafa Suleyman, the chief executive of Microsoft AI, were longtime friends from London.

Generative A.I. Arrives in the Gene Editing World of CRISPR

By: Cade Metz
22 April 2024 at 16:48
Much as ChatGPT generates poetry, a new A.I. system devises blueprints for microscopic mechanisms that can edit your DNA.

© Profluent Bio

Structure of the first AI-generated and open-sourced gene editor, OpenCRISPR-1.

Microsoft Makes a New Push Into Smaller A.I. Systems

23 April 2024 at 01:18
The company that has invested billions in generative A.I. pioneers like OpenAI says giant systems aren't necessarily what everyone needs.

© Michael M. Santiago/Getty Images

The smallest Microsoft Phi-3 model can fit on a smartphone, and it runs on the kinds of chips that power regular computers.
