
Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

A man covered in newspaper. (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Media Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."

Read 9 remaining paragraphs | Comments

Google’s AI Overview is flawed by design, and a new company blog post hints at why

The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by publishing a follow-up blog post titled "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the company formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it is admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.
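Google hasn't published AI Overview's internals, but the description above amounts to a retrieve-then-summarize pattern. The toy Python sketch below illustrates only that pattern: the two-page corpus, term-overlap "ranker," and string-joining "summarizer" are stand-ins for Google's web ranking systems and AI model. It also hints at the failure mode: if the top-ranked snippets are wrong or satirical, the summary inherits the error.

```python
# Toy retrieve-then-summarize sketch of the pattern described above. The
# corpus, ranker, and "summarizer" are stand-ins, not Google's systems.
CORPUS = {
    "https://example.com/cheese": "Melted casein proteins make cheese cling to pizza crust.",
    "https://example.com/dough": "Pizza dough needs yeast, flour, water, salt, and time to rise.",
}

def rank(query: str) -> list[str]:
    # Stand-in for a web ranking system: score pages by term overlap.
    terms = set(query.lower().split())
    return sorted(
        CORPUS,
        key=lambda url: len(terms & set(CORPUS[url].lower().split())),
        reverse=True,
    )

def overview(query: str, top_k: int = 2) -> str:
    # Stand-in for the AI model: a real system would prompt an LLM with
    # the top-ranked snippets. If those snippets are wrong or satirical,
    # the generated answer will be too.
    snippets = [CORPUS[url] for url in rank(query)[:top_k]]
    return f"AI Overview for {query!r}: " + " ".join(snippets)

print(overview("why does cheese stick to pizza"))
```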

Read 11 remaining paragraphs | Comments

Russia and China are using OpenAI tools to spread disinformation

OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as the technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.

Read 14 remaining paragraphs | Comments

OpenAI says Russian and Israeli groups used its tools to spread disinformation

Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences

OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.

Continue reading...

© Photograph: Dado Ruvić/Reuters

Report: Apple and OpenAI have signed a deal to partner on AI

OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have reached a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

Bloomberg previously reported that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.

Read 7 remaining paragraphs | Comments

OpenAI board first learned about ChatGPT from Twitter, according to former member

Helen Toner, former OpenAI board member, speaks during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel, on September 27, 2023. (credit: Getty Images)

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.

Read 8 remaining paragraphs | Comments

OpenAI Forms Another Safety Committee After Dismantling Prior Team – Source: www.darkreading.com


Source: www.darkreading.com – Author: Dark Reading Staff. 1 Min Read. Image: SOPA Images Limited via Alamy Stock Photo. OpenAI is forming a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee is being formed to make recommendations to the full board on safety […]

The entry OpenAI Forms Another Safety Committee After Dismantling Prior Team – Source: www.darkreading.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Nvidia denies pirate e-book sites are “shadow libraries” to shut down lawsuit

(credit: Westend61)

Some of the most infamous so-called shadow libraries have increasingly faced legal pressure to either stop pirating books or risk being shut down or driven to the dark web. Among the biggest targets are Z-Library, which the US Department of Justice has charged with criminal copyright infringement, and Library Genesis (Libgen), which was sued by textbook publishers last fall for allegedly distributing digital copies of copyrighted works "on a massive scale in willful violation" of copyright laws.

But now these shadow libraries and others accused of spurning copyrights have seemingly found an unlikely defender in Nvidia, the AI chipmaker among those profiting most from the recent AI boom.

Nvidia seemed to defend the shadow libraries as a valid source of information online when responding to a lawsuit from book authors over the list of data repositories that were scraped to create the Books3 dataset used to train Nvidia's AI platform NeMo.

Read 12 remaining paragraphs | Comments

Argentinian president to meet Silicon Valley CEOs in bid to court tech titans

Javier Milei to hold private talks with Sundar Pichai and Sam Altman as Argentina faces worst economic crisis in decades

Javier Milei, Argentina’s president, is set to meet with the leaders of some of the world’s largest tech companies in Silicon Valley this week. The far-right libertarian leader will hold private talks with Sundar Pichai of Google, Sam Altman of OpenAI, Mark Zuckerberg of Meta and Tim Cook of Apple.

Milei also met last month with Elon Musk, who has become one of the South American president’s most prominent cheerleaders and repeatedly shared his pro-deregulation, anti-social justice message on Twitter. Peter Thiel, the tech billionaire, has also twice visited Milei, flying down to Buenos Aires to speak with him in February and May of this year.

Continue reading...

© Photograph: Leandro Bustamante Gomez/Reuters

OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI Announces Safety and Security Committee

OpenAI announced a new safety and security committee as it begins training a new AI model intended to replace the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, highlighting its role in advising the board on crucial safety and security decisions related to OpenAI’s projects and operations. The creation of the committee comes amid ongoing debates about AI safety at OpenAI. The company faced scrutiny after Jan Leike, a researcher, resigned, criticizing OpenAI for prioritizing product development over safety. Following this, co-founder and chief scientist Ilya Sutskever also resigned, leading to the disbandment of the "superalignment" team that he and Leike co-led, which was focused on addressing AI risks. Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety. The company expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days." 
The committee's initial task is to evaluate and further develop OpenAI’s existing processes and safeguards, and it is expected to make recommendations to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations.

The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." Frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversations based on extensive datasets.

The company also recently launched its newest flagship model, GPT-4o ("o" stands for "omni"), a multilingual, multimodal generative pre-trained transformer. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free to use, though with a usage limit that is five times higher for ChatGPT Plus subscribers. GPT-4o has a context window supporting up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.
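For a concrete sense of what working with GPT-4o looks like, here is a minimal sketch using OpenAI's official Python SDK. It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set; the prompt and document are placeholders. The large context window means a long document can simply be passed inside the message payload.

```python
# Minimal GPT-4o chat call via OpenAI's Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

long_document = "..."  # placeholder; the window fits up to ~128,000 tokens

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a careful analyst."},
        {"role": "user", "content": f"Summarize the key points:\n{long_document}"},
    ],
)
print(response.choices[0].message.content)
```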

OpenAI training its next major AI model, forms new safety committee

A man rolling a boulder up a hill. (credit: Getty Images)

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to two weeks of public setbacks for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

Read 5 remaining paragraphs | Comments

Elon Musk’s xAI raises $6bn in bid to take on OpenAI

Funding round values artificial intelligence startup at $18bn before investment, says multibillionaire

Elon Musk’s artificial intelligence company xAI has closed a $6bn (£4.7bn) investment round that will make it among the best-funded challengers to OpenAI.

The startup is only a year old, but it has rapidly built its own large language model (LLM), the technology underpinning many of the recent advances in generative artificial intelligence capable of creating human-like text, pictures, video, and voices.

Continue reading...

© Photograph: Anadolu Agency/Anadolu/Getty Images

If Scarlett Johansson can’t bring the AI firms to heel, what hope for the rest of us? | John Naughton

OpenAI’s unsubtle approximation of the actor’s voice for its new GPT-4o software was a stark illustration of the firm’s high-handed attitude

On Monday 13 May, OpenAI livestreamed an event to launch a fancy new product – a large language model (LLM) dubbed GPT-4o – that the company’s chief technology officer, Mira Murati, claimed to be more user-friendly and faster than boring ol’ ChatGPT. It was also more versatile, and multimodal, which is tech-speak for being able to interact in voice, text and vision. Key features of the new model, we were told, were that you could interrupt it in mid-sentence, that it had very low latency (delay in responding) and that it was sensitive to the user’s emotions.

Viewers were then treated to the customary toe-curling spectacle of “Mark and Barret”, a brace of tech bros straight out of central casting, interacting with the machine. First off, Mark confessed to being nervous, so the machine helped him to do some breathing exercises to calm his nerves. Then Barret wrote a simple equation on a piece of paper and the machine showed him how to find the value of X, after which he showed it a piece of computer code and the machine was able to deal with that too.

Continue reading...

© Photograph: Valéry Hache/AFP/Getty Images

Did OpenAI Illegally Mimic Scarlett Johansson’s Voice? – Source: www.govinfosecurity.com

Source: www.govinfosecurity.com – Author: Mathew J. Schwartz (euroinfosec) • May 21, 2024. Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development. Actor said she firmly declined an offer from the AI firm to serve as the voice of GPT-4o. Image: Gage Skidmore, via Flickr/CC. Imagine these optics: A man asks a […]

The entry Did OpenAI Illegally Mimic Scarlett Johansson’s Voice? – Source: www.govinfosecurity.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

OpenAI backpedals on scandalous tactic to silence former employees

OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Former and current OpenAI employees received a memo this week that the AI company hopes will end the most embarrassing scandal Sam Altman has ever faced as OpenAI's CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that, since at least 2019, departing employees were pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism of OpenAI's work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

Read 22 remaining paragraphs | Comments

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

Scarlett Johansson attends the Golden Heart Awards in 2023. (credit: Sean Zanni / Contributor | Patrick McMullan)

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson's voice when seeking an actor for ChatGPT's "Sky" voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman's defense against Johansson's claims that Sky was made to sound "eerily similar" to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.

Read 40 remaining paragraphs | Comments

OpenAI and Wall Street Journal owner News Corp sign content deal

Deal lets ChatGPT maker use all articles from Wall Street Journal, New York Post, Times and Sunday Times for AI model development

ChatGPT developer OpenAI has signed a deal to bring news content from the Wall Street Journal, the New York Post, the Times and the Sunday Times to the artificial intelligence platform, the companies said on Wednesday. Neither party disclosed a dollar figure for the deal.

The deal will give OpenAI access to current and archived content from all of News Corp’s publications. The deal comes weeks after the AI heavyweight signed a deal with the Financial Times to license its content for the development of AI models. Earlier this year, OpenAI inked a similar contract with Axel Springer, the parent company of Business Insider and Politico.

Continue reading...

© Photograph: Michael Dwyer/AP

TechScape: The people charged with making sure AI doesn’t destroy humanity have left the building

If OpenAI can’t keep its own team together, what hope is there for the rest of the industry? Plus, AI-generated ‘slop’ is taking over the internet


Everything happens so much. I’m in Seoul for the International AI summit, the half-year follow-up to last year’s Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up – though, in keeping with the reduced fuss this time round, that was merely a “virtual” leaders’ meeting.

When the date was set for this summit – alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act – it was clear that there would be a lot to cover. The hot AI summer is upon us:

The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls … for a six-month pause in development of powerful systems.

There has been no pause. The Bletchley declaration, signed by UK, US, EU, China and others, hailed the “enormous global opportunities” from AI but also warned of its potential for causing “catastrophic” harm. It also secured a commitment from big tech firms including OpenAI, Google and Mark Zuckerberg’s Meta to cooperate with governments on testing their models before they are released.

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Jan Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI”, has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

“Slop” is what you get when you shove artificial intelligence-generated material up on the web for anyone to view.

Unlike a chatbot, the slop isn’t interactive, and is rarely intended to actually answer readers’ questions or serve their needs.

Continue reading...

© Photograph: openai.com/sora

New Windows AI feature records everything you’ve done on your PC

A screenshot of Microsoft's new "Recall" feature in action. (credit: Microsoft)

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

"Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds," Microsoft says on its website. "The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots."

By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they are searching for. It also allows users to search through teleconference meetings they've participated in and videos watched using an AI-powered feature that transcribes and translates speech.
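Microsoft hasn't published Recall's internals, but as a rough mental model only, here is a toy Python sketch of the capture-and-timeline idea. The screenshot call (Pillow's ImageGrab) and the plain-folder storage are illustrative assumptions; the real feature encrypts snapshots and indexes them for AI-powered search rather than listing files by timestamp.

```python
# Toy sketch of a Recall-style capture loop: screenshot every few seconds,
# named by timestamp so a "timeline" is just a sorted directory listing.
# Illustrative only; Recall encrypts snapshots and builds a searchable index.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab  # Pillow; grabs the screen on Windows/macOS

SNAPSHOT_DIR = Path("snapshots")
SNAPSHOT_DIR.mkdir(exist_ok=True)

def capture_loop(interval_seconds: float = 5.0) -> None:
    """Save a screenshot every few seconds, Recall-style."""
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        ImageGrab.grab().save(SNAPSHOT_DIR / f"{stamp}.png")
        time.sleep(interval_seconds)

def timeline() -> list[Path]:
    """Chronological snapshot list: a crude analog of Recall's timeline bar."""
    return sorted(SNAPSHOT_DIR.glob("*.png"))
```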

Read 6 remaining paragraphs | Comments

OpenAI on the defensive after multiple PR setbacks in one week

The OpenAI logo under a raincloud. (credit: Benj Edwards | Getty Images)

Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility—and is more prominently receiving pushback to its AI approach beyond what it has seen from tech pundits and government regulators.

OpenAI's rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called "Sky" that some have said sounds like Johansson, it was GPT-4o's seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson's voice.

Read 15 remaining paragraphs | Comments

AI-detic Memory

Microsoft held a live event today showcasing its vision of the future of the home PC (or "Copilot+ PC"), boasting longer battery life, better-standardized ARM processors, and (predictably) a whole host of new AI features built on dedicated hardware, from real-time translation to in-system assistant prompts to custom-guided image creation. Perhaps most interesting is the new "Recall" feature that records all on-screen activity securely on-device, allowing natural-language recall of all articles read, text written, and videos seen. It's just the first foray into a new era of AI PCs -- and Apple is expected to join the push with a rumored OpenAI partnership debuting at WWDC next month. In a tech world that has lately been defined by the smartphone, can AI make the PC cool again?

Scarlett Johansson says Altman insinuated that AI soundalike was intentional

Scarlett Johansson and Joaquin Phoenix attend the Her premiere during the 8th Rome Film Festival at Auditorium Parco Della Musica on November 10, 2013, in Rome, Italy. (credit: Franco Origlia / Contributor | Getty Images Entertainment)

OpenAI has paused a voice mode option for ChatGPT-4o, Sky, after backlash accusing the AI company of intentionally ripping off Scarlett Johansson's critically acclaimed voice-acting performance in the 2013 sci-fi film Her.

In a blog defending its casting decision for Sky, OpenAI went into great detail explaining its process for choosing the individual voice options for its chatbot. But ultimately, the company seemed pressed to admit that Sky's voice was just too similar to Johansson's to keep using it, at least for now.

"We believe that AI voices should not deliberately mimic a celebrity's distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice," OpenAI's blog said.

Read 24 remaining paragraphs | Comments

ChatGPT suspends Scarlett Johansson-like voice as actor speaks out against OpenAI

OpenAI says ‘Sky’ is not an imitation of actor’s voice after users compare it to AI companion character in film Her

Scarlett Johansson has spoken out against OpenAI after the company used a voice eerily resembling her own in its new ChatGPT product.

The actor said in a statement she was approached by OpenAI nine months ago to voice its AI system but declined for “personal reasons”. Johansson was “shocked” and “angered” when she heard the voice option, which “sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” she said.

Continue reading...

© Photograph: Sarah Meyssonnier/Reuters

What happened to OpenAI’s long-term AI risk team?

A glowing OpenAI logo on a blue background. (credit: Benj Edwards)

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other co-lead. The group’s work will be absorbed into OpenAI’s other research efforts.

Read 14 remaining paragraphs | Comments

OpenAI putting ‘shiny products’ above safety, says departing researcher

Jan Leike, a key safety researcher at firm behind ChatGPT, quit days after launch of its latest AI model, GPT-4o

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.

Continue reading...

© Photograph: Michael Dwyer/AP

OpenAI will use Reddit posts to train ChatGPT under new deal

A woman holding a cell phone in front of the Reddit logo displayed on a computer screen, on April 29, 2024, in Edmonton, Canada. (credit: Getty)

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.

Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.
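The partnership's technical terms aren't public, but for a sense of what programmatic access to Reddit's Data API looks like from the outside, here is a minimal sketch using the third-party PRAW client. The credentials and subreddit are placeholders, and OpenAI's real-time feed presumably runs through a dedicated partnership channel rather than this public path.

```python
# Minimal sketch of reading Reddit posts via the public Data API using PRAW
# (pip install praw). Credentials are placeholders; register an app at
# https://www.reddit.com/prefs/apps to obtain real ones.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="data-api-demo by u/your_username",
)

# Fetch a handful of current posts -- the kind of "recent topics" content
# the partnership is meant to surface.
for submission in reddit.subreddit("technology").hot(limit=5):
    print(f"{submission.score:>5}  {submission.title}")
```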

Read 8 remaining paragraphs | Comments

Google unveils Veo, a high-definition AI video generator that may rival Sora

Still images taken from videos generated by Google Veo. (credit: Google / Benj Edwards)

On Tuesday at Google I/O 2024, Google announced Veo, a new AI video-synthesis model that can create HD videos from text, image, or video prompts, similar to OpenAI's Sora. It can generate 1080p videos lasting over a minute and edit videos from written instructions, but it has not yet been released for broad use.

Veo reportedly includes the ability to edit existing videos using text commands, maintain visual consistency across frames, and generate video sequences lasting up to and beyond 60 seconds from a single prompt or a series of prompts that form a narrative. The company says it can generate detailed scenes and apply cinematic effects such as time-lapses, aerial shots, and various visual styles.

Since the launch of DALL-E 2 in April 2022, we've seen a parade of new image synthesis and video synthesis models that aim to allow anyone who can type a written description to create a detailed image or video. While neither technology has been fully refined, both AI image and video generators have been steadily growing more capable.

Read 9 remaining paragraphs | Comments

Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster

An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: new OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman, and CTO Mira Murati. (credit: Ilya Sutskever / X)

On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.

"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."

Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shocking firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.

Read 6 remaining paragraphs | Comments

Google strikes back at OpenAI with “Project Astra” AI agent prototype

A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.

Read 14 remaining paragraphs | Comments

Cybersecurity Concerns Surround ChatGPT 4o’s Launch; Open AI Assures Beefed up Safety Measure


The field of artificial intelligence is rapidly evolving, and OpenAI's ChatGPT is a leader in this revolution. This groundbreaking large language model (LLM) redefined expectations for AI. Just 18 months after its initial launch, OpenAI has released a major update: GPT-4o. This update widens the gap between OpenAI and its competitors, especially the likes of Google. OpenAI unveiled GPT-4o, with the "o" signifying "omni," during a live stream earlier this week. The latest iteration boasts significant advancements in speed, multimodality, and accessibility. Here's a breakdown of the key features and capabilities of OpenAI's GPT-4o.

Features of GPT-4o

Enhanced speed and multimodality: GPT-4o operates at a faster pace than its predecessors and excels at understanding and processing diverse information formats – written text, audio, and visuals. This versatility allows GPT-4o to engage in more comprehensive and natural interactions.

Free-tier expansion: OpenAI is making AI more accessible by offering some GPT-4o features to free-tier users. This includes the ability to access web-based information during conversations, discuss images, upload files, and utilize enterprise-grade data analysis tools (with limitations). Paid users will continue to enjoy a wider range of functionalities.

Improved user experience: The blog post accompanying the announcement showcases some impressive capabilities. GPT-4o can now generate convincingly realistic laughter, potentially pushing the boundaries of the uncanny valley, and it excels at interpreting visual input, allowing it to recognize sports on television and explain the rules – a valuable feature for many users.

Despite the new features and capabilities, however, the potential for misuse of ChatGPT is still on the rise. The new version, though deemed safer than its predecessors, remains vulnerable to exploitation by hackers and ransomware groups for nefarious purposes. Addressing these concerns, OpenAI shared a detailed post about the new and advanced security measures implemented in GPT-4o.

Security Concerns Surround ChatGPT 4o

The implications of ChatGPT for cybersecurity have been a hot topic of discussion among security leaders and experts, as many worry that the AI software can easily be misused. Since ChatGPT's debut in November 2022, several organizations, including Amazon, JPMorgan Chase & Co., Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo and Verizon, have restricted access to or blocked use of the program, citing security concerns. In April 2023, Italy became the first country to block ChatGPT nationwide, with regulators accusing OpenAI of unlawfully collecting user data. These concerns are not unfounded.

OpenAI Assures Safety

OpenAI has sought to reassure users that GPT-4o has "new safety systems to provide guardrails on voice outputs," plus extensive post-training and filtering of the training data to prevent ChatGPT from saying anything inappropriate or unsafe. GPT-4o was built in accordance with OpenAI's internal Preparedness Framework and voluntary commitments, and more than 70 external security researchers red-teamed the model before its release. In an article published on its official website, OpenAI states that its evaluations of cybersecurity do not score above "medium risk."

“GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model’s behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories,” the post said.

“This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities,” it added.

OpenAI also employed the services of over 70 experts to identify risks and amplify safety. “GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they’re discovered,” it said.

Disarmingly lifelike: ChatGPT-4o will laugh at your jokes and your dumb hat

Oh you silly, silly human. Why are you so silly, you silly human? (credit: Aurich Lawson | Getty Images)

At this point, anyone with even a passing interest in AI is very familiar with the process of typing out messages to a chatbot and getting back long streams of text in response. Today's announcement of ChatGPT-4o—which lets users converse with a chatbot using real-time audio and video—might seem like a mere lateral evolution of that basic interaction model.

After looking through over a dozen video demos OpenAI posted alongside today's announcement, though, I think we're on the verge of something more like a sea change in how we think of and work with large language models. While we don't yet have access to ChatGPT-4o's audio-visual features ourselves, the important non-verbal cues on display here—both from GPT-4o and from the users—make the chatbot instantly feel much more human. And I'm not sure the average user is fully ready for how they might feel about that.

It thinks it’s people

Take this video, where a newly expectant father looks to ChatGPT-4o for an opinion on a dad joke ("What do you call a giant pile of kittens? A meow-ntain!"). The old ChatGPT4 could easily type out the same responses of "Congrats on the upcoming addition to your family!" and "That's perfectly hilarious. Definitely a top-tier dad joke." But there's much more impact to hearing GPT-4o give that same information in the video, complete with the gentle laughter and rising and falling vocal intonations of a lifelong friend.

Read 14 remaining paragraphs | Comments

Before launching, GPT-4o broke records on chatbot leaderboard under a secret name

Man in morphsuit and girl lying on couch at home using a laptop. (credit: Getty Images)

On Monday, OpenAI employee William Fedus confirmed on X that a mysterious chart-topping AI chatbot known as "gpt2-chatbot" that had been undergoing testing on LMSYS's Chatbot Arena and frustrating experts was, in fact, OpenAI's newly announced GPT-4o AI model. He also revealed that GPT-4o had topped the Chatbot Arena leaderboard, achieving the highest documented score ever.

"GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot," Fedus tweeted.

Chatbot Arena is a website where visitors converse with two random AI language models side by side without knowing which model is which, then choose which model gives the best response. It's a perfect example of vibe-based AI benchmarking, as AI researcher Simon Willison calls it.
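Leaderboards like Chatbot Arena's turn those blind pairwise votes into ratings. Here is a minimal sketch of an Elo-style update, the approach LMSYS used at the time; the K-factor and starting ratings are standard illustrative constants, not necessarily LMSYS's exact parameters, and the site has since refined its statistics.

```python
# Minimal Elo-style update: each blind A/B vote nudges the two models'
# ratings toward the observed outcome. K=32 and the 400-point scale are
# standard Elo constants, used here purely for illustration.
def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# An unheralded newcomer (1000) beating an incumbent (1200) moves both
# ratings sharply -- which is how a secret model can race up the chart.
print(elo_update(1000.0, 1200.0, a_won=True))  # -> (~1024.3, ~1175.7)
```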

Read 8 remaining paragraphs | Comments

"Well, you seem like a person, but you're just a voice in a computer"

OpenAI unveils GPT-4o, a new flagship "omnimodel" capable of processing text, audio, and video. While it delivers big improvements in speed, cost, and reasoning ability, perhaps the most impressive is its new voice mode -- while the old version was a clunky speech --> text --> speech approach with tons of latency, the new model takes in audio directly and responds in kind, enabling real-time conversations with an eerily realistic voice, one that can recognize multiple speakers and even respond with sarcasm, laughter, and other emotional content of speech. Rumor has it Apple has neared a deal with the company to revamp an aging Siri, while the advance has clear implications for customer service, translation, education, and even virtual companions (or perhaps "lovers", as the allusions to Spike Jonze's Her, the Samantha-esque demo voice, and opening the door to mature content imply). Meanwhile, the offloading of most premium ChatGPT features to the free tier suggests something bigger coming down the pike.

Major ChatGPT-4o update allows audio-video talks with an “emotional” AI chatbot

Abstract multicolored waveform

Enlarge (credit: Getty Images)

On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input. It operates faster than OpenAI's previous best model, GPT-4 Turbo, and will be free for ChatGPT users and available as a service through API, rolling out over the next few weeks, OpenAI says.

OpenAI revealed the new audio conversation and vision comprehension capabilities in a YouTube livestream titled "OpenAI Spring Update," presented by OpenAI CTO Mira Murati and employees Mark Chen and Barret Zoph that included live demos of GPT-4o in action.

OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models. With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio in a way that all inputs and outputs "are processed by the same neural network."

Read 11 remaining paragraphs | Comments
