Report: Apple and OpenAI have signed a deal to partner on AI

OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have reached a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

Bloomberg previously reported that the deal was in the works. The news appeared in a longer piece from The Information about Altman and his growing influence at OpenAI.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.

OpenAI board first learned about ChatGPT from Twitter, according to former member

Helen Toner, former OpenAI board member, speaks onstage during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel, on September 27, 2023. (credit: Getty Images)

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its unexpected, massive popularity set OpenAI on a new trajectory, shifting its focus from AI research lab to consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.

OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI announced a new safety and security committee as it begins training a new AI model intended to replace the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, highlighting its role in advising the board on crucial safety and security decisions related to OpenAI’s projects and operations.

The committee's creation comes amid ongoing debate about AI safety at OpenAI. The company faced scrutiny after researcher Jan Leike resigned, criticizing OpenAI for prioritizing product development over safety. Co-founder and chief scientist Ilya Sutskever resigned shortly afterward, leading to the disbandment of the "superalignment" team that he and Leike co-led, which was focused on addressing AI risks.

Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety, and the company expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.

The committee's first task is to evaluate and further develop OpenAI’s existing processes and safeguards and to make recommendations to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts, in a manner consistent with safety and security considerations.

The committee's establishment is a significant step by OpenAI toward addressing concerns about AI safety while maintaining its leadership in AI innovation. By bringing a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." Frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversation from training on extensive datasets.

The company also recently launched its newest flagship model, GPT-4o (the 'o' stands for "omni"), a multilingual, multimodal generative pre-trained transformer. OpenAI CTO Mira Murati announced it during a live-streamed demo on May 13, and it was released the same day. GPT-4o is free to use, though ChatGPT Plus subscribers get a usage limit five times higher. GPT-4o has a context window of up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.
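
For readers who want to try the model programmatically, here is a minimal sketch of a chat request using OpenAI's official Python SDK (v1.x). The model name "gpt-4o" matches the release described above; the prompt text and the max_tokens cap are illustrative assumptions, not details from the article, and an OPENAI_API_KEY must be set in the environment.

# Minimal sketch: one chat completion against GPT-4o via the openai package (v1.x).
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",   # the flagship multimodal model described above
    max_tokens=200,   # cap the reply length; the context window itself is 128,000 tokens
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what a context window is in two sentences."},
    ],
)

print(response.choices[0].message.content)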

OpenAI backpedals on scandalous tactic to silence former employees

OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Former and current OpenAI employees received a memo this week that the AI company hopes will end the most embarrassing scandal Sam Altman has ever faced as OpenAI's CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that, since at least 2019, departing employees were pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions of dollars for expressing even mild criticism of OpenAI's work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

TechScape: The people charged with making sure AI doesn’t destroy humanity have left the building

If OpenAI can’t keep its own team together, what hope is there for the rest of the industry? Plus, AI-generated ‘slop’ is taking over the internet

Everything happens so much. I’m in Seoul for the International AI summit, the half-year follow-up to last year’s Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up – though, in keeping with the reduced fuss this time round, that was merely a “virtual” leaders’ meeting.

When the date was set for this summit – alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act – it was clear that there would be a lot to cover. The hot AI summer is upon us:

The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls … for a six-month pause in development of powerful systems.

There has been no pause. The Bletchley declaration, signed by UK, US, EU, China and others, hailed the “enormous global opportunities” from AI but also warned of its potential for causing “catastrophic” harm. It also secured a commitment from big tech firms including OpenAI, Google and Mark Zuckerberg’s Meta to cooperate with governments on testing their models before they are released.

Jan Leike, a former senior employee at OpenAI, has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

As Vox's Kelsey Piper, who first reported on the agreements, put it: I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI”, has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

“Slop” is what you get when you shove artificial intelligence-generated material up on the web for anyone to view.

Unlike a chatbot, the slop isn’t interactive, and is rarely intended to actually answer readers’ questions or serve their needs.


OpenAI on the defensive after multiple PR setbacks in one week

The OpenAI logo under a raincloud. (credit: Benj Edwards | Getty Images)

Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility—and is more prominently receiving pushback to its AI approach beyond what it has seen from tech pundits and government regulators.

OpenAI's rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called "Sky" that some have said sounds like Johansson, it was GPT-4o's seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson's voice.

Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster

An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: new OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman, and CTO Mira Murati. (credit: Ilya Sutskever / X)

On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.

"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."

Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in Altman's shocking removal as CEO last November. While it later emerged that the firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and, on behalf of the board, personally delivered the news to Altman that he was being fired.

Major ChatGPT-4o update allows audio-video talks with an “emotional” AI chatbot

Abstract multicolored waveform. (credit: Getty Images)

On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input. It operates faster than OpenAI's previous best model, GPT-4 Turbo, and will be free for ChatGPT users and available as a service through OpenAI's API, rolling out over the next few weeks, OpenAI says.

OpenAI revealed the new audio conversation and vision comprehension capabilities in a YouTube livestream titled "OpenAI Spring Update," presented by OpenAI CTO Mira Murati and employees Mark Chen and Barret Zoph, which included live demos of GPT-4o in action.

OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models. With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio in a way that all inputs and outputs "are processed by the same neural network."
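
To give a feel for that conversational latency from the developer side, here is a minimal sketch, assuming OpenAI's Python SDK (v1.x), that streams a GPT-4o reply token by token instead of waiting for the full response. The prompt is an illustrative assumption, and this text-only call does not exercise the new audio interface described above.

# Minimal streaming sketch using the openai package (v1.x). Printing partial
# output as it arrives is how chat UIs keep perceived latency low. Assumes
# OPENAI_API_KEY is set; the prompt is illustrative, and this is text-only.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Greet me in one short sentence."}],
    stream=True,  # request incremental deltas instead of one final message
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the final chunk's delta can be None
        print(delta, end="", flush=True)
print()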
