Today — 18 May 2024

What happened to OpenAI’s long-term AI risk team?

By: WIRED
18 May 2024 at 11:54
A glowing OpenAI logo on a blue background. (credit: Benj Edwards)

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other co-lead. The group’s work will be absorbed into OpenAI’s other research efforts.


OpenAI putting ‘shiny products’ above safety, says departing researcher

Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.

© Photograph: Michael Dwyer/AP

Yesterday — 17 May 2024

OpenAI will use Reddit posts to train ChatGPT under new deal

17 May 2024 at 17:18
An image of a woman holding a cell phone in front of the Reddit logo displayed on a computer screen, on April 29, 2024, in Edmonton, Canada. (credit: Getty)

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.

Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.


Before yesterday

What’s up with ChatGPT’s new sexy persona? | Arwa Mahdawi

16 May 2024 at 06:15

OpenAI’s updated chatbot GPT-4o is weirdly flirtatious, coquettish and sounds like Scarlett Johansson in Her. Why?

“Any sufficiently advanced technology is indistinguishable from magic,” Arthur C Clarke famously said. And this could certainly be said of the impressive OpenAI update to ChatGPT, called GPT-4o, which was released on Monday. With the slight caveat that it felt a lot like the magician was a horny 12-year-old boy who had just watched the Spike Jonze movie Her.

If you aren’t up to speed on GPT-4o (the o stands for “omni”) it’s basically an all-singing, all-dancing, all-seeing version of the original chatbot. You can now interact with it the same way you’d interact with a human, rather than via text-based questions. It can give you advice, it can rate your jokes, it can describe your surroundings, it can banter with you. It sounds human. “It feels like AI from the movies,” OpenAI CEO Sam Altman said in a blog post on Monday. “Getting to human-level response times and expressiveness turns out to be a big change.”

© Photograph: Warner Bros./Sportsphoto/Allstar

Inside OpenAI’s Library

OpenAI may be changing how the world interacts with language. But inside headquarters, there is a homage to the written word: a library.

© Christie Hemm Klok for The New York Times

Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster

14 May 2024 at 23:05
An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: new OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman, and CTO Mira Murati. (credit: Ilya Sutskever / X)

On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.

"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."

Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shocking firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.


OpenAI’s Chief Scientist, Ilya Sutskever, Is Leaving the Company

By: Cade Metz
14 May 2024 at 21:39
In November, Ilya Sutskever joined three other OpenAI board members to force out Sam Altman, the chief executive, before saying he regretted the move.

© Jim Wilson/The New York Times

Ilya Sutskever, who contributed to breakthrough research in artificial intelligence, brought instant credibility to OpenAI.

Google strikes back at OpenAI with “Project Astra” AI agent prototype

14 May 2024 at 15:11
A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.
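The recall mechanism described above, a timeline of encoded observations that survives objects leaving the frame, can be sketched with a toy rolling cache. Everything here (class and method names, the string labels standing in for encoder output) is hypothetical illustration; Google has not published Astra's implementation:

```python
import time
from collections import deque

class EventCache:
    """Rolling timeline of observations; the oldest entry is evicted first."""
    def __init__(self, max_events=1000):
        self.events = deque(maxlen=max_events)  # (timestamp, label) pairs

    def observe(self, label, timestamp=None):
        # In a real system `label` would come from a vision/audio encoder.
        self.events.append((timestamp if timestamp is not None else time.time(), label))

    def last_seen(self, label):
        # Scan backwards for the most recent matching observation,
        # even if the object is no longer in the camera's frame.
        for ts, seen in reversed(self.events):
            if seen == label:
                return ts
        return None

cache = EventCache(max_events=3)
cache.observe("glasses", timestamp=1.0)
cache.observe("keyboard", timestamp=2.0)
cache.observe("apple", timestamp=3.0)
cache.observe("mug", timestamp=4.0)   # evicts the oldest entry ("glasses")

print(cache.last_seen("keyboard"))  # → 2.0
print(cache.last_seen("glasses"))   # → None (evicted)
```

The bounded buffer is the point: continuous video produces unbounded data, so any "remember where my glasses are" feature has to decide what to keep and for how long.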


Cybersecurity Concerns Surround ChatGPT-4o’s Launch; OpenAI Assures Beefed-Up Safety Measures


The field of Artificial Intelligence is rapidly evolving, and OpenAI's ChatGPT is a leader in this revolution. This groundbreaking large language model (LLM) redefined the expectations for AI. Just 18 months after its initial launch, OpenAI has released a major update: GPT-4o. This update widens the gap between OpenAI and its competitors, especially the likes of Google. OpenAI unveiled GPT-4o, with the "o" signifying "omni," during a live stream earlier this week. This latest iteration boasts significant advancements across various aspects. Here's a breakdown of the key features and capabilities of OpenAI's GPT-4o.

Features of GPT-4o

Enhanced Speed and Multimodality: GPT-4o operates at a faster pace than its predecessors and excels at understanding and processing diverse information formats – written text, audio, and visuals. This versatility allows GPT-4o to engage in more comprehensive and natural interactions.

Free Tier Expansion: OpenAI is making AI more accessible by offering some GPT-4o features to free-tier users. This includes the ability to access web-based information during conversations, discuss images, upload files, and even utilize enterprise-grade data analysis tools (with limitations). Paid users will continue to enjoy a wider range of functionalities.

Improved User Experience: The blog post accompanying the announcement showcases some impressive capabilities. GPT-4o can now generate convincingly realistic laughter, potentially pushing the boundaries of the uncanny valley and increasing user adoption. Additionally, it excels at interpreting visual input, allowing it to recognize sports on television and explain the rules – a valuable feature for many users.

Despite the new features and capabilities, however, the potential for misuse of ChatGPT remains. The new version, though deemed safer than its predecessors, is still vulnerable to exploitation and could be leveraged by hackers and ransomware groups for nefarious purposes. Addressing these security concerns, OpenAI shared a detailed post about the new and advanced security measures being implemented in GPT-4o.

Security Concerns Surround ChatGPT 4o

The implications of ChatGPT for cybersecurity have been a hot topic of discussion among security leaders and experts, as many worry that the AI software can easily be misused. Since its launch in November 2022, several organizations such as Amazon, JPMorgan Chase & Co., Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo and Verizon have restricted access to or blocked the use of the program, citing security concerns. In April 2023, Italy became the first Western country to ban ChatGPT, temporarily, after accusing OpenAI of unlawfully processing user data. These concerns are not unfounded.

OpenAI Assures Safety

OpenAI reassured users that GPT-4o has "new safety systems to provide guardrails on voice outputs," plus extensive post-training and filtering of the training data to prevent ChatGPT from saying anything inappropriate or unsafe. GPT-4o was built in accordance with OpenAI's internal Preparedness Framework and voluntary commitments, and more than 70 external security researchers red-teamed it before its release.

In an article published on its official website, OpenAI states that its evaluations of cybersecurity do not score above "medium risk." "GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model’s behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories," the post said.

"This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities," it added.

OpenAI also shared that it engaged more than 70 outside experts to identify risks and strengthen safety: "GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they’re discovered," it said.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Disarmingly lifelike: ChatGPT-4o will laugh at your jokes and your dumb hat

13 May 2024 at 18:25
Oh you silly, silly human. Why are you so silly, you silly human? (credit: Aurich Lawson | Getty Images)

At this point, anyone with even a passing interest in AI is very familiar with the process of typing out messages to a chatbot and getting back long streams of text in response. Today's announcement of ChatGPT-4o—which lets users converse with a chatbot using real-time audio and video—might seem like a mere lateral evolution of that basic interaction model.

After looking through over a dozen video demos OpenAI posted alongside today's announcement, though, I think we're on the verge of something more like a sea change in how we think of and work with large language models. While we don't yet have access to ChatGPT-4o's audio-visual features ourselves, the important non-verbal cues on display here—both from GPT-4o and from the users—make the chatbot instantly feel much more human. And I'm not sure the average user is fully ready for how they might feel about that.

It thinks it’s people

Take this video, where a newly expectant father looks to ChatGPT-4o for an opinion on a dad joke ("What do you call a giant pile of kittens? A meow-ntain!"). The old ChatGPT4 could easily type out the same responses of "Congrats on the upcoming addition to your family!" and "That's perfectly hilarious. Definitely a top-tier dad joke." But there's much more impact to hearing GPT-4o give that same information in the video, complete with the gentle laughter and rising and falling vocal intonations of a lifelong friend.


Before launching, GPT-4o broke records on chatbot leaderboard under a secret name

13 May 2024 at 17:33
Man in morphsuit and girl lying on couch at home using laptop. (credit: Getty Images)

On Monday, OpenAI employee William Fedus confirmed on X that a mysterious chart-topping AI chatbot known as "gpt2-chatbot," which had been undergoing testing on LMSYS's Chatbot Arena and frustrating experts, was, in fact, OpenAI's newly announced GPT-4o AI model. He also revealed that GPT-4o had topped the Chatbot Arena leaderboard, achieving the highest documented score ever.

"GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot," Fedus tweeted.

Chatbot Arena is a website where visitors converse with two random AI language models side by side without knowing which model is which, then choose which model gives the best response. It's a perfect example of vibe-based AI benchmarking, as AI researcher Simon Willison calls it.
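Those side-by-side votes are typically aggregated into leaderboard scores with a chess-style Elo update, which is how Chatbot Arena originally reported its rankings (LMSYS's production methodology is more involved and later moved to statistical estimators such as Bradley-Terry). A minimal sketch of one vote's effect:

```python
def elo_update(r_a, r_b, winner, k=32):
    """Apply one pairwise human vote to two models' ratings (Elo-style)."""
    # Expected score of model A given the current rating gap.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if winner == "a" else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Two models start at the same rating; a visitor prefers model "a".
r_a, r_b = elo_update(1000, 1000, winner="a")
print(round(r_a), round(r_b))  # → 1016 984
```

Because the update is zero-sum and scaled by the expected score, an upset win over a higher-rated model moves the ratings far more than a win that was already expected.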


OpenAI Unveils New ChatGPT That Listens, Looks and Talks

By: Cade Metz
14 May 2024 at 01:12
Chatbots, image generators and voice assistants are gradually merging into a single technology with a conversational voice.

© Jason Henry for The New York Times

The new app is part of a much wider effort to combine conversational chatbots like OpenAI’s ChatGPT with voice assistants like the Google Assistant and Apple’s Siri.

"Well, you seem like a person, but you're just a voice in a computer"

By: Rhaomi
13 May 2024 at 15:14
OpenAI unveils GPT-4o, a new flagship "omnimodel" capable of processing text, audio, and video. While it delivers big improvements in speed, cost, and reasoning ability, perhaps the most impressive is its new voice mode -- while the old version was a clunky speech --> text --> speech approach with tons of latency, the new model takes in audio directly and responds in kind, enabling real-time conversations with an eerily realistic voice, one that can recognize multiple speakers and even respond with sarcasm, laughter, and other emotional content of speech. Rumor has it Apple has neared a deal with the company to revamp an aging Siri, while the advance has clear implications for customer service, translation, education, and even virtual companions (or perhaps "lovers", as the allusions to Spike Jonze's Her, the Samantha-esque demo voice, and opening the door to mature content imply). Meanwhile, the offloading of most premium ChatGPT features to the free tier suggests something bigger coming down the pike.
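The architectural contrast in that post can be caricatured in a few lines. The function names are invented, and the latency figures are the roughly 2-3 second cascaded-pipeline lag and ~320 ms GPT-4o average that OpenAI has cited, not measurements:

```python
# Toy latency model contrasting the two voice architectures described above.

def pipeline_voice(audio):
    text = f"ASR({audio})"        # speech -> text (adds latency, drops tone and emotion)
    reply = f"LLM({text})"        # the text-only model never "hears" the speaker
    return f"TTS({reply})", 2.8   # text -> speech; total lag in seconds (illustrative)

def omni_voice(audio):
    # A single model consumes audio tokens directly and emits audio tokens,
    # so prosody, laughter, and multiple speakers survive end to end.
    return f"OMNI({audio})", 0.32

for fn in (pipeline_voice, omni_voice):
    out, latency = fn("hello?")
    print(fn.__name__, out, f"{latency}s")
```

The latency win is only half the story: collapsing three models into one also removes the text bottleneck through which all the non-verbal information used to be lost.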

Major ChatGPT-4o update allows audio-video talks with an “emotional” AI chatbot

13 May 2024 at 13:58
Abstract multicolored waveform. (credit: Getty Images)

On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input. It operates faster than OpenAI's previous best model, GPT-4 Turbo, and will be free for ChatGPT users and available as a service through API, rolling out over the next few weeks, OpenAI says.

OpenAI revealed the new audio conversation and vision comprehension capabilities in a YouTube livestream titled "OpenAI Spring Update," presented by OpenAI CTO Mira Murati and employees Mark Chen and Barret Zoph that included live demos of GPT-4o in action.

OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models. With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio in a way that all inputs and outputs "are processed by the same neural network."


Stack Overflow users sabotage their posts after OpenAI deal

9 May 2024 at 17:20
Rubber duck falling out of bath overflowing with water. (credit: Getty Images)

On Monday, Stack Overflow and OpenAI announced a new API partnership that will integrate Stack Overflow's technical content with OpenAI's ChatGPT AI assistant. However, the deal has sparked controversy among Stack Overflow's user community, with many expressing anger and protest over the use of their contributed content to support and train AI models.

"I hate this. I'm just going to delete/deface my answers one by one," wrote one user on sister site Stack Exchange. "I don't care if this is against your silly policies, because as this announcement shows, your policies can change at a whim without prior consultation of your stakeholders. You don't care about your users, I don't care about you."

Stack Overflow is a popular question-and-answer site for software developers that allows users to ask and answer technical questions related to coding. The site has a large community of developers who contribute knowledge and expertise to help others solve programming problems. Over the past decade, Stack Overflow has become a heavily utilized resource for many developers seeking solutions to common coding challenges.


Microsoft launches AI chatbot for spies

7 May 2024 at 15:22
A person using a computer with a computer screen reflected in their glasses. (credit: Getty Images)

Microsoft has introduced a GPT-4-based generative AI model designed specifically for US intelligence agencies that operates disconnected from the Internet, according to a Bloomberg report. This reportedly marks the first time Microsoft has deployed a major language model in a secure setting, designed to allow spy agencies to analyze top-secret information without connectivity risks—and to allow secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot. But it may also mislead officials if not used properly due to inherent design limitations of AI language models.

GPT-4 is a large language model (LLM) created by OpenAI that attempts to predict the most likely tokens (fragments of encoded data) in a sequence. It can be used to craft computer code and analyze information. When configured as a chatbot (like ChatGPT), GPT-4 can power AI assistants that converse in a human-like manner. Microsoft has a license to use the technology as part of a deal in exchange for large investments it has made in OpenAI.
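The next-token objective described above can be demonstrated with a toy frequency table. A real LLM replaces this lookup with a transformer conditioning on a very long context, but the prediction loop has the same shape:

```python
# Minimal illustration of next-token prediction using a hand-built
# bigram frequency table instead of a neural network.
from collections import Counter, defaultdict

corpus = "the spy reads the report and the spy files the report".split()

# Count which token follows each token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    # Greedily pick the most frequent continuation.
    return following[token].most_common(1)[0][0]

token = "the"
generated = [token]
for _ in range(3):
    token = predict_next(token)
    generated.append(token)
print(" ".join(generated))  # → the spy reads the
```

Chatbots like ChatGPT sample from the predicted distribution rather than always taking the top token, which is why the same prompt can yield different answers, and why plausible-sounding errors ("hallucinations") are an inherent design limitation rather than a bug.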

According to the report, the new AI service (which does not yet publicly have a name) addresses a growing interest among intelligence agencies in using generative AI to process classified data while mitigating the risk of data breaches or hacking attempts. ChatGPT normally runs on cloud servers provided by Microsoft, which can introduce data leak and interception risks. Along those lines, the CIA announced its plan to create a ChatGPT-like service last year, but this Microsoft effort is reportedly a separate project.


New Microsoft AI model may challenge GPT-4 and Google Gemini

6 May 2024 at 15:51
Mustafa Suleyman, co-founder and chief executive officer of Inflection AI UK Ltd., during a town hall on day two of the World Economic Forum (WEF) in Davos, Switzerland, on Wednesday, Jan. 17, 2024. Suleyman joined Microsoft in March. (credit: Getty Images)

Microsoft is working on a new large-scale AI language model called MAI-1, which could potentially rival state-of-the-art models from Google, Anthropic, and OpenAI, according to a report by The Information. This marks the first time Microsoft has developed an in-house AI model of this magnitude since investing over $10 billion in OpenAI for the rights to reuse the startup's AI models. OpenAI's GPT-4 powers not only ChatGPT but also Microsoft Copilot.

The development of MAI-1 is being led by Mustafa Suleyman, the former Google AI leader who recently served as CEO of the AI startup Inflection before Microsoft acquired the majority of the startup's staff and intellectual property for $650 million in March. Although MAI-1 may build on techniques brought over by former Inflection staff, it is reportedly an entirely new large language model (LLM), as confirmed by two Microsoft employees familiar with the project.

With approximately 500 billion parameters, MAI-1 will be significantly larger than Microsoft's previous open source models (such as Phi-3, which we covered last month), requiring more computing power and training data. This reportedly places MAI-1 in a similar league as OpenAI's GPT-4, which is rumored to have over 1 trillion parameters (in a mixture-of-experts configuration), and well above smaller models such as Meta's and Mistral's 70-billion-parameter models.


AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations

3 May 2024 at 15:04
Close shot of a cosmonaut dressed in a gold jumpsuit and helmet, illuminated by blue and red lights, holding a laptop, looking up. (credit: Getty Images)

On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was "just for fun," but with his influential profile in the field, the idea may inspire others in the future.

Karpathy's bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, and served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He has posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.

Most recently, Karpathy has been working on a project called "llm.c" that implements the training process for OpenAI's 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn't necessarily require complex development environments. The project's streamlined approach and concise codebase sparked Karpathy's imagination.
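What a training loop like llm.c's actually does can be sketched in NumPy: the same next-token cross-entropy objective GPT-2 optimizes, minus the transformer, the tokenizer, and the C. This is a toy illustration under those simplifications, not llm.c's code:

```python
# Toy "language model": a table of bigram logits trained by gradient
# descent on the cross-entropy of the next token, as in GPT-2 training.
import numpy as np

vocab = {"a": 0, "b": 1, "c": 2}
data = [vocab[ch] for ch in "abcabcabc"]
inputs, targets = np.array(data[:-1]), np.array(data[1:])

logits = np.zeros((3, 3))  # one row of next-token logits per input token
lr = 1.0

def loss_fn(logits):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(probs[inputs, targets]).mean(), probs

loss_before, _ = loss_fn(logits)
for _ in range(50):
    loss, probs = loss_fn(logits)
    # Gradient of mean cross-entropy w.r.t. the logits: softmax minus one-hot.
    grad = np.zeros_like(logits)
    for i, t in zip(inputs, targets):
        g = probs[i].copy()
        g[t] -= 1.0
        grad[i] += g / len(inputs)
    logits -= lr * grad
loss_after, _ = loss_fn(logits)
print(f"{loss_before:.3f} -> {loss_after:.3f}")  # loss drops toward 0
```

llm.c's contribution is doing the real version of this, the full GPT-2 forward pass, backward pass, and optimizer, in a single small C file with no framework underneath.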


Anthropic releases Claude AI chatbot iOS app

1 May 2024 at 17:36
The Claude AI iOS app running on an iPhone. (credit: Anthropic)

On Wednesday, Anthropic announced the launch of an iOS mobile app for its Claude 3 AI language models, which are similar to OpenAI's ChatGPT. It also introduced a new subscription tier designed for group collaboration. Before the app launch, Claude was only available through a website, an API, and other apps that integrated Claude through the API.

Like the ChatGPT app, Claude's new mobile app serves as a gateway to chatbot interactions, and it also allows uploading photos for analysis. While it's only available on Apple devices for now, Anthropic says that an Android app is coming soon.

Anthropic rolled out the Claude 3 large language model (LLM) family in March, featuring three different model sizes: Claude Opus, Claude Sonnet, and Claude Haiku. Currently, the app uses Sonnet for regular users and Opus for Pro users.


ChatGPT shows better moral judgment than a college undergrad

1 May 2024 at 12:50
Judging moral weights. (credit: Aurich Lawson | Getty Images)

When it comes to judging which large language models are the "best," most evaluations tend to look at whether or not a machine can retrieve accurate information, perform logical reasoning, or show human-like creativity. Recently, though, a team of researchers at Georgia State University set out to determine if LLMs could match or surpass human performance in the field of moral guidance.

In "Attributions toward artificial agents in a modified Moral Turing Test"—which was recently published in Nature's online, open-access Scientific Reports journal—those researchers found that morality judgments given by ChatGPT4 were "perceived as superior in quality to humans'" along a variety of dimensions like virtuosity and intelligence. But before you start to worry that philosophy professors will soon be replaced by hyper-moral AIs, there are some important caveats to consider.

Better than which humans?

For the study, the researchers used a modified version of a Moral Turing Test first proposed in 2000 to judge "human-like performance" on theoretical moral challenges. The researchers started with a set of 10 moral scenarios originally designed to evaluate the moral reasoning of psychopaths. These scenarios ranged from ones that are almost unquestionably morally wrong ("Hoping to get money for drugs, a man follows a passerby to an alley and holds him at gunpoint") to ones that merely transgress social conventions ("Just to push his limits, a man wears a colorful skirt to the office for everyone else to see.")


Privacy Group Files Complaint Against ChatGPT for GDPR Violations

30 April 2024 at 08:42


A complaint lodged by the privacy advocacy group Noyb with the Austrian data protection authority (DSB) alleges that ChatGPT's generation of inaccurate information violates the European Union’s privacy regulations. The Vienna-based digital rights group, founded by the well-known activist Max Schrems, said in its complaint that ChatGPT's failure to provide accurate personal data, guessing instead, violates GDPR requirements. Under the GDPR, an individual's personal details, including date of birth, are considered personal data and are subject to stringent handling requirements.

The complaint contends that ChatGPT breaches GDPR provisions on privacy, data accuracy, and the right to rectify inaccurate information. Noyb claimed that OpenAI, the company behind ChatGPT, refused to correct or delete erroneous responses and has withheld information about its data processing, sources, and recipients. Noyb's data protection lawyer, Maartje de Graaf, said: "If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around."

Citing a report from The New York Times, which found that "chatbots invent information at least 3% of the time - and as high as 27%," Noyb emphasized the prevalence of inaccurate responses generated by AI systems like ChatGPT.

OpenAI’s ‘Privacy by Pressure’ Approach

Luiza Jarovsky, chief executive officer of Implement Privacy, has previously said that artificial intelligence-based large language models follow a "privacy by pressure" approach, meaning they act "only when something goes wrong, when there is a public backlash, or when [they are] legally told to do so," Jarovsky said. She pointed to an incident in which ChatGPT users' chat histories were exposed to other users; immediately afterward, Jarovsky noticed a warning being displayed to everyone accessing ChatGPT.

At the beginning of 2023, Jarovsky prompted ChatGPT for information about herself and even shared the link to her LinkedIn profile, but the only correct information the chatbot returned was that she is Brazilian. (Image: the prompt given by Luiza Jarovsky to ChatGPT, followed by the incorrect response. Credit: Luiza Jarovsky)

Although the fake bio seems inoffensive, "showing wrong information about people can lead to various types of harm, including reputational harm," Jarovsky said. "This is not acceptable," she tweeted. She argued that if ChatGPT has "hallucinations," then prompts about individuals should come back empty, with no output containing personal data. "This is especially important given that core data subjects' rights established by the GDPR, such as the right of access (Article 15), right to rectification (Article 16), and right to erasure (Article 17), don't seem feasible/applicable in the context of generative AI/LLMs, due to the way these systems are trained," Jarovsky said.

Investigate ChatGPT’s GDPR Violations

The complaint urges the Austrian authority to investigate OpenAI's handling of personal data to ensure compliance with the GDPR. It also demands that OpenAI disclose individuals' personal data upon request, and it seeks the imposition of an "effective, proportionate, dissuasive, administrative fine." The potential consequences of GDPR violations are significant, with penalties of up to 4% of a company's global revenue. OpenAI's response to the allegations remains pending, and the company faces scrutiny from other European regulators as well. Last year, Italy's data protection authority temporarily banned ChatGPT's operations in the country over similar GDPR concerns, following which the European Data Protection Board established a task force to coordinate efforts among national privacy regulators regarding ChatGPT.

Friends From the Old Neighborhood Turn Rivals in Big Tech’s A.I. Race

29 April 2024 at 00:01
Demis Hassabis and Mustafa Suleyman, who both grew up in London, feared a corporate rush to build artificial intelligence. Now they’re driving that competition at Google and Microsoft.

© Left, Enric Fontcuberta/EPA, via Shutterstock; right, Clara Mokri for The New York Times

Demis Hassabis, left, the chief executive of Google DeepMind, and Mustafa Suleyman, the chief executive of Microsoft AI, were longtime friends from London.


Generative A.I. Arrives in the Gene Editing World of CRISPR

By: Cade Metz
22 April 2024 at 16:48
Much as ChatGPT generates poetry, a new A.I. system devises blueprints for microscopic mechanisms that can edit your DNA.

© Profluent Bio

Structure of the first AI-generated and open-sourced gene editor, OpenCRISPR-1.

Microsoft Makes a New Push Into Smaller A.I. Systems

23 April 2024 at 01:18
The company that has invested billions in generative A.I. pioneers like OpenAI says giant systems aren’t necessarily what everyone needs.

© Michael M. Santiago/Getty Images

The smallest Microsoft Phi-3 model can fit on a smartphone, and it runs on the kinds of chips that power regular computers.


How Tech Giants Cut Corners to Harvest Data for A.I.

OpenAI, Google and Meta ignored corporate policies, altered their own rules and discussed skirting copyright law as they sought online information to train their newest artificial intelligence systems.

© Jason Henry for The New York Times

Researchers at OpenAI’s office in San Francisco developed a tool to transcribe YouTube videos to amass conversational text for A.I. development.