Google's AI Overview is flawed by design, and a new company blog post hints at why
On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by publishing a follow-up blog post titled "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it is admitting as much.
To recap, the AI Overview feature, which the company showed off at Google I/O a few weeks ago, aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.
The New ChatGPT Offers a Lesson in AI Hype
OpenAI says Russian and Israeli groups used its tools to spread disinformation
Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences
OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.
Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.
Report: Apple and OpenAI have signed a deal to partner on AI
Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.
Bloomberg previously reported that the deal was in the works. The Information's news appeared in a longer article about Altman and his growing influence within OpenAI.
"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI's conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.
Tech giants form AI group to counter Nvidia with new interconnect standard
On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.
The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications (necessary for running neural network architectures) in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL), created by Intel in 2019, which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.
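The split-and-gather pattern that makes fast interconnects matter can be sketched in plain Python. This is a conceptual illustration only, not NVLink or UALink code: a large matrix multiply is partitioned by rows across "devices," each computes its slice independently, and the gather step at the end is exactly the traffic an accelerator interconnect speeds up.

```python
def matmul(a, b):
    """Plain row-by-column matrix multiply on nested lists."""
    cols = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def sharded_matmul(a, b, num_devices):
    """Split rows of A across num_devices workers, then concatenate slices."""
    shard = (len(a) + num_devices - 1) // num_devices
    # Each slice could run on a separate accelerator; here they run serially.
    slices = [matmul(a[i:i + shard], b) for i in range(0, len(a), shard)]
    # The gather: this is the cross-device communication an interconnect carries.
    return [row for s in slices for row in s]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]  # identity, so the product should equal A
assert sharded_matmul(A, B, 2) == matmul(A, B) == A
```

Real systems also shard the weights and overlap communication with compute, but the partition-compute-gather shape is the same.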
Japanese Man Arrested for GenAI Ransomware as AI Jailbreak Concerns Grow
AI Jailbreak Tools and Methods Unclear
News reports on Hayashi's arrest have been lacking in details on the tools and methods he used to create the ransomware. The Japan Times reported that Hayashi, a former factory worker, "is not an expert on malware. He allegedly learned online how to ask AI tools questions that would elicit information on how to create malware." Hayashi came under suspicion after police arrested him in March "for allegedly using fake identification to obtain a SIM card registered under someone else's name," the paper reported.
The Japan News, which reported that Hayashi is unemployed, said police found "a homemade virus on a computer" following the March arrest. The News said police suspect he "used his home computer and smartphone to combine information about creating malware programs obtained after giving instructions to several generative AI systems in March last year." Hayashi "allegedly gave instructions to the AI systems while concealing his purpose of creating the virus to obtain design information necessary for encrypting files and demanding ransom," the News reported. "He is said to have searched online for ways to illegally obtain information." Hayashi reportedly admitted to the charges during questioning and told police, "I wanted to make money through ransomware. I thought I could do anything if I asked AI." There have been no reports of damage from the ransomware he created, the News said.
LLM Jailbreak Research Heats Up
The news comes as research on AI jailbreaking and attack techniques has grown, with a number of recent reports on risks and possible solutions. In a paper posted to arXiv this week, the CISPA researchers said they were able to more than double their attack success rate (ASR) on GPT-4o's voice mode with an attack they dubbed VOICEJAILBREAK, "a novel voice jailbreak attack that humanizes GPT-4o and attempts to persuade it through fictional storytelling (setting, character, and plot)." Another arXiv paper, posted in February by researchers at the University of California at Berkeley, looked at a range of risks associated with GenAI tools such as Microsoft Copilot and ChatGPT, along with possible solutions, such as the development of an "AI firewall" to monitor and change LLM inputs and outputs if necessary.
And earlier this month, OT and IoT security company SCADAfence outlined a wide range of AI tools, threat actors, and attack techniques. In addition to general-use chatbots like ChatGPT and Google Gemini, the report looked at "dark LLMs" created for malicious purposes, such as WormGPT, FraudGPT, DarkBERT, and DarkBART. SCADAfence recommended that OT and SCADA organizations follow best practices such as limiting network exposure for control systems, patching, access control, and up-to-date offline backups. GenAI uses and misuses are also expected to be the topic of a number of presentations at Gartner's Security and Risk Management Summit next week in National Harbor, Maryland, just outside the U.S. capital.
Microsoft's Windows Recall: Cutting-Edge Search Tech or Creepy Overreach?
SecurityWeek editor-at-large Ryan Naraine examines the broad tension between tech innovation and privacy rights at a time when ChatGPT-like bots and generative-AI apps are starting to dominate the landscape.
The post Microsoft's Windows Recall: Cutting-Edge Search Tech or Creepy Overreach? appeared first on SecurityWeek.
OpenAI board first learned about ChatGPT from Twitter, according to former member
In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.
OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.
"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.
OpenAI Announces Safety and Security Committee Amid New AI Model Development
OpenAI's Safety and Security Committee Composition and Responsibilities
The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D'Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days."
The committee's initial task is to evaluate and further develop OpenAI's existing processes and safeguards. It is expected to make recommendations to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations. The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.
Development of the New AI Model
OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." Frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversations based on extensive datasets. The company also recently launched its newest flagship model, GPT-4o ('o' stands for omni), a multilingual, multimodal generative pre-trained transformer. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free to use, but with a usage limit that is five times higher for ChatGPT Plus subscribers. GPT-4o has a context window supporting up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.
Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.
OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model
OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.
OpenAI training its next major AI model, forms new safety committee
On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to two weeks of public setbacks for the company.
Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).
Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.
OpenAI Says It Has Begun Training a New Flagship A.I. Model
Bing outage shows just how little competition Google search really has
Bing, Microsoft's search engine platform, went down in the very early morning today. That meant that searches from Microsoft's Edge browsers that had yet to change their default providers didn't work. It also meant that services relying on Bing's search API (Microsoft's own Copilot, ChatGPT search, Yahoo, Ecosia, and DuckDuckGo) similarly failed.
Services were largely restored by the morning Eastern work hours, but the timing feels apt, concerning, or some combination of the two. Google, the consistently dominating search platform, just last week announced and debuted AI Overviews as a default addition to all searches. If you don't want an AI response but still want to use Google, you can hunt down the new "Web" option in a menu, or you can, per Ernie Smith, tack "&udm=14" onto your search or use Smith's own "Konami code" shortcut page.
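The "&udm=14" trick mentioned above is just a query parameter appended to a normal Google search URL. A minimal sketch of constructing such a URL in Python, using only the standard library:

```python
from urllib.parse import urlencode

def web_only_search_url(query):
    """Build a Google search URL with udm=14, the parameter users have found
    requests the plain 'Web' results view without the AI Overview."""
    params = urlencode({"q": query, "udm": "14"})
    return "https://www.google.com/search?" + params

url = web_only_search_url("pizza recipe")
# e.g. https://www.google.com/search?q=pizza+recipe&udm=14
```

This is equivalent to picking the new "Web" tab by hand; udm=14 is an observed, not officially documented, parameter, so Google could change it at any time.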
If dismay about AI's hallucinations, power draw, or pizza recipes concerns you, along with perhaps broader Google issues involving privacy, tracking, news, SEO, or monopoly power, most of your other major options were brought down by a single API outage this morning. Moving past that kind of single point of vulnerability will take some work, both by the industry and by you, the person wondering if there's a real alternative.
Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama
OpenAI is sticking to its story that it never intended to copy Scarlett Johansson's voice when seeking an actor for ChatGPT's "Sky" voice mode.
The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman's defense against Johansson's claims that Sky was made to sound "eerily similar" to her critically acclaimed voice acting performance in the sci-fi film Her.
Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.
OpenAI and Wall Street Journal owner News Corp sign content deal
Deal lets ChatGPT maker use all articles from Wall Street Journal, New York Post, Times and Sunday Times for AI model development
ChatGPT developer OpenAI has signed a deal to bring news content from the Wall Street Journal, the New York Post, the Times and the Sunday Times to the artificial intelligence platform, the companies said on Wednesday. Neither party disclosed a dollar figure for the deal.
The deal will give OpenAI access to current and archived content from all of News Corp's publications. The deal comes weeks after the AI heavyweight signed a deal with the Financial Times to license its content for the development of AI models. Earlier this year, OpenAI inked a similar contract with Axel Springer, the parent company of Business Insider and Politico.
Scarlett Johansson Said No, but OpenAI's Virtual Assistant Sounds Just Like Her
Productivity soars in sectors of global economy most exposed to AI, says report
Employers in UK, one of 15 countries studied, willing to pay 14% wage premium for jobs requiring AI skills
The sectors of the global economy most heavily exposed to artificial intelligence (AI) are witnessing a marked productivity increase and command a significant wage premium, according to a report.
Boosting hopes that AI might help lift the global economy out of a 15-year, low-growth trough, a PwC study found productivity growth was almost five times as rapid in parts of the economy where AI penetration was highest as in less exposed sectors.
New Windows AI feature records everything you've done on your PC
At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.
"Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds," Microsoft says on its website. "The snapshots are encrypted and saved on your PC's hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots."
By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they are searching for. It also allows users to search through teleconference meetings they've participated in and videos watched using an AI-powered feature that transcribes and translates speech.
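The snapshot-plus-timeline mechanism described above can be sketched conceptually in a few lines of Python. This is not Microsoft's implementation; it is a toy model, assuming each snapshot carries a timestamp and some searchable text (standing in for OCR or transcript data extracted from the screen):

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    timestamp: float   # seconds since recording started, for illustration
    text: str          # stand-in for OCR/transcript data from the screen image

@dataclass
class Timeline:
    snapshots: list = field(default_factory=list)

    def record(self, timestamp, text):
        """Append a new snapshot; a real system would also encrypt and store it."""
        self.snapshots.append(Snapshot(timestamp, text))

    def search(self, query):
        """Return matching snapshots in chronological order."""
        q = query.lower()
        return [s for s in self.snapshots if q in s.text.lower()]

tl = Timeline()
tl.record(0.0, "Browsing flight prices to Tokyo")
tl.record(5.0, "Editing quarterly report in Word")
tl.record(10.0, "Video call about the Tokyo trip")

hits = tl.search("tokyo")  # both Tokyo-related moments, oldest first
```

The privacy concern follows directly from this design: everything the search works over must first be recorded and retained, even if only locally.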
OpenAI on the defensive after multiple PR setbacks in one week
Since the launch of its latest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new level of public visibility, and is more prominently receiving pushback on its AI approach beyond what it has seen from tech pundits and government regulators.
OpenAI's rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.
While that September update included a voice called "Sky" that some have said sounds like Johansson, it was GPT-4o's seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson's voice.
Can AI Make the PC Cool Again? Microsoft Thinks So.
Scarlett Johansson says Altman insinuated that AI soundalike was intentional
OpenAI has paused a voice mode option for ChatGPT-4o, Sky, after backlash accusing the AI company of intentionally ripping off Scarlett Johansson's critically acclaimed voice-acting performance in the 2013 sci-fi film Her.
In a blog defending its casting decision for Sky, OpenAI went into great detail explaining its process for choosing the individual voice options for its chatbot. But ultimately, the company seemed pressed to admit that Sky's voice was just too similar to Johansson's to keep using it, at least for now.
"We believe that AI voices should not deliberately mimic a celebrity's distinctive voice; Sky's voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice," OpenAI's blog said.
ChatGPT suspends Scarlett Johansson-like voice as actor speaks out against OpenAI
OpenAI says 'Sky' is not an imitation of actor's voice after users compare it to AI companion character in film Her
Scarlett Johansson has spoken out against OpenAI after the company used a voice eerily resembling her own in its new ChatGPT product.
The actor said in a statement she was approached by OpenAI nine months ago to voice its AI system but declined for "personal reasons". Johansson was "shocked" and "angered" when she heard the voice option, which "sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said.
AI chatbots' safeguards can be easily bypassed, say UK researchers
All five systems tested were found to be "highly vulnerable" to attempts to elicit harmful responses
Guardrails to prevent artificial intelligence models behind chatbots from issuing illegal, toxic or explicit responses can be bypassed with simple techniques, UK government researchers have found.
The UK's AI Safety Institute (AISI) said systems it had tested were "highly vulnerable" to jailbreaks, a term for text prompts designed to elicit a response that a model is supposedly trained to avoid issuing.
What happened to OpenAI's long-term AI risk team?
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.
Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other co-lead. The group's work will be absorbed into OpenAI's other research efforts.
OpenAI putting 'shiny products' above safety, says departing researcher
Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising "shiny products" over safety, revealing that he quit after a disagreement over key aims reached "breaking point".
Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.
OpenAI will use Reddit posts to train ChatGPT under new deal
Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit's Data API, giving the generative AI firm real-time access to Reddit posts.
Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI's AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.
The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.
OpenAI's Flirty New Assistant, Google Guts the Web and We Play HatGPT
Inside OpenAI's Library
Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster
On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.
"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."
Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shocking firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.
OpenAI's Chief Scientist, Ilya Sutskever, Is Leaving the Company
Google strikes back at OpenAI with "Project Astra" AI agent prototype
Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.
Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.
Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.
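The "remember things no longer in the camera's frame" behavior described above implies a bounded rolling cache of recent observations. This is a conceptual sketch only, not Google's implementation: frame descriptions (standing in for encoded video frames) are kept in a fixed-size window, so older sightings eventually fall out of recall.

```python
from collections import deque

class FrameCache:
    """Toy model of a bounded rolling cache of observed frames."""

    def __init__(self, max_frames=100):
        self.frames = deque(maxlen=max_frames)  # oldest entries are evicted

    def observe(self, description):
        self.frames.append(description)

    def where_last_seen(self, obj):
        """Index of the most recent frame mentioning obj, or None if evicted."""
        for i, desc in enumerate(reversed(self.frames)):
            if obj in desc:
                return len(self.frames) - 1 - i
        return None

cache = FrameCache(max_frames=3)
for desc in ["glasses on desk", "speaker on shelf", "empty desk", "monitor with code"]:
    cache.observe(desc)

cache.where_last_seen("glasses")  # None: that frame fell out of the 3-frame window
cache.where_last_seen("speaker")  # 0: oldest frame still inside the window
```

The fixed `maxlen` illustrates the trade-off the demo glosses over: how far back an assistant can "remember" is bounded by how much encoded context it caches.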
A.I.'s 'Her' Era Has Arrived
Cybersecurity Concerns Surround ChatGPT 4o's Launch; OpenAI Assures Beefed-Up Safety Measures
Features of GPT-4o
Enhanced Speed and Multimodality: GPT-4o operates at a faster pace than its predecessors and excels at understanding and processing diverse information formats: written text, audio, and visuals. This versatility allows GPT-4o to engage in more comprehensive and natural interactions.
Free Tier Expansion: OpenAI is making AI more accessible by offering some GPT-4o features to free-tier users. This includes the ability to access web-based information during conversations, discuss images, upload files, and even utilize enterprise-grade data analysis tools (with limitations). Paid users will continue to enjoy a wider range of functionalities.
Improved User Experience: The blog post accompanying the announcement showcases some impressive capabilities. GPT-4o can now generate convincingly realistic laughter, potentially pushing the boundaries of the uncanny valley and increasing user adoption. Additionally, it excels at interpreting visual input, allowing it to recognize sports on television and explain the rules, a valuable feature for many users.
However, despite the new features and capabilities, the potential for misuse of ChatGPT is still on the rise. The new version, though deemed safer than previous versions, is still vulnerable to exploitation and can be leveraged by hackers and ransomware groups for nefarious purposes. Addressing the security concerns regarding the new version, OpenAI shared a detailed post about the new and advanced security measures being implemented in GPT-4o.
Security Concerns Surround ChatGPT 4o
The implications of ChatGPT for cybersecurity have been a hot topic of discussion among security leaders and experts, as many worry that the AI software can easily be misused. Since its launch in November 2022, several organizations, including Amazon, JPMorgan Chase & Co., Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo, and Verizon, have restricted access to or blocked the use of the program, citing security concerns. In April 2023, Italy became the first country in the world to ban ChatGPT after accusing OpenAI of stealing user data. These concerns are not unfounded.
OpenAI Assures Safety
OpenAI reassured people that GPT-4o has "new safety systems to provide guardrails on voice outputs," plus extensive post-training and filtering of the training data to prevent ChatGPT from saying anything inappropriate or unsafe. GPT-4o was built in accordance with OpenAI's internal Preparedness Framework and voluntary commitments, and more than 70 external security researchers red-teamed GPT-4o before its release. In an article published on its official website, OpenAI states that its evaluations of cybersecurity do not score above "medium risk."
"GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model's behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories," the post said.
"This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities," it added.
OpenAI shared that it also employed the services of over 70 experts to identify risks and amplify safety. "GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they're discovered," it said.
Disarmingly lifelike: ChatGPT-4o will laugh at your jokes and your dumb hat
At this point, anyone with even a passing interest in AI is very familiar with the process of typing out messages to a chatbot and getting back long streams of text in response. Today's announcement of ChatGPT-4oβwhich lets users converse with a chatbot using real-time audio and videoβmight seem like a mere lateral evolution of that basic interaction model.
After looking through over a dozen video demos OpenAI posted alongside today's announcement, though, I think we're on the verge of something more like a sea change in how we think of and work with large language models. While we don't yet have access to ChatGPT-4o's audio-visual features ourselves, the important non-verbal cues on display hereβboth from GPT-4o and from the usersβmake the chatbot instantly feel much more human. And I'm not sure the average user is fully ready for how they might feel about that.
It thinks it's people
Take this video, where a newly expectant father looks to ChatGPT-4o for an opinion on a dad joke ("What do you call a giant pile of kittens? A meow-ntain!"). The old ChatGPT-4 could easily have typed out the same responses of "Congrats on the upcoming addition to your family!" and "That's perfectly hilarious. Definitely a top-tier dad joke." But there's much more impact to hearing GPT-4o deliver that same information in the video, complete with the gentle laughter and rising and falling vocal intonations of a lifelong friend.
Before launching, GPT-4o broke records on chatbot leaderboard under a secret name
On Monday, OpenAI employee William Fedus confirmed on X that a mysterious chart-topping AI chatbot known as "gpt2-chatbot," which had been undergoing testing on LMSYS's Chatbot Arena and frustrating experts, was, in fact, OpenAI's newly announced GPT-4o AI model. He also revealed that GPT-4o had topped the Chatbot Arena leaderboard, achieving the highest documented score ever.
"GPT-4o is our new state-of-the-art frontier model. We've been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot," Fedus tweeted.
Chatbot Arena is a website where visitors converse with two random AI language models side by side without knowing which model is which, then choose which model gives the best response. It's a perfect example of vibe-based AI benchmarking, as AI researcher Simon Willison calls it.
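Leaderboards built from blind head-to-head votes like these are typically scored with an Elo-style rating system, where each vote nudges the winner's score up and the loser's down in proportion to how surprising the result was. Here is a minimal, illustrative sketch of that update rule (Chatbot Arena's actual statistical methodology is more involved; the function names and the K-factor of 32 are assumptions for this example):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return both models' new ratings after one head-to-head vote."""
    e_a = expected_score(r_a, r_b)          # how likely A's win was
    s_a = 1.0 if a_won else 0.0             # actual outcome for A
    delta = k * (s_a - e_a)                 # bigger upsets move ratings more
    return r_a + delta, r_b - delta

# Two models start at 1000; model A wins three straight votes,
# so its rating climbs while B's falls by the same amount.
a, b = 1000.0, 1000.0
for _ in range(3):
    a, b = update(a, b, a_won=True)
```

Because each vote transfers rating points between exactly two models, the scheme works without knowing anything about the models themselves, which is what makes it a natural fit for blind "vibe-based" comparisons.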
OpenAI Unveils New ChatGPT That Listens, Looks and Talks
"Well, you seem like a person, but you're just a voice in a computer"
Major ChatGPT-4o update allows audio-video talks with an "emotional" AI chatbot
On Monday, OpenAI debuted GPT-4o (o for "omni"), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input. It operates faster than OpenAI's previous best model, GPT-4 Turbo, and will be free for ChatGPT users and available as a service through OpenAI's API, rolling out over the next few weeks, the company says.
OpenAI revealed the new audio conversation and vision comprehension capabilities in a YouTube livestream titled "OpenAI Spring Update," presented by OpenAI CTO Mira Murati and employees Mark Chen and Barret Zoph, which included live demos of GPT-4o in action.
OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag of previous models. With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio so that all inputs and outputs "are processed by the same neural network."
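A latency figure like that 320 ms average is essentially a time-to-first-chunk measurement on a streamed response: start a timer when the request goes out and stop it when the first piece of output arrives. The sketch below shows one way to take that measurement; the harness and a stand-in generator are hypothetical (this is not OpenAI's benchmark code), and with a real streaming client you would pass its response iterator in place of the fake stream:

```python
import time
from typing import Iterator, Tuple

def time_to_first_chunk(stream: Iterator[str]) -> Tuple[float, str]:
    """Measure seconds elapsed until the stream yields its first chunk."""
    start = time.perf_counter()
    first = next(stream)                    # blocks until the first chunk
    return time.perf_counter() - start, first

def fake_model_stream(delay_s: float) -> Iterator[str]:
    """Stand-in for a model's streamed reply: waits, then yields chunks."""
    time.sleep(delay_s)
    yield "Hello"
    yield ", world"

# Simulate a model with a 50 ms time-to-first-chunk.
latency, first = time_to_first_chunk(fake_model_stream(0.05))
```

Because generators are lazy, `fake_model_stream` does not start sleeping until `next()` is called inside the timed region, so the measured latency includes the full delay before the first chunk.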
Apple Will Revamp Siri to Catch Up to Its Chatbot Competitors
Criminal Use of AI Growing, But Lags Behind Defenders
When not scamming one another, criminals are concentrating on using mainstream AI products rather than developing their own AI systems.