Yesterday — 31 May 2024: Cybersecurity

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

31 May 2024 at 06:10

Altman spent part of his virtual appearance fending off thorny questions about governance, an AI voice controversy and criticism from ousted board members.

The post OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit appeared first on SecurityWeek.

Before yesterday: Cybersecurity

OpenAI Forms Another Safety Committee After Dismantling Prior Team – Source: www.darkreading.com

Source: www.darkreading.com – Author: Dark Reading Staff. OpenAI is forming a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee is being formed to make recommendations to the full board on safety […]

The post OpenAI Forms Another Safety Committee After Dismantling Prior Team – Source: www.darkreading.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI Announces Safety and Security Committee

OpenAI announced a new safety and security committee as it begins training a new AI model intended to replace the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, highlighting its role in advising the board on crucial safety and security decisions related to OpenAI’s projects and operations.

The creation of the committee comes amid ongoing debates about AI safety at OpenAI. The company faced scrutiny after Jan Leike, a researcher, resigned, criticizing OpenAI for prioritizing product development over safety. Following this, co-founder and chief scientist Ilya Sutskever also resigned, leading to the disbandment of the "superalignment" team that he and Leike co-led, which was focused on addressing AI risks.

Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety. The company expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days." 
The committee's initial task is to evaluate and further develop OpenAI’s existing processes and safeguards, with recommendations to the board expected within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations.

The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." Frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversations based on extensive datasets.

The company also recently launched its newest flagship model, GPT-4o (the 'o' stands for omni), a multilingual, multimodal generative pre-trained transformer. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free to use, with a usage limit five times higher for ChatGPT Plus subscribers. GPT-4o has a context window supporting up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.
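For a rough sense of what a 128,000-token context window holds, token counts can be estimated with a characters-per-token heuristic. This is an illustrative sketch, not OpenAI's tokenizer: the 4-characters-per-token ratio, the reserve figure, and the helper names are assumptions.

```python
# Rough illustration of what a 128,000-token context window means in practice.
# The 4-characters-per-token ratio is a common rule of thumb, not an exact figure.

CONTEXT_WINDOW = 128_000  # tokens supported by GPT-4o, per the article


def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate using an average characters-per-token ratio."""
    return max(1, round(len(text) / chars_per_token))


def fits_in_window(text: str, reserve_for_reply: int = 4_000) -> bool:
    """Check whether a prompt leaves room for a reply inside the window."""
    return estimate_tokens(text) + reserve_for_reply <= CONTEXT_WINDOW


document = "word " * 50_000          # 250,000 characters of input
print(estimate_tokens(document))     # 62500
print(fits_in_window(document))      # True: fits with room for a reply
```

By this estimate, a 128,000-token window corresponds to roughly 500,000 characters, which is why long documents and extended conversations stay coherent.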

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

28 May 2024 at 09:57

OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The post OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model appeared first on SecurityWeek.

Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk

By: Tom Eston
28 May 2024 at 09:32

While Red Teams can expose and root out organization specific weaknesses, there is another growing class of vulnerability at an industry level.

The post Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk appeared first on SecurityWeek.

Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy

By: Tom Eston
27 May 2024 at 00:00

Episode 331 of the Shared Security Podcast discusses privacy and security concerns related to two major technological developments: the introduction of 'Recall,' the new Windows PC feature that is part of Microsoft’s Copilot+ and captures desktop screenshots for AI-powered search tools, and Slack’s policy of using user data to train machine learning features with users opted in by […]

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Shared Security Podcast.

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Security Boulevard.

Did OpenAI Illegally Mimic Scarlett Johansson’s Voice? – Source: www.govinfosecurity.com

Source: www.govinfosecurity.com – Author: Mathew J. Schwartz (euroinfosec) • May 21, 2024. Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development. Actor Said She Firmly Declined Offer From AI Firm to Serve as Voice of GPT-4o. Imagine these optics: A man asks a […]

The post Did OpenAI Illegally Mimic Scarlett Johansson’s Voice? – Source: www.govinfosecurity.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com

Source: www.securityweek.com – Author: Associated Press. The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide. Only one of seven bills aimed at preventing AI’s penchant to discriminate when making […]

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com

Source: www.securityweek.com – Author: Ionut Arghire. Cloud security startup Averlon has emerged from stealth mode with $8 million in seed funding, which brings the total raised by the company to $10.5 million. The new investment round was led by Voyager Capital, with additional funding from Outpost Ventures, Salesforce Ventures, and angel investors. Co-founded by […]

The post Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent – Source: www.securityweek.com

Source: www.securityweek.com – Author: Associated Press. Long before generative AI’s boom, a Silicon Valley firm contracted to collect and analyze non-classified data on illicit Chinese fentanyl trafficking made a compelling case for its embrace by U.S. intelligence agencies. The operation’s results far exceeded human-only analysis, finding twice as many companies and 400% more people […]

The post US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

7 best practices for tackling dangerous emails – Source: www.cybertalk.org

Source: www.cybertalk.org – Author: slandau. EXECUTIVE SUMMARY: Email is the #1 means of communication globally. It’s simple, affordable and easily available. However, email systems weren’t designed with security in mind. In the absence of first-rate security measures, email can become a hacker’s paradise, offering unfettered access to a host of tantalizingly lucrative opportunities. Optimize your […]

The post 7 best practices for tackling dangerous emails – Source: www.cybertalk.org appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses

24 May 2024 at 12:36

Only one of seven bills aimed at preventing AI’s penchant to discriminate when making consequential decisions — including who gets hired, money for a home or medical care — has passed.

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses appeared first on SecurityWeek.

Recall feature in Microsoft Copilot+ PCs raises privacy and security concerns – Source: securityaffairs.com

Source: securityaffairs.com – Author: Pierluigi Paganini. The UK data watchdog is investigating Microsoft regarding the new Recall feature in Copilot+ PCs, which captures screenshots of the user’s laptop every few seconds. The UK data watchdog, the Information Commissioner’s Office (ICO), is investigating a new feature, […]

The post Recall feature in Microsoft Copilot+ PCs raises privacy and security concerns – Source: securityaffairs.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

The Rise and Risks of Shadow AI

24 May 2024 at 13:20

 

Shadow AI, the internal use of AI tools and services without the express knowledge of enterprise oversight teams (IT, legal, cybersecurity, compliance, and privacy, to name a few), is becoming a problem!

Workers are flocking to third-party AI services (websites like ChatGPT), and savvy technologists are often importing models and building internal AI systems (it really is not that difficult) without telling the enterprise ops teams. Both situations are on the rise, and many organizations are blind to the risks.

According to a recent Cyberhaven report:

  • AI is Accelerating: Corporate data input into AI tools surged by 485%
  • Increased Data Risks: Sensitive data submissions jumped 156%, led by customer support data
  • Threats are Hidden: The majority of AI use on personal accounts lacks enterprise safeguards
  • Security Vulnerabilities: Increased risk of data breaches and exposure through AI tool use


The risks are real and the problem is growing.

Now is the time to get ahead of this problem:

1. Establish policies for use and development/deployment
2. Define and communicate an AI ethics posture
3. Incorporate cybersecurity, privacy, and compliance teams early into such programs
4. Drive awareness and compliance by including these AI topics in employee and vendor training


Overall, the goal is to build awareness and collaboration. Leveraging AI can bring tremendous benefits, but it should be done in a controlled way that aligns with enterprise oversight requirements.

"Do what is great, while it is small": a little effort now can help avoid serious mishaps in the future!
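One of the policy steps above, establishing and enforcing usage policies, can be partially automated with an egress check against an allowlist of approved AI services. A minimal sketch, where the allowlist and the host names in it are hypothetical:

```python
# Minimal sketch of one enforcement point for a shadow-AI policy: checking
# outbound request URLs against an allowlist of approved AI service hosts.
# The APPROVED_AI_HOSTS set and its entries are illustrative only.

from urllib.parse import urlparse

APPROVED_AI_HOSTS = {"internal-llm.example.com"}  # hypothetical approved service


def is_approved_ai_destination(url: str) -> bool:
    """Return True if the URL points at an approved AI service host."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS


print(is_approved_ai_destination("https://internal-llm.example.com/v1/chat"))  # True
print(is_approved_ai_destination("https://chat.openai.com/"))                  # False
```

In practice a check like this would live in a proxy or DLP gateway rather than application code, but the shape is the same: enumerate what is approved, and surface everything else to the oversight teams.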

The post The Rise and Risks of Shadow AI appeared first on Security Boulevard.

How the Internet of Things (IoT) became a dark web target – and what to do about it – Source: www.cybertalk.org

Source: www.cybertalk.org – Author: slandau. By Antoinette Hodes, Office of the CTO, Check Point Software Technologies. The dark web has evolved into a clandestine marketplace where illicit activities flourish under the cloak of anonymity. Due to its restricted accessibility, the dark web exhibits a decentralized structure with minimal enforcement of security controls, making it a […]

The post How the Internet of Things (IoT) became a dark web target – and what to do about it – Source: www.cybertalk.org appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Leading LLMs Insecure, Highly Vulnerable to Basic Jailbreaks

23 May 2024 at 17:16

“All tested LLMs remain highly vulnerable to basic jailbreaks, and some will provide harmful outputs even without dedicated attempts to circumvent their safeguards,” the report noted.

The post Leading LLMs Insecure, Highly Vulnerable to Basic Jailbreaks appeared first on Security Boulevard.

US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent

23 May 2024 at 13:19

U.S. intelligence agencies are scrambling to embrace the AI revolution, believing they’ll be smothered by exponential data growth as sensor-generated surveillance tech further blankets the planet.

The post US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent appeared first on SecurityWeek.

U.S. House Panel Takes On AI Security and Misuse


The difficulty of defending against the misuse of AI – and possible solutions – was the topic of a U.S. congressional hearing today. Data security and privacy officials and advocates were among those testifying before the House Committee on Homeland Security at a hearing titled “Advancing Innovation (AI): Harnessing Artificial Intelligence to Defend and Secure the Homeland.” The committee plans to include AI in legislation that it’s drafting, said chairman Mark E. Green (R-TN).

From cybersecurity and privacy threats to election interference and nation-state attacks, the hearing highlighted AI’s wide-ranging threats and the challenges of mounting a defense. Nonetheless, the four panelists at the hearing – representing technology and cybersecurity companies and a public interest group – put forth some ideas, both technological and regulatory.

Cybercrime Gets Easier

Much of the testimony – and concerns raised by committee members – focused on the advantages that AI has given cybercriminals and nation-state actors, advantages that cybersecurity officials say must be countered by increasingly building AI into products. “AI is democratizing the threat landscape by providing any aspiring cybercriminal with easy-to-use, advanced tools capable of achieving sophisticated outcomes,” said Ajay Amlani, senior vice president at biometric company iProov.
“The crime as a service dark web is very affordable. The only way to combat AI-based attacks is to harness the power of AI in our cybersecurity strategies.”
AI can also help cyber defenders make sense of the overwhelming amount of data and alerts they have to contend with, said Michael Sikorski, CTO of Palo Alto Networks’ Unit 42. “To stop the bad guys from winning, we must aggressively leverage AI for cyber defense,” said Sikorski, who detailed some of the “transformative results” customers have achieved from AI-enhanced products.
“Outcomes like these are necessary to stop threat actors before they can encrypt systems or steal sensitive information, and none of this would be possible without AI,” Sikorski added.
Sikorski said organizations must adopt “secure AI by design” principles and AI usage oversight. “Organizations will need to secure every step of the AI application development lifecycle and supply chain to protect AI data from unauthorized access and leakage at all times,” he said, noting that the principles align with the NIST AI risk management framework released last month.

Election Security and Disinformation Loom Large

Ranking member Bennie Thompson (D-MS) asked the panelists what can be done to improve election security and defend against interference, issues of critical importance in a presidential election year. Amlani said digital identity could play an important role in battling disinformation and interference, principles included in section 4.5 of President Biden’s National Cybersecurity Strategy that have yet to be implemented.
“Our country is one of the only ones in the western world that doesn't have a digital identity strategy,” Amlani said.
“Making sure that it's the right person, it's a real person that's actually posting and communicating, and making sure that that person is in fact right there at that time, is a very important component to make sure that we know who it is that's actually generating content online. There is no identity layer to the internet currently today.”

Safe AI Use Guidelines Proposed by Public Policy Advocate

The most detailed proposal for addressing the AI threat came from Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy and Technology, who argued that the “AI arms race” should proceed responsibly.
“Principles for responsible use of AI technologies should be applied broadly across development and deployment,” Laperruque said.
Laperruque gave the Department of Homeland Security credit for starting the process with its recently published AI roadmap. He said government use of AI should be based on seven principles:
  1. Built upon proper training data
  2. Subject to independent testing and high performance standards
  3. Deployed only within the bounds of the technology’s designed function
  4. Used exclusively by trained staff and corroborated by human review
  5. Subject to internal governance mechanisms that define and promote responsible use
  6. Bound by safeguards to protect human rights and constitutional values
  7. Regulated by institutional mechanisms for ensuring transparency and oversight
“If we rush to deploy AI quickly rather than carefully, it will harm security and civil liberties alike,” Laperruque concluded. “But if we establish a strong foundation now for responsible use, we can reap benefits well into the future.”

Lasso Security Data Protection Tool Aimed at GenAI Applications

22 May 2024 at 10:00

The custom policy wizard helps prevent data leaks in GenAI tools by using CDP, requires no coding, and offers adaptive, intuitive policies.

“The real threat is in unstructured data, the kind of problem that requires data scientists and developers to solve.”

The post Lasso Security Data Protection Tool Aimed at GenAI Applications appeared first on Security Boulevard.

Microsoft AI “Recall” feature records everything, secures far less

22 May 2024 at 05:14

Developing an AI-powered threat to security, privacy, and identity is certainly a choice, but it’s one that Microsoft was willing to make this week at its “Build” developer conference.

On Monday, the computing giant unveiled a new line of PCs that integrate Artificial Intelligence (AI) technology to promise faster speeds, enhanced productivity, and a powerful data collection and search tool that screenshots a device’s activity—including password entry—every few seconds.

This is “Recall,” a much-advertised feature within what Microsoft is calling its “Copilot+ PCs,” a reference to the AI assistant and companion which the company released in late 2023. With Recall on the new Copilot+ PCs, users no longer need to manage and remember their own browsing and chat activity. Instead, by regularly taking and storing screenshots of a user’s activity, the Copilot+ PCs can comb through that visual data to deliver answers to natural language questions, such as “Find the site with the white sneakers,” and “blue pantsuit with a sequin lace from abuelita.”
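A toy model of the data-retention shape Recall describes, periodic captures stored as searchable text records, can make the security concern concrete. This is an illustrative sketch, not Microsoft's implementation: the snapshot index and naive keyword search are invented for illustration, whereas Recall uses on-device AI over actual screenshots.

```python
# Toy model of a Recall-style local snapshot index: periodic captures are
# stored as (timestamp, extracted text) records and queried with a naive
# keyword match. Illustrates the retention shape, not Microsoft's design.

from dataclasses import dataclass


@dataclass
class Snapshot:
    timestamp: float
    text: str  # text extracted from the screenshot (e.g., via OCR)


index: list[Snapshot] = []


def capture(timestamp: float, text: str) -> None:
    """Simulate storing one periodic screen capture."""
    index.append(Snapshot(timestamp, text))


def search(query: str) -> list[Snapshot]:
    """Naive keyword search over everything ever captured."""
    terms = query.lower().split()
    return [s for s in index if all(t in s.text.lower() for t in terms)]


capture(1.0, "Checkout page: white sneakers, size 10")
capture(2.0, "Online banking: account balance")
print([s.timestamp for s in search("white sneakers")])  # [1.0]
```

The point of the sketch is that anything ever on screen becomes a queryable record, which is exactly why a compromised index is so valuable to an attacker.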

As any regularly updated repository of device activity poses an enormous security threat—imagine hackers getting access to a Recall database and looking for, say, Social Security Numbers, bank account info, and addresses—Microsoft has said that all Recall screenshots are encrypted and stored locally on a device.

But, in terms of security, that’s about all users will get, as Recall will not detect and obscure passwords, shy away from recording pornographic material, or turn a blind eye to sensitive information.

According to Microsoft:

“Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”
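The "cloaking password entry" Microsoft refers to is the standard HTML password field (`<input type="password">`), which masks typed characters; sites that collect passwords through ordinary text inputs give Recall nothing to key off. A sketch of checking a page for the standard field, using only the Python standard library (the class and function names are invented for illustration):

```python
# Detect whether markup uses the standard HTML password input, the signal
# a tool like Recall could key off to avoid capturing password entry.

from html.parser import HTMLParser


class PasswordFieldFinder(HTMLParser):
    """Flags any <input type="password"> start tag seen in the markup."""

    def __init__(self) -> None:
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.found = True


def has_standard_password_field(html: str) -> bool:
    finder = PasswordFieldFinder()
    finder.feed(html)
    return finder.found


print(has_standard_password_field('<input type="password" name="pw">'))  # True
print(has_standard_password_field('<input type="text" name="pw">'))      # False
```

A text input styled to look like a password box passes this check as `False`, which is precisely the gap Microsoft's note describes.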

The consequences of such a system could be enormous.

With Recall, a CEO’s personal laptop could become an even more enticing target for hackers equipped with infostealers, a journalist’s protected sources could be within closer grasp of an oppressive government that isn’t afraid to target dissidents with malware, and entire identities could be abused and impersonated by a separate device user.

In fact, Recall seems to only work best in a one-device-per-person world. Though Microsoft explained that its Copilot+ PCs will only record Recall snapshots to specific device accounts, plenty of people share devices and accounts. For the domestic abuse survivor who is forced to share an account with their abuser, for the victim of theft who—like many people—used a weak device passcode that can easily be cracked, and for the teenager who questions their identity on the family computer, Recall could be more of a burden than a benefit.

For Malwarebytes General Manager of Consumer Business Unit Mark Beare, Recall raises yet another issue:

“I worry that we are heading to a social media 2.0 like world.”

When users first raced to upload massive quantities of sensitive, personal data onto social media platforms more than 10 years ago, they couldn’t predict how that data would be scrutinized in the future, or how it would be scoured and weaponized by cybercriminals, Beare said.

“With AI there will be a strong pull to put your full self into a model (so it knows you),” Beare said. “I don’t think it’s easy to understand all the negative aspects of what can happen from doing that and how bad actors can benefit.”



Hackers Leverage AI as Application Security Threats Mount

21 May 2024 at 20:37

Reverse-engineering tools, rising jailbreaking activities, and the surging use of AI and ML to enhance malware development were among the worrying trends in a recent report.

AI and ML are making life easier for developers. They’re also making life easier for threat actors.

The post Hackers Leverage AI as Application Security Threats Mount appeared first on Security Boulevard.

Palo Alto Networks Looks for Growth Amid Changing Cybersecurity Market


After years of hypergrowth, Palo Alto Networks’ (PANW) revenue growth has been slowing, suggesting major shifts in cybersecurity spending patterns and raising investor concerns about the cybersecurity giant’s long-term growth potential. Even as overall cybersecurity spending is predicted to remain strong, Palo Alto’s revenue growth has dropped to roughly half of the 30% rate investors have enjoyed for the last several years.

Those concerns came to a head in February, when Palo Alto’s stock plunged 28% in a single day after the company slashed its growth outlook amid a move to “platformization,” essentially giving away some products in hopes of luring more customers to its broader platform. Investor caution continued yesterday after the company merely reaffirmed its financial guidance, suggesting the possibility of a longer road back to hypergrowth. PANW shares were down 3% in recent trading after initially falling 10% when the company’s latest earnings report was released yesterday.

Fortinet (FTNT), Palo Alto’s long-term network security rival, is also struggling amid cybersecurity market uncertainty, as analysts expect the company’s growth rate to slow from more than 30% to around 10%.

SIEM, AI Signal Major Market Shifts

The changes in cybersecurity spending patterns show up most clearly in SIEM market consolidation and AI cybersecurity tools. Buyers may be waiting to see what cybersecurity vendors do with AI. On the company’s earnings call late Monday, Palo Alto CEO Nikesh Arora told analysts that he expects the company “will be first to market with capabilities to protect the range of our customers' AI security needs.”

Seismic changes in the market for security information and event management (SIEM) systems are another sign of a rapidly changing cybersecurity market. Cisco’s (CSCO) acquisition of Splunk in March was just the start of major consolidation among legacy SIEM vendors. Last week, LogRhythm and Exabeam announced merger plans, and on the same day Palo Alto announced plans to acquire QRadar assets from IBM.

AI and platformization factored strongly into those announcements. Palo Alto will transition QRadar customers to its Cortex XSIAM next-gen security operations center (SOC) platform and will incorporate IBM’s watsonx large language models (LLMs) in Cortex XSIAM “to deliver additional Precision AI solutions.” Palo Alto will also become IBM’s preferred cybersecurity partner across cloud, network and SOC. Forrester analysts said of the Palo Alto-IBM deal, “This is the biggest concession of a SIEM vendor to an XDR vendor so far and signals a sea change for the threat detection and response market. Security buyers may be finally getting the SIEM alternative they’ve been seeking for years.”

The moves may yet be enough to return Palo Alto to better-than-expected growth, but one data point on Monday’s earnings call suggests buyers may be cautious. “We have initiated way more conversations in our platformization than we expected,” said Arora. “If meetings were a measure of outcome, they have gone up 30%, and a majority of them have been centered on platform opportunities.” It remains to be seen whether sales will follow the same growth trajectory as meetings.
For now, it’s clear that even as the overall cybersecurity market remains strong, the undercurrents suggest rapid changes in where that money is going.

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes


The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products. The warning comes on the back of the conclusion of an investigation by the U.K.’s Information Commissioner’s Office (ICO) into Snap, Inc.'s launch of the ‘My AI’ chatbot. The investigation focused on the company's approach to assessing data protection risks, and the ICO's early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI was an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share over 4.75 billion Snaps on an average day. The My AI bot uses OpenAI's GPT technology to answer questions, provide recommendations and chat with users. It can respond to typed or spoken information and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers since February 27, 2023, ‘My AI’ was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6 over a “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK, including children aged 13 to 17. “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
On the basis of the ICO’s investigation that followed, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’ and demonstrated to the ICO that it had implemented suitable mitigations. “The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said.

Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” The social media platform has integrated safeguards and tools, such as blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI.

The ICO’s investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals' data privacy and protection rights. The final Commissioner’s decision regarding Snap's ‘My AI’ chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Your vacation, reservations, and online dates, now chosen by AI: Lock and Code S05E11

20 May 2024 at 11:10

This week on the Lock and Code podcast…

The irrigation of the internet is coming.

For decades, we’ve accessed the internet much like how we, so long ago, accessed water—by traveling to it. We connected (quite literally), we logged on, and we zipped to addresses and sites to read, learn, shop, and scroll. 

Over the years, the internet was accessible from increasingly more devices, like smartphones, smartwatches, and even smart fridges. But still, it had to be accessed, like a well dug into the ground to pull up the water below.

Moving forward, that could all change.

This year, several companies debuted their vision of a future that incorporates Artificial Intelligence to deliver the internet directly to you, with less searching, less typing, and less decision fatigue. 

For the startup Humane, that vision includes the use of the company’s AI-powered, voice-operated wearable pin that clips to your clothes. By simply speaking to the AI pin, users can text a friend, discover the nutritional facts about food that sits directly in front of them, and even compare the prices of an item found in stores with the price online.

For a separate startup, Rabbit, that vision similarly relies on a small, attractive smart-concierge gadget, the R1. With the bright-orange slab designed in coordination by the company Teenage Engineering, users can hail an Uber to take them to the airport, play an album on Spotify, and put in a delivery order for dinner.

Away from physical devices, The Browser Company of New York is also experimenting with AI in its own web browser, Arc. In February, the company debuted its endeavor to create a “browser that browses for you” with a snazzy video that showed off Arc’s AI capabilities to create unique, individualized web pages in response to questions about recipes, dinner reservations, and more.

But all these small-scale projects, announced in the first month or so of 2024, had to make room a few months later for big-money interest from the world’s first internet conglomerate—Google. At the company’s annual Google I/O conference on May 14, Liz Reid, VP and Head of Google Search, pitched the audience on an AI-powered version of search in which “Google will do the Googling for you.”

Now, Reid said, even complex, multi-part questions can be answered directly within Google, with no need to click a website, evaluate its accuracy, or flip through its many pages to find the relevant information within.

This, it appears, could be the next phase of the internet… and our host David Ruiz has a lot to say about it.

Today, on the Lock and Code podcast, we bring back Director of Content Anna Brading and Cybersecurity Evangelist Mark Stockley to discuss AI-powered concierges, the value of human choice when so many small decisions could be taken away by AI, and, as explained by Stockley, whether the appeal of AI is not in finding the “best” vacation, recipe, or dinner reservation, but rather the best of anything for its user.

“It’s not there to tell you what the best chocolate chip cookie in the world is for everyone. It’s there to help you figure out what the best chocolate chip cookie is for you, on a Monday evening, when the weather’s hot, and you’re hungry.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Generative AI’s Game-Changing Impact on InsurTech

By Sachin Panicker, Chief AI Officer, Fulcrum Digital

Over the past year, Generative AI has gained prominence in discussions around Artificial Intelligence due to the emergence of advanced large multimodal models such as OpenAI's GPT-4 and Google’s Gemini 1.5 Pro. Across verticals, organizations have been actively exploring Generative AI applications for their business functions. The excitement around the technology, and its vast untapped potential, is reflected in Bloomberg's prediction that Generative AI will become a USD 1.3 trillion market by 2032. Insurance is one of the key sectors where Generative AI is expected to have a revolutionary impact, enhancing operational efficiency and service delivery and elevating customer experience. From automating claims processing to predictive risk assessments, let us take a deeper look at some of the Generative AI use cases that will redefine InsurTech in the years ahead.

Automated and Efficient Claims Settlement

Lengthy and complex claims settlement processes have long been a pain point for insurance customers. Generative AI addresses this by streamlining the claims process through seamless automation. AI analyzes images or other visual data to generate damage assessments. It can extract and analyze relevant information from documents such as invoices, medical records, and insurance policies – enabling it to swiftly determine the validity of the claim, as well as the coverage, and expedite the settlement. This serves to improve process efficiency, reduce the administrative burden on staff, and significantly boost customer satisfaction.
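The pipeline described above has two distinct stages: extracting structured fields from unstructured claim documents, and applying business rules to the result. The sketch below illustrates that shape in miniature. It is a toy illustration, not a production implementation: the regex-based extractor stands in for the generative model, and all field names, patterns, and thresholds are invented for the example.

```python
import json
import re

def extract_claim_fields(document_text: str) -> dict:
    """Toy stand-in for the AI extraction step: pull the fields a
    claims pipeline would typically request from the model."""
    fields = {}
    patterns = {
        "policy_number": r"Policy:\s*(\S+)",
        "claim_amount": r"Amount:\s*\$?([\d,.]+)",
        "incident_date": r"Date:\s*([\d-]+)",
    }
    for key, pattern in patterns.items():
        match = re.search(pattern, document_text)
        fields[key] = match.group(1) if match else None
    return fields

def validate_claim(fields: dict, coverage_limit: float) -> str:
    """Rule layer applied on top of the extracted fields."""
    if None in fields.values():
        return "manual_review"  # missing data -> route to a human adjuster
    amount = float(fields["claim_amount"].replace(",", ""))
    return "approved" if amount <= coverage_limit else "escalated"

invoice = "Policy: PN-1234\nAmount: $2,450.00\nDate: 2024-04-02"
fields = extract_claim_fields(invoice)
print(json.dumps(fields), validate_claim(fields, coverage_limit=5000))
```

In a real deployment, the extractor would be a call to a large multimodal model over the scanned invoice or medical record; the point of the separation is that the deterministic rule layer, not the model, makes the settlement decision.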

Optimized Underwriting and Streamlining Risk Assessment

Underwriting is another key area where this technology can create immense value for insurance firms. With their ability to analyze vast amounts of data, Generative AI models build comprehensive risk assessment frameworks that enable them to swiftly identify patterns and highlight potential risks. It automates evaluation of a policy applicant’s data, including medical and financial records submitted, in order to determine the appropriate coverage and premium. Leveraging AI, underwriters are empowered to better assess risks and make more informed decisions. By reducing manual effort, minimizing the possibility of human error, and ensuring both accuracy and consistency in risk assessment, Generative AI is poised to play a pivotal role in optimizing underwriting processes.

Empowering Predictive Risk Assessment

Generative AI’s ability to process and analyze complex data is immensely valuable for building predictive risk assessment capabilities. By analyzing real-time and historical data and identifying emerging patterns and trends, the technology enables insurers to develop more sophisticated risk assessment models that factor in a wide range of parameters – past consumer behavior, economic indicators, and weather patterns, to name a few. These models allow insurers to assess the probability of specific claims, for instance those related to property damage or automobile accidents. Moreover, the predictive capabilities of Generative AI help insurers offer more tailored coverage and align their pricing strategies with a dynamic environment.

The ongoing risk monitoring and early detection of potential issues that the technology facilitates can also prove highly effective when it comes to fraud prevention. Through continuous analysis of data streams, AI identifies subtle changes and anomalous patterns that might be indicative of fraudulent activity. This empowers insurers to take proactive measures to identify possible fraudsters, prevent fraud, and mitigate potential losses. The robust predictive risk assessment capabilities offered by Generative AI thus serve to strengthen insurers' business models, secure their services against fraud and other risks, and enhance customer trust and confidence in the coverage provided.
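The fraud-detection idea described above — flagging anomalous patterns in a stream of claim data — can be sketched in its simplest form. This is a deliberately crude illustration, not generative AI: it flags claims whose amount deviates from the historical mean by more than a chosen number of standard deviations, and every number in it is invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(claim_amounts, threshold=3.0):
    """Return the indices of claims whose amount deviates from the
    mean by more than `threshold` standard deviations -- a minimal
    stand-in for the pattern detection described above."""
    if len(claim_amounts) < 2:
        return []
    mu, sigma = mean(claim_amounts), stdev(claim_amounts)
    return [i for i, amount in enumerate(claim_amounts)
            if sigma and abs(amount - mu) / sigma > threshold]

history = [1200, 1350, 1100, 1280, 1320, 9800]  # last claim is an outlier
print(flag_anomalies(history, threshold=1.5))   # -> [5]
```

A production system would replace the z-score rule with a learned model over many features, but the workflow is the same: score incoming claims continuously and route the outliers for investigation before settlement.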

Unlocking Personalized Customer Service

In a digitally driven world, personalization has emerged as a powerful tool to effectively engage customers and elevate their overall experience. By analyzing vast amounts of consumer data, including interactions across the insurer’s digital touchpoints, Generative AI gains insights into consumer behavior and preferences, which in turn enables it to personalize future customer service interactions. For instance, by analyzing customer profiles, historical data, and various other factors, AI can make personalized policy recommendations tailored to an individual customer’s specific needs, circumstances, and risk profile. Simulating human-like conversation with near-perfection, Generative AI can also engage with customers across an insurer’s support channels, resolving queries and providing guidance or recommendations based on their requirements. The personal touch that Generative AI brings to customer engagement, as compared to other more impersonal digital interfaces, coupled with the valuable tailored insights and offerings it provides, will go a long way towards helping insurers build long-term relationships with policyholders.

Charting a Responsible Course with Generative AI in Insurance

The outlook for Generative AI across sectors looks bright, and insurance is no exception to the trend. Insurance firms that embrace the technology, and effectively integrate it into their operations, will certainly gain a significant competitive advantage through providing innovative solutions, streamlining processes, and maximizing customer satisfaction. This optimism, however, must be tempered with an acknowledgment of concerns among industry stakeholders, and the public at large, around data privacy and the ethics of AI-driven decision-making. Given that insurance is a sector heavily reliant on sustained consumer trust, it is essential for leaders to address these concerns and chart a course towards responsible AI adoption, in order to truly reap the benefits of the technology and usher in a bold new era of InsurTech. Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.

A Former OpenAI Leader Says Safety Has ‘Taken a Backseat to Shiny Products’ at the AI Company

17 May 2024 at 14:54

Jan Leike, who ran OpenAI’s “Super Alignment” team, believes there should be more focus on preparing for the next generation of AI models, including on things like safety.

The post A Former OpenAI Leader Says Safety Has ‘Taken a Backseat to Shiny Products’ at the AI Company appeared first on SecurityWeek.

An Analysis of AI usage in Federal Agencies

17 May 2024 at 13:54

Existing Regulations: As part of its guidance to agencies in the AI Risk Management Framework (AI RMF), the National Institute of Standards and Technology (NIST) recommends that an organization must have an inventory of its AI systems and models. An inventory is necessary from the perspective of risk identification and assessment, monitoring and auditing, and governance […]

The post An Analysis of AI usage in Federal Agencies appeared first on Security Boulevard.

User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files and usage information. It's opt-in by default.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.

Navigating the New Frontier of AI-Driven Cybersecurity Threats

15 May 2024 at 10:00

A few weeks ago, Best Buy revealed its plans to deploy generative AI to transform its customer service function. It’s betting on the technology to create “new and more convenient ways for customers to get the solutions they need” and to help its customer service reps develop more personalized connections with its consumers. By the […]

The post Navigating the New Frontier of AI-Driven Cybersecurity Threats appeared first on Security Boulevard.

Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review

15 May 2024 at 06:01

The group recommends that Congress draft emergency spending legislation to boost U.S. investments in artificial intelligence, including new R&D and testing standards to understand the technology's potential harms.

The post Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review appeared first on SecurityWeek.

The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024

15 May 2024 at 02:12

The 2024 RSA Conference can be summed up in two letters: AI. AI was everywhere. It was the main topic of more than 130 sessions. Almost every company with a booth in the Expo Hall advertised AI as a component in their solution. Even casual conversations with colleagues over lunch turned to AI. In 2023, … Continued

The post The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024 appeared first on DTEX Systems Inc.

The post The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024 appeared first on Security Boulevard.

Cybersecurity Concerns Surround ChatGPT 4o’s Launch; OpenAI Assures Beefed-Up Safety Measures

The field of Artificial Intelligence is rapidly evolving, and OpenAI's ChatGPT is a leader in this revolution. This groundbreaking large language model (LLM) redefined the expectations for AI. Just 18 months after its initial launch, OpenAI has released a major update: GPT-4o. This update widens the gap between OpenAI and its competitors, especially the likes of Google. OpenAI unveiled GPT-4o, with the "o" signifying "omni," during a live stream earlier this week. This latest iteration boasts significant advancements across various aspects. Here's a breakdown of the key features and capabilities of OpenAI's GPT-4o.

Features of GPT-4o

  • Enhanced Speed and Multimodality: GPT-4o operates at a faster pace than its predecessors and excels at understanding and processing diverse information formats – written text, audio, and visuals. This versatility allows GPT-4o to engage in more comprehensive and natural interactions.
  • Free Tier Expansion: OpenAI is making AI more accessible by offering some GPT-4o features to free-tier users. This includes the ability to access web-based information during conversations, discuss images, upload files, and even utilize enterprise-grade data analysis tools (with limitations). Paid users will continue to enjoy a wider range of functionalities.
  • Improved User Experience: The blog post accompanying the announcement showcases some impressive capabilities. GPT-4o can now generate convincingly realistic laughter, potentially pushing the boundaries of the uncanny valley and increasing user adoption. Additionally, it excels at interpreting visual input, allowing it to recognize sports on television and explain the rules – a valuable feature for many users.

However, despite the new features and capabilities, the potential misuse of ChatGPT is still on the rise. The new version, though deemed safer than its predecessors, remains vulnerable to exploitation and could be leveraged by hackers and ransomware groups for nefarious purposes. Addressing these security concerns, OpenAI shared a detailed post about the new and advanced security measures implemented in GPT-4o.

Security Concerns Surround ChatGPT 4o

The implications of ChatGPT for cybersecurity have been a hot topic of discussion among security leaders and experts as many worry that the AI software can easily be misused. Since its inception in November 2022, several organizations such as Amazon, JPMorgan Chase & Co., Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo and Verizon have restricted access or blocked the use of the program citing security concerns. In April 2023, Italy became the first country in the world to ban ChatGPT after accusing OpenAI of stealing user data. These concerns are not unfounded.

OpenAI Assures Safety

OpenAI reassured users that GPT-4o has "new safety systems to provide guardrails on voice outputs," plus extensive post-training and filtering of the training data to prevent ChatGPT from saying anything inappropriate or unsafe. GPT-4o was built in accordance with OpenAI's internal Preparedness Framework and voluntary commitments, and more than 70 external security researchers red-teamed it before its release.

In an article published on its official website, OpenAI states that its evaluations of cybersecurity do not score above “medium risk.”

“GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model’s behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories,” the post said.

“This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities,” it added.

OpenAI shared that it also employed the services of over 70 experts to identify risks and amplify safety. “GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they’re discovered,” it said.

The Cyber Express Sets the Stage to Host World CyberCon META Edition 2024 in Dubai 

This May, the heartbeat of the cybersecurity industry will resonate through Dubai, where The Cyber Express is set to host the much-anticipated third iteration of the World CyberCon META Edition 2024.

Scheduled for May 23, 2024, at Habtoor Palace Dubai, this premier event promises a comprehensive day filled with immersive experiences tailored to address the dynamic challenges and innovations in cybersecurity. This year’s theme, "Securing Middle East’s Digital Future: Challenges and Solutions," lays the foundation for a unique gathering that is crucial for any professional navigating the cybersecurity landscape.

The World CyberCon META Edition will feature a stellar lineup of more than 40 prominent Chief Information Security Officers (CISOs) and other cybersecurity leaders who will share invaluable insights and strategies. Notable speakers include:
  • Sithembile (Nkosi) Songo, CISO, ESKOM  
  • Dina Alsalamen, VP, Head of Cyber and Information Security Department, Bank ABC  
  • Anoop Kumar, Head of Information Security Governance Risk & Compliance, Gulf News  
  • Irene Corpuz, Cyber Policy Expert, Dubai Government Entity, Board Member, and Co-Founder, Women in Cyber Security Middle East (WiCSME)   
  • Abhilash Radhadevi, Head of Cybersecurity, OQ Trading  
  • Ahmed Nabil Mahmoud, Head of Cyber Defense and Security Operations, Abu Dhabi Islamic Bank 

The World CyberCon META Edition 2024

Highlights from the 2023 World CyberCon in Mumbai.

A Comprehensive Platform for Learning & Innovation

The World CyberCon META Edition 2024 promises a rich agenda with topics ranging from the nuances of national cybersecurity strategies to the latest in threat intelligence and protection against advanced threats. Discussions will span a variety of crucial subjects including:
  • Securing a Digital UAE: National Cybersecurity Strategy 
  • Predictive Cyber Threat Intelligence: Anticipating Tomorrow’s Attacks Today 
  • Navigating the Cyber Threat Terrain: Unveiling Innovative Approaches to Cyber Risk Scoring 
  • Fortifying Against Ransomware: Robust Strategies for Prevention, Mitigation, and Swift Recovery 
  • Strategic Investments in Cybersecurity: Leveraging AI and ML for Enhanced Threat Detection 
Who Should Attend?

The World CyberCon META Edition 2024 is tailored for CISOs, CIOs, CTOs, security auditors, heads of IT, cybercrime specialists, and network engineers. It’s an invaluable opportunity for those invested in the future of internet safety to gain insights, establish connections, and explore new business avenues.

Engage and Network

In addition to knowledge sessions, the conference will feature interactive workshops, an engaging exhibition zone, and plenty of networking opportunities. This event is set to honor the significant contributions of cybersecurity professionals and provide them with the recognition they deserve.

Secure Your Place

Don’t miss this unique chance to connect with leading professionals and gain insights from the forefront of cybersecurity. Reserve your spot at World CyberCon META Edition 2024 by visiting https://thecyberexpress.com/cyber-security-events/world-cybercon-3rd-edition-meta/.

More Information

For more details on event sponsorship opportunities and delegate passes, please contact Ashish Jaiswal at ashish.j@thecyberexpress.com.

About The Cyber Express

Stay informed with TheCyberExpress.com, your essential source for cybersecurity news, insights, and resources, dedicated to empowering you with the knowledge needed to protect your digital assets. Join us in shaping the digital future at World CyberCon META Edition 2024 in Dubai. Let’s secure tomorrow together!

Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools

By: Tom Eston
13 May 2024 at 00:00

In this first-ever in-person recording of Shared Security, Tom and Kevin, along with special guest Matt Johansen from Reddit, discuss their experience at the RSA conference in San Francisco, including their walk-through of ‘enhanced security’ and the humorous misunderstanding that ensued. The conversation moves to the ubiquity of AI and machine learning buzzwords at the […]

The post Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools appeared first on Shared Security Podcast.

The post Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools appeared first on Security Boulevard.
