
Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com

Source: www.securityweek.com – Author: Ionut Arghire. Cloud security startup Averlon has emerged from stealth mode with $8 million in seed funding, which brings the total raised by the company to $10.5 million. The new investment round was led by Voyager Capital, with additional funding from Outpost Ventures, Salesforce Ventures, and angel investors. Co-founded by […]

The post Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

23 May 2024 at 14:27
Scarlett Johansson attends the Golden Heart Awards in 2023. (credit: Sean Zanni / Contributor | Patrick McMullan)

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson's voice when seeking an actor for ChatGPT's "Sky" voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman's defense against Johansson's claims that Sky was made to sound "eerily similar" to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.


UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes

The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products. The warning comes on the back of the conclusion of an investigation by the U.K.'s Information Commissioner's Office (ICO) into Snap, Inc.'s launch of the 'My AI' chatbot. The investigation focused on the company's approach to assessing data protection risks. The ICO's early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat's 'My AI' chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI was an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share over 4.75 billion Snaps per day. The My AI bot uses OpenAI's GPT technology to answer questions, provide recommendations, and chat with users. It can respond to typed or spoken information and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers from February 27, 2023, 'My AI' was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6, over a "potential failure" to assess privacy risks to several million 'My AI' users in the UK, including children aged 13 to 17.

"The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI," said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
On the basis of the ICO's investigation that followed, Snap took substantial measures to perform a more comprehensive risk assessment for 'My AI' and demonstrated to the ICO that it had implemented suitable mitigations.

"The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed," the data watchdog said.

Snapchat has made it clear that "While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful."

The social media platform has integrated safeguards and tools, such as blocking results for certain keywords like "drugs," as is the case with the original Snapchat app. "We're also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen's usage of My AI," the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO's extensive guidance on data protection and AI.

The ICO's investigation into Snap's 'My AI' chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals' data privacy and protection rights. The final Commissioner's decision regarding Snap's 'My AI' chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Generative AI’s Game-Changing Impact on InsurTech

By Sachin Panicker, Chief AI Officer, Fulcrum Digital

Over the past year, Generative AI has gained prominence in discussions around Artificial Intelligence due to the emergence of advanced large multimodal models such as OpenAI's GPT-4 and Google's Gemini 1.5 Pro. Across verticals, organizations have been actively exploring Generative AI applications for their business functions. The excitement around the technology, and its vast untapped potential, is reflected in Bloomberg's prediction that Generative AI will become a USD 1.3 trillion market by 2032. Insurance is one of the key sectors where Generative AI is expected to have a revolutionary impact – enhancing operational efficiency, improving service delivery, and elevating customer experience. From automating claims processing to predictive risk assessment, let us take a deeper look at some of the Generative AI use cases that will redefine InsurTech in the years ahead.

Automated and Efficient Claims Settlement

Lengthy and complex claims settlement processes have long been a pain point for insurance customers. Generative AI addresses this by streamlining the claims process through seamless automation. AI analyzes images or other visual data to generate damage assessments. It can extract and analyze relevant information from documents such as invoices, medical records, and insurance policies – enabling it to swiftly determine the validity of the claim, as well as the coverage, and expedite the settlement. This serves to improve process efficiency, reduce the administrative burden on staff, and significantly boost customer satisfaction.
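
To make this concrete, below is a minimal Python sketch of LLM-driven claims triage using the OpenAI Python SDK. The prompt, the gpt-4o model choice, the JSON schema, and the coverage check are illustrative assumptions, not a description of any real insurer's pipeline:

```python
# Hypothetical sketch: extract structured claim fields from a free-text
# document with an LLM, then apply a simple validity check. The model,
# prompt, and schema are assumptions for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Extract the following fields from this insurance claim document as JSON: "
    "policy_number, claim_amount (number), incident_date (YYYY-MM-DD), "
    "damage_description. Return only JSON.\n\nDocument:\n{doc}"
)

def extract_claim_fields(document_text: str) -> dict:
    """Ask the model for a structured summary of a free-text claim."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT.format(doc=document_text)}],
    )
    return json.loads(response.choices[0].message.content)

def is_within_coverage(claim: dict, coverage_limit: float) -> bool:
    """Toy validity check: the claim amount must not exceed the policy limit."""
    return float(claim["claim_amount"]) <= coverage_limit
```

In a real deployment the extracted fields would feed downstream validation and settlement systems; the point of the sketch is only the extract-then-check shape of the workflow.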

Optimized Underwriting and Streamlining Risk Assessment

Underwriting is another key area where this technology can create immense value for insurance firms. With their ability to analyze vast amounts of data, Generative AI models build comprehensive risk assessment frameworks that enable them to swiftly identify patterns and highlight potential risks. The technology automates the evaluation of a policy applicant's data, including submitted medical and financial records, to determine the appropriate coverage and premium. Leveraging AI, underwriters are empowered to better assess risks and make more informed decisions. By reducing manual effort, minimizing the possibility of human error, and ensuring both accuracy and consistency in risk assessment, Generative AI is poised to play a pivotal role in optimizing underwriting processes.
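
As a deliberately simplified illustration of automated coverage and premium determination, here is a toy Python calculation. The base premium, risk factors, and multipliers are invented for this sketch and bear no resemblance to real actuarial models:

```python
# Toy premium calculation from underwriting features. All factors and
# weights are invented for illustration; real actuarial models are far
# more involved and regulated.
BASE_PREMIUM = 500.0

RISK_FACTORS = {
    "smoker": 1.5,
    "prior_claims": 1.2,       # applied once per prior claim
    "high_risk_occupation": 1.3,
}

def annual_premium(applicant: dict) -> float:
    """Multiply the base premium by each applicable risk factor."""
    premium = BASE_PREMIUM
    if applicant.get("smoker"):
        premium *= RISK_FACTORS["smoker"]
    premium *= RISK_FACTORS["prior_claims"] ** applicant.get("prior_claims", 0)
    if applicant.get("occupation") in {"roofer", "logger"}:
        premium *= RISK_FACTORS["high_risk_occupation"]
    return round(premium, 2)

print(annual_premium({"smoker": True, "prior_claims": 2}))  # 1080.0
```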

Empowering Predictive Risk Assessment

Generative AI's ability to process and analyze complex data is immensely valuable in terms of building capabilities for predictive risk assessment. Analyzing real-time and historical data, and identifying emerging patterns and trends, the technology enables insurers to develop more sophisticated risk assessment models that factor in a wide range of parameters – past consumer behavior, economic indicators, and weather patterns, to name a few. These models allow insurers to assess the probability of specific claims, for instance those related to property damage or automobile accidents. Moreover, the predictive capabilities of Generative AI help insurers offer more tailored coverage and align their pricing strategies with a dynamic environment.

The ongoing risk monitoring and early detection of potential issues that the technology facilitates can also prove highly effective for fraud prevention. Through continuous analysis of data streams, AI identifies subtle changes and anomalous patterns that might be indicative of fraudulent activity. This empowers insurers to take proactive measures to identify possible fraudsters, prevent fraud, and mitigate potential losses. The robust predictive risk assessment capabilities offered by Generative AI thus serve to strengthen insurers' business models, secure their services against fraud and other risks, and enhance customer trust and confidence in the coverage provided.
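
The article frames this anomaly flagging as a Generative AI capability; as a stand-in that shows the underlying pattern, here is a short sketch using scikit-learn's IsolationForest, a classic anomaly detector. The features, sample data, and contamination setting are all assumptions:

```python
# Sketch of anomaly-based fraud screening over claim records using
# scikit-learn's IsolationForest. Features, data, and threshold are
# illustrative assumptions, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [claim_amount, days_since_policy_start, claims_last_year]
historical_claims = np.array([
    [1200, 400, 0],
    [900, 710, 1],
    [1500, 365, 0],
    [1100, 530, 1],
    [980, 820, 0],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(historical_claims)

new_claims = np.array([
    [1050, 600, 1],    # looks routine
    [25000, 12, 4],    # large claim on a brand-new policy: suspicious
])
flags = detector.predict(new_claims)  # -1 = anomaly, 1 = normal
for claim, flag in zip(new_claims, flags):
    status = "review for possible fraud" if flag == -1 else "normal"
    print(claim, "->", status)
```

Flagged claims would go to a human investigator rather than being auto-denied; anomaly scores are a triage signal, not proof of fraud.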

Unlocking Personalized Customer Service

In a digitally driven world, personalization has emerged as a powerful tool to effectively engage customers and elevate their overall experience. By analyzing vast amounts of consumer data, including interactions across the insurer's digital touchpoints, Generative AI gains insights into consumer behavior and preferences, which in turn enables it to personalize future customer service interactions. For instance, by analyzing customer profiles, historical data, and various other factors, AI can make personalized policy recommendations tailored to an individual customer's specific needs, circumstances, and risk profile. Simulating human-like conversation convincingly, Generative AI can also engage with customers across an insurer's support channels, resolving queries and providing guidance or making recommendations based on their requirements. The personal touch that Generative AI brings to customer engagement, as compared to other more impersonal digital interfaces, coupled with the valuable tailored insights and offerings it provides, will go a long way towards helping insurers build long-term relationships with policyholders.
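
A minimal sketch, assuming the OpenAI Python SDK, of how a profile-conditioned recommendation call might look. The profile fields, system prompt, and model name are hypothetical:

```python
# Hypothetical sketch: condition an LLM's recommendations on a customer
# profile via the system prompt. Profile fields and prompt wording are
# invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend_policies(profile: dict, question: str) -> str:
    """Answer a customer question with the profile injected as context."""
    system = (
        "You are an insurance assistant. Tailor recommendations to this "
        f"customer profile: {profile}. Be concise and note any exclusions."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

profile = {"age": 34, "home_owner": True, "prior_claims": 0, "region": "coastal"}
print(recommend_policies(profile, "What coverage should I consider?"))
```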

Charting a Responsible Course with Generative AI in Insurance

The outlook for Generative AI across sectors looks bright, and insurance is no exception to the trend. Insurance firms that embrace the technology, and effectively integrate it into their operations, will certainly gain a significant competitive advantage through providing innovative solutions, streamlining processes, and maximizing customer satisfaction. This optimism, however, must be tempered with an acknowledgment of concerns among industry stakeholders, and the public at large, around data privacy and the ethics of AI-driven decision-making. Given that insurance is a sector heavily reliant on sustained consumer trust, it is essential for leaders to address these concerns and chart a course towards responsible AI adoption, in order to truly reap the benefits of the technology and usher in a bold new era of InsurTech.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.

User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files, and usage information. Customers are opted in by default and must ask to be excluded.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.

Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

16 May 2024 at 17:43
Screenshot from the documentary Who Is Bobby Kennedy? (credit: whoisbobbykennedy.com)

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy's documentary and making it harder to support Kennedy's candidacy. This allegedly has caused "substantial donation losses," while also violating the free speech rights of Kennedy, his supporters, and his film's production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms' automated spam filters.


Downranking won’t stop Google’s deepfake porn problem, victims say

14 May 2024 at 18:00

After backlash over Google's search engine becoming the primary traffic source for deepfake porn websites, Google has started burying these links in search results, Bloomberg reported.

Over the past year, Google has been driving millions of visitors to controversial sites distributing AI-generated pornography that depicts real people in fake sex videos created without their consent, Similarweb found. While anyone can be targeted – police are already bogged down dealing with a flood of fake AI child sex images – female celebrities are the most common victims. Their fake non-consensual intimate imagery is easily discoverable on Google by searching just about any famous name alongside the keyword "deepfake," Bloomberg noted.

Google refers to this content as "involuntary fake" or "synthetic pornography." The search engine provides a path for victims to report that content whenever it appears in search results. And when processing these requests, Google also removes duplicates of any flagged deepfakes.


Google Debuts New Security Products, Hyping AI and Mandiant Expertise

6 May 2024 at 12:50

Google rolls out new threat-intel and security operations products and looks to the magic of AI to tap into the booming cybersecurity market.

The post Google Debuts New Security Products, Hyping AI and Mandiant Expertise appeared first on SecurityWeek.
