
Netcraft Uses Its AI Platform to Trick and Track Online Scammers

13 June 2024 at 14:00

At the RSA Conference last month, Netcraft introduced a generative AI-powered platform designed to interact with cybercriminals to gain insights into the operations of the conversational scams they’re running and disrupt their attacks. At the time, Ryan Woodley, CEO of the London-based company that offers a range of services from phishing detection to brand, domain, …

The post Netcraft Uses Its AI Platform to Trick and Track Online Scammers appeared first on Security Boulevard.

Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

Generative AI and Data Privacy: Navigating the Complex Landscape


By Neelesh Kripalani, Chief Technology Officer, Clover Infotech

Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

Failure to secure data during collection or processing: Generative AI raises significant data privacy concerns because it needs vast amounts of diverse data, often including sensitive personal information that is collected without explicit consent and is difficult to anonymize effectively. Model inversion attacks and data leakage can expose private information, while biases in training data can lead to unfair or discriminatory outputs.

The risk of generated content: The ability of generative AI to produce highly realistic fake content raises serious concerns about misuse. Whether the output is a convincing deepfake video or fabricated text and images, there is a significant risk of such content being used for impersonation, spreading disinformation, or damaging individuals’ reputations.

Lack of accountability and transparency: Because GenAI models operate through complex layers of computation, it is difficult to gain visibility into how these systems arrive at their outputs, or to track the specific steps and factors behind a particular decision. This hinders trust and accountability, complicates the tracing of data usage, and makes it tedious to ensure compliance with data privacy regulations. Addressing these issues requires improved explainability and traceability, along with adherence to regulatory frameworks and ethical guidelines.

Lack of fairness and ethical safeguards: Generative AI models can perpetuate or even exacerbate existing biases present in their training data. This can lead to unfair treatment or misrepresentation of certain groups, raising ethical issues, and highly realistic but fake content such as deepfakes further undermines content authenticity and verification.

Here’s How Enterprises Can Navigate These Challenges

Understand and map the data flow: Enterprises must maintain a comprehensive inventory of the data their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data flow map to understand how data moves through their systems.

Implement strong data governance: In line with the data minimization principle, enterprises must collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. They should also develop and enforce robust data privacy policies and procedures that comply with relevant regulations.

Ensure data anonymization and pseudonymization: Techniques such as anonymization and pseudonymization can be applied to reduce the chances of data re-identification (a minimal sketch of one such technique follows this article).

Strengthen security measures: Implement encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices and strive to use data responsibly and ethically. They should also train employees regularly on data privacy best practices so they can manage the challenges posed by generative AI while leveraging its benefits responsibly.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
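Following up on the pseudonymization recommendation above: this is a minimal, hypothetical sketch of keyed pseudonymization in Python, not anything from the article. The HMAC-SHA256 approach, the key handling, and the field names are all assumptions chosen for illustration.

```python
import hashlib
import hmac

# Assumption: the key comes from a secrets manager and is stored
# separately from the data it protects.
SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Map a personal identifier to a stable, opaque pseudonym.

    A keyed hash (HMAC-SHA256) keeps the mapping consistent, so records
    can still be joined for analytics, while preventing re-identification
    by anyone who does not hold the key.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record; the field names are illustrative only.
record = {"user_id": "alice@example.com", "prompt": "plan my holiday"}
record["user_id"] = pseudonymize(record["user_id"])
print(record)  # user_id is now an opaque 64-character hex token
```

Unlike plain hashing, the keyed variant resists dictionary attacks on low-entropy identifiers such as email addresses, which is why it is a common pseudonymization baseline.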

Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com


Source: www.securityweek.com – Author: Ionut Arghire. Cloud security startup Averlon has emerged from stealth mode with $8 million in seed funding, which brings the total raised by the company to $10.5 million. The new investment round was led by Voyager Capital, with additional funding from Outpost Ventures, Salesforce Ventures, and angel investors. Co-founded by […]

The post Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes


The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence, and has recommended ironing out these issues before such products are released to the public. The warning follows the conclusion of an investigation by the U.K.’s Information Commissioner’s Office (ICO) into Snap Inc.’s launch of the ‘My AI’ chatbot, which focused on the company’s approach to assessing data protection risks. The ICO’s early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI is an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share over 4.75 billion Snaps a day on average. The bot uses OpenAI’s GPT technology to answer questions, provide recommendations, and chat with users. It can respond to typed or spoken input and can search databases to find details and formulate a response. Initially available only to Snapchat+ subscribers from February 27, 2023, ‘My AI’ was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6 over a “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK, including children aged 13 to 17. “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
Following the ICO’s investigation, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’ and demonstrated to the ICO that it had implemented suitable mitigations. “The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said.

Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” The social media platform has integrated safeguards such as blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.
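Snap has not published how its keyword blocking is implemented, so the following is only a rough sketch of the general technique the article describes: screening user input against a blocklist before it reaches the model. The function names and the one-word blocklist are hypothetical.

```python
import re

# Hypothetical blocklist; a production list would be curated and far larger.
BLOCKLIST = {"drugs"}

def is_blocked(text: str) -> bool:
    """Return True if any blocklisted keyword appears as a whole word."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in BLOCKLIST for token in tokens)

def respond(user_message: str) -> str:
    """Refuse blocklisted queries before they ever reach the model."""
    if is_blocked(user_message):
        return "Sorry, I can't help with that."
    return generate_reply(user_message)

def generate_reply(user_message: str) -> str:
    # Stand-in for the actual model call, which is not public.
    return f"(model reply to: {user_message!r})"

print(respond("tell me about drugs"))    # refused by the filter
print(respond("tell me about dragons"))  # passed through to the model
```

Tokenizing on word boundaries keeps “drugstore” from tripping the “drugs” rule, a small example of why real safety filters need more nuance than literal substring matching.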

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI.

The investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals’ privacy and data protection rights. The final Commissioner’s decision regarding Snap’s ‘My AI’ chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files, and usage information. Training is enabled by default; customers must actively opt out.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.
