AI trained on photos from kids’ entire childhood without their consent

10 June 2024 at 18:37

(Image credit: RicardoImagen | E+)

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

The practice poses urgent privacy risks to children and appears to heighten the risk of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
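To make that distinction concrete, here is a minimal sketch of what a LAION-style metadata record might look like and how a training pipeline would fetch the image it points to. The field names and values are illustrative assumptions, not the exact LAION-5B schema.

```python
# Illustrative LAION-style metadata record: the dataset distributes
# URL/caption pairs, not the photos themselves. Field names and values
# are assumptions for illustration, not the exact LAION-5B schema.
import urllib.request

record = {
    "url": "https://example.com/album/photo123.jpg",  # externally hosted image
    "caption": "birthday party, 2014",                # text scraped alongside it
}

def fetch_image(rec: dict) -> bytes:
    """Download the image bytes a record points to, as a training pipeline would."""
    with urllib.request.urlopen(rec["url"]) as resp:
        return resp.read()
```

This is also why removing an entry from the dataset does not remove the photo from the web: the image stays wherever it was originally posted.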


Generative AI and Data Privacy: Navigating the Complex Landscape


By Neelesh Kripalani, Chief Technology Officer, Clover Infotech

Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

Not securing data during collection or processing - Generative AI raises significant data privacy concerns because it needs vast amounts of diverse data, often including sensitive personal information, collected without explicit consent and difficult to anonymize effectively (a minimal re-identification sketch follows this list). Model inversion attacks and data leakage risks can expose private information, while biases in training data can lead to unfair or discriminatory outputs.

The risk of generated content - The ability of generative AI to produce highly realistic fake content raises serious concerns about misuse. Whether creating convincing deepfake videos or generating fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals' reputations.

Lack of accountability and transparency - Because GenAI models operate through complex layers of computation, it is difficult to see how these systems arrive at their outputs or to trace the specific steps and factors behind a particular decision. This hinders trust and accountability, complicates the tracing of data usage, and makes it tedious to ensure compliance with data privacy regulations. Addressing these issues requires improved explainability, traceability, and adherence to regulatory frameworks and ethical guidelines.

Lack of fairness and ethical considerations - Generative AI models can perpetuate or even exacerbate existing biases present in their training data, leading to unfair treatment or misrepresentation of certain groups and raising ethical issues.
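To illustrate why effective anonymization is hard, here is a minimal re-identification sketch: two datasets with direct identifiers removed can often be joined on quasi-identifiers such as ZIP code, birth date, and sex. All data below is invented for illustration.

```python
# Minimal re-identification sketch: "anonymized" records can often be
# linked back to a named individual via quasi-identifiers. All data is fake.
anonymized_health = [
    {"zip": "02139", "birth": "1985-03-02", "sex": "F", "diagnosis": "asthma"},
]
public_voter_roll = [
    {"zip": "02139", "birth": "1985-03-02", "sex": "F", "name": "Jane Doe"},
]

for h in anonymized_health:
    for v in public_voter_roll:
        # (ZIP, birth date, sex) is unique for a large share of the population
        if (h["zip"], h["birth"], h["sex"]) == (v["zip"], v["birth"], v["sex"]):
            print(f'{v["name"]} -> {h["diagnosis"]}')  # re-identified
```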

Here’s How Enterprises Can Navigate These Challenges

Understand and map the data flow - Enterprises must maintain a comprehensive inventory of the data their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data-flow map to understand how data moves through their systems.

Implement strong data governance - In line with the data-minimization principle, enterprises must collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. They should also develop and enforce robust data privacy policies and procedures that comply with relevant regulations.

Ensure data anonymization and pseudonymization - Techniques such as anonymization and pseudonymization reduce the chances of data re-identification (see the sketch after this list).

Strengthen security measures - Implement encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices and strive to use data responsibly and ethically. They should also regularly train employees on data privacy best practices to manage the challenges posed by generative AI while leveraging its benefits responsibly.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
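As one concrete example of the pseudonymization step above, here is a minimal sketch that replaces direct identifiers with salted keyed hashes before records enter a GenAI pipeline. The field names and salt handling are illustrative assumptions, not a production design.

```python
# Minimal pseudonymization sketch: replace direct identifiers with keyed
# hashes before records reach a GenAI training or prompting pipeline.
# Field names and salt handling are illustrative assumptions.
import hashlib
import hmac

SECRET_SALT = b"store-me-in-a-vault"  # assumption: never hard-code in production

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: records stay joinable without exposing the PII."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane.doe@example.com", "ticket_text": "Cannot reset my password"}
safe_record = {
    "email": pseudonymize(record["email"]),  # pseudonym in place of the address
    "ticket_text": record["ticket_text"],    # free text still needs separate PII scrubbing
}
print(safe_record)
```

Deterministic hashing keeps records joinable across tables; note that free-text fields still need their own scrubbing pass, since a name or address inside the text would survive this step.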

SecOps Teams Shift Strategy as AI-Powered Threats, Deepfakes Evolve 

4 June 2024 at 17:13

An escalation in AI-based attacks requires security operations leaders to change cybersecurity strategies to defend against them.

The study found 61% of respondents had experienced a deepfake incident in the past year, with 75% of those attacks impersonating CEOs or other C-suite members.

The post SecOps Teams Shift Strategy as AI-Powered Threats, Deepfakes Evolve appeared first on Security Boulevard.

AI Threats, Cybersecurity Uses Outlined by Gartner Analyst


AI is a long way from maturity, but there are already offensive and defensive uses of AI technology that cybersecurity professionals should be watching, according to a presentation today at the Gartner Security & Risk Management Summit in National Harbor, Maryland. Jeremy D'Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that the large language models (LLMs) getting so much attention are "not intelligent." He cited one example in which ChatGPT was recently asked to name the most severe CVE (Common Vulnerabilities and Exposures entry) of 2023, and the chatbot's response was essentially nonsense (screenshot below).

[Screenshot: ChatGPT security prompt and response (source: Gartner)]

Deepfakes Top AI Threats

Despite the lack of sophistication in LLM tools thus far, D'Hoinne noted one area where AI threats should be taken seriously: deepfakes. "Security leaders should treat deepfakes as an area of immediate focus because the attacks are real, and there is no reliable detection technology yet," D'Hoinne said. Deepfakes aren't as easy to defend against as more traditional phishing attacks, which can be addressed by user training. Stronger business controls, such as approvals over spending and finances, are essential, he said. He recommended stronger business workflows, a security behavior and culture program, biometric controls, and updated IT processes.
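As a rough illustration of the kind of business control described above, here is a minimal sketch of a dual-approval rule for large transfers, so that a convincing deepfake call from "the CEO" alone cannot move money. The threshold, roles, and approval logic are assumptions for illustration, not Gartner's prescription.

```python
# Minimal dual-approval sketch: large transfers need two distinct approvers,
# so one spoofed voice or video call cannot authorize a payment on its own.
# Threshold and roles are illustrative assumptions.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # currency units; assumption for illustration

@dataclass
class TransferRequest:
    amount: int
    requested_by: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requested_by:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        # small transfers need one approval; large ones need two distinct approvers
        required = 2 if self.amount >= APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required

req = TransferRequest(amount=250_000, requested_by="ceo@corp.example")
req.approve("cfo@corp.example")
print(req.can_execute())  # False: still needs a second independent approver
```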

AI Speeding Up Security Patching

One potential AI security use case D'Hoinne noted is patch management. He cited data suggesting that AI assistance could cut patching time in half by prioritizing patches according to threat level and probability of exploitation, and by checking and updating code, among other tasks. Other areas where GenAI security tools could help include alert enrichment and summarization, interactive threat intelligence, attack surface and risk overviews, security engineering automation, and mitigation assistance and documentation.

[Chart: AI code fixes (source: Gartner)]
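As a rough sketch of the prioritization idea, the snippet below ranks pending patches by blending severity (CVSS) with an EPSS-style probability of exploitation. The weighting scheme and sample data are assumptions for illustration, not Gartner's method.

```python
# Rough patch-prioritization sketch: rank patches by severity times
# likelihood of exploitation, boosted when the asset is exposed.
# Weights and sample data are illustrative assumptions.
patches = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "exploit_prob": 0.92, "exposed": True},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "exploit_prob": 0.04, "exposed": False},
    {"cve": "CVE-2024-0003", "cvss": 5.3, "exploit_prob": 0.61, "exposed": True},
]

def priority(p: dict) -> float:
    """Higher score = patch sooner; exposure doubles the urgency."""
    score = (p["cvss"] / 10) * p["exploit_prob"]
    return score * 2 if p["exposed"] else score

for p in sorted(patches, key=priority, reverse=True):
    print(f"{p['cve']}: priority={priority(p):.2f}")
```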

AI Security Recommendations

“Generative AI will not save or ruin cybersecurity,” D’Hoinne concluded. “How cybersecurity programs adapt to it will shape its impact.” Among his recommendations to attendees was to “focus on deepfakes and social engineering as urgent problems to solve,” and to “experiment with AI assistants to augment, not replace staff.” And outcomes should be measured based on predefined metrics for the use case, “not ad hoc AI or productivity ones.” Stay tuned to The Cyber Express for more coverage this week from the Gartner Security Summit.