
Netcraft Uses Its AI Platform to Trick and Track Online Scammers

13 June 2024 at 14:00
Tags: romance scams, generative AI, pig butchering

At the RSA Conference last month, Netcraft introduced a generative AI-powered platform designed to interact with cybercriminals, gain insight into the conversational scams they’re running, and disrupt their attacks. At the time, Ryan Woodley, CEO of the London-based company that offers a range of services from phishing detection to brand, domain…

The post Netcraft Uses Its AI Platform to Trick and Track Online Scammers appeared first on Security Boulevard.

Elon Musk is livid about new OpenAI/Apple deal

11 June 2024 at 16:50
(credit: Anadolu / Contributor | Anadolu)

Elon Musk is so opposed to Apple's plan to integrate OpenAI's ChatGPT with device operating systems that he's seemingly spreading misconceptions while heavily criticizing the partnership.

On X (formerly Twitter), Musk has been criticizing alleged privacy and security risks since the plan was announced Monday at Apple's annual Worldwide Developers Conference.

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies," Musk posted on X. "That is an unacceptable security violation." In another post responding to Apple CEO Tim Cook, Musk wrote, "Don't want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies."


Adobe to update vague AI terms after users threaten to cancel subscriptions

11 June 2024 at 13:06
(credit: bennymarty | iStock Editorial / Getty Images Plus)

Adobe has promised to update its terms of service to make it "abundantly clear" that the company will "never" train generative AI on creators' content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to these terms to access Adobe apps, disrupting access to creatives' projects unless they immediately accepted them.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a "yearly subscription comes with a significant discount."


AI trained on photos from kids’ entire childhood without their consent

10 June 2024 at 18:37
(credit: RicardoImagen | E+)

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW's report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed "less than 0.0001 percent" of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.


Meta uses “dark patterns” to thwart AI opt-outs in EU, complaint says

6 June 2024 at 17:25
(credit: Boris Zhitkov | Moment)

The European Center for Digital Rights, known as Noyb, has filed complaints in 11 European countries to halt Meta's plan to start training vague new AI technologies on European Union-based Facebook and Instagram users' personal posts and pictures.

Meta's AI training data will also be collected from third parties and from using Meta's generative AI features and interacting with pages, the company has said. Additionally, Meta plans to collect information about people who aren't on Facebook or Instagram but are featured in users' posts or photos. The only exception from AI training is made for private messages sent between "friends and family," which will not be processed, Meta's blog said, but private messages sent to businesses and Meta are fair game. And any data collected for AI training could be shared with third parties.

"Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta's new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future 'artificial intelligence technology,'" Noyb alleged in a press release.


Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

Can a technology called RAG keep AI models from making stuff up?

6 June 2024 at 07:00
(credit: Aurich Lawson | Getty Images)

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), which is a statistical gap-filling phenomenon AI language models produce when they are tasked with reproducing knowledge that wasn’t present in the training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may just be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.
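For readers unfamiliar with the mechanics, the retrieval-then-generate loop behind RAG can be sketched in a few lines. The toy corpus, keyword-overlap scorer, and prompt template below are illustrative assumptions, not any vendor's actual implementation:

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# The corpus, scoring function, and prompt wording are illustrative
# assumptions only.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not there, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "Air Canada's chatbot invented a refund policy in February 2024.",
    "RAG retrieves documents and adds them to the model's prompt.",
]
query = "What does RAG add to the prompt?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
print(prompt)
```

A production system would swap the keyword scorer for vector embeddings and pass the assembled prompt to an LLM, but the grounding idea is the same: the model is told to answer only from retrieved text, which narrows the room for confabulation.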


Top news app caught sharing “entirely false” AI-generated news

5 June 2024 at 16:57
(credit: gmast3r | iStock / Getty Images Plus)

After the most downloaded local news app in the US, NewsBreak, shared an AI-generated story about a fake New Jersey shooting last Christmas Eve, New Jersey police had to post a statement online to reassure troubled citizens that the story was "entirely false," Reuters reported.

"Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the cops' Facebook post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers."

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.


Generative AI and Data Privacy: Navigating the Complex Landscape

Tags: Generative AI

By Neelesh Kripalani, Chief Technology Officer, Clover Infotech Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

Failure to secure data during collection or processing – Generative AI raises significant data privacy concerns because it requires vast amounts of diverse data, often including sensitive personal information that is collected without explicit consent and is difficult to anonymize effectively. Model inversion attacks and data leakage can expose private information, while biases in training data can lead to unfair or discriminatory outputs.

Risks from generated content – The ability of generative AI to produce highly realistic fake content raises serious concerns about misuse. Whether through convincing deepfake videos or fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals’ reputations.

Lack of accountability and transparency – Because GenAI models operate through complex layers of computation, it is difficult to gain visibility into how these systems arrive at their outputs, or to track the specific steps and factors that led to a particular decision. This not only hinders trust and accountability but also complicates the tracing of data usage and makes it harder to ensure compliance with data privacy regulations. Addressing these issues requires improved explainability, traceability, and adherence to regulatory frameworks and ethical guidelines.

Lack of fairness and ethical considerations – Generative AI models can perpetuate or even exacerbate biases present in their training data, which can lead to unfair treatment or misrepresentation of certain groups.

Here’s How Enterprises Can Navigate These Challenges

Understand and map the data flow – Enterprises must maintain a comprehensive inventory of the data their GenAI systems process, including data sources, types, and destinations, and create a detailed data-flow map to understand how data moves through their systems.

Implement strong data governance – In line with the data minimization principle, enterprises should collect, process, and retain only the minimum personal data necessary to fulfill a specific purpose. They should also develop and enforce robust data privacy policies and procedures that comply with relevant regulations.

Ensure data anonymization and pseudonymization – Techniques such as anonymization and pseudonymization reduce the chances of data re-identification.

Strengthen security measures – Implement encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must comply with the latest data protection laws, use data responsibly and ethically, and regularly train employees on data privacy best practices so they can manage the challenges posed by Generative AI while leveraging its benefits.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, or individual.
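As a concrete illustration of the pseudonymization technique recommended above, here is a minimal sketch using a keyed HMAC, so the same identifier always maps to the same token but cannot be reversed without the key. The key, record fields, and 16-character truncation are assumptions for illustration, not any specific product's scheme:

```python
# Hedged sketch: keyed pseudonymization with HMAC-SHA256.
# The secret key and record fields are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # assumption: managed secret

def pseudonymize(value: str) -> str:
    """Deterministic keyed token: same input -> same token,
    but not reversible without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "ada@example.com", "prompt": "summarize my notes"}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)
```

Unlike a plain unsalted hash, the keyed construction means an attacker who knows the scheme still cannot re-identify values by hashing a dictionary of common emails without the secret key.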

Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com


Source: www.securityweek.com – Author: Ionut Arghire. Cloud security startup Averlon has emerged from stealth mode with $8 million in seed funding, which brings the total raised by the company to $10.5 million. The new investment round was led by Voyager Capital, with additional funding from Outpost Ventures, Salesforce Ventures, and angel investors. Co-founded by […]

The post Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

23 May 2024 at 14:27
Scarlett Johansson attends the Golden Heart Awards in 2023. (credit: Sean Zanni / Contributor | Patrick McMullan)

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson's voice when seeking an actor for ChatGPT's "Sky" voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman's defense against Johansson's claims that Sky was made to sound "eerily similar" to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.


UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes

Tags: ICO, ChatGPT, chatbot

The UK data watchdog has warned against ignoring data protection risks in generative artificial intelligence and recommended ironing out these issues before such products are released to the public.

The warning follows the conclusion of an investigation by the U.K.’s Information Commissioner’s Office (ICO) into Snap, Inc.’s launch of the ‘My AI’ chatbot, which focused on the company’s approach to assessing data protection risks. The ICO’s early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI is an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share, on average, over 4.75 billion Snaps a day.

The My AI bot uses OpenAI’s GPT technology to answer questions, provide recommendations, and chat with users. It can respond to typed or spoken input and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers from February 27, 2023, My AI was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6 over a “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK, including children aged 13 to 17.

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”

On the basis of the investigation that followed, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’ and demonstrated to the ICO that it had implemented suitable mitigations.

“The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said.

Snapchat has made clear that “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.”

The platform has integrated safeguards and tools such as blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.
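The keyword-blocking safeguard Snapchat describes can be sketched in a few lines. The blocklist, tokenizer, and refusal message below are assumptions for illustration, not Snap's actual implementation:

```python
# Illustrative sketch of keyword-based response blocking.
# Blocklist contents and refusal wording are assumptions only.
import re

BLOCKLIST = {"drugs"}

def filter_response(query: str, reply: str) -> str:
    """Refuse to answer if the user's query contains a blocked keyword."""
    tokens = set(re.findall(r"[a-z']+", query.lower()))
    if tokens & BLOCKLIST:
        return "Sorry, I can't help with that topic."
    return reply

print(filter_response("what's the weather today", "Sunny."))
```

Real systems layer classifiers and policy models on top of simple lists like this, since keyword matching alone is easy to evade and prone to false positives.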

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI.

The investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals’ privacy and data protection rights. The final Commissioner’s decision regarding Snap’s ‘My AI’ chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files, and usage information. Customers are included by default and must opt out.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.
