
Copilot+ Recall is ‘Dumbest Cybersecurity Move in a Decade’: Researcher


A new Microsoft Windows feature dubbed Recall, planned for Copilot+ PCs, has been called a security and privacy nightmare by cybersecurity researchers and privacy advocates. Copilot Recall will be enabled by default and will capture frequent screenshots, or “snapshots,” of a user’s activity and store them in a local database tied to the user account. The potential exposure of personal and sensitive data through the new feature has alarmed those researchers and advocates and has even sparked a UK inquiry into the issue.

Copilot Recall Privacy and Security Claims Challenged

In a long Mastodon thread on the new feature, Windows security researcher Kevin Beaumont wrote, “I’m not being hyperbolic when I say this is the dumbest cybersecurity move in a decade. Good luck to my parents safely using their PC.”

In a blog post on Recall security and privacy, Microsoft said that processing and storage happen only on the local device and are encrypted, but even Microsoft’s own explanations raise concerns: “Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

Security and privacy advocates take issue with assertions that the data is stored securely on the local device. If someone has a user’s password, or if a court orders that data be turned over for legal or law enforcement purposes, the amount of data exposed could be far greater with Recall than without it. Domestic abuse situations could be worsened. And hackers, malware and infostealers will have access to vastly more data than they would otherwise.

Beaumont said the screenshots are stored in a SQLite database, “and you can access it as the user including programmatically. It 100% does not need physical access and can be stolen.” He posted a video he said showed two Microsoft engineers gaining access to the Recall database folder with apparent ease, “with SQLite database right there.”
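
Beaumont’s “programmatically” point is easy to illustrate: SQLite enforces no access control of its own, so any code running under the user’s account can simply open a per-user database file. Here is a minimal sketch in Python; the path and filename are placeholders, since Recall’s actual on-disk layout isn’t documented here:

```python
import sqlite3
from pathlib import Path

# Placeholder path; Recall's actual on-disk layout isn't documented here.
db_path = Path.home() / "AppData/Local/ExampleRecall/snapshots.db"

if db_path.exists():
    # No elevation, no special API: an ordinary user-context file open.
    conn = sqlite3.connect(db_path)
    # Enumerate whatever tables the database exposes.
    for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table';"
    ):
        print(table)
    conn.close()
```

Nothing in that snippet requires administrator rights, which is exactly why “it 100% does not need physical access” resonates with researchers.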

Does Recall Have Cloud Hooks?

Beaumont also questioned Microsoft’s assertion that all this is done locally. “So the code underpinning Copilot+ Recall includes a whole bunch of Azure AI backend code, which has ended up in the Windows OS,” he wrote on Mastodon. “It also has a ton of API hooks for user activity monitoring. It opens a lot of attack surface. ... They really went all in with this and it will have profound negative implications for the safety of people who use Microsoft Windows.”

Data May Not Be Completely Deleted

And sensitive data deleted by users will still be saved in Recall screenshots. “There's no feature to delete screenshots of things you delete while using your PC,” Beaumont said. “You would have to remember to go and purge screenshots that Recall makes every few seconds. If you or a friend use disappearing messages in WhatsApp, Signal etc, it is recorded regardless.”

One commenter said Copilot Recall seems to raise compliance issues too, in part by creating additional unnecessary data that could survive deletion requests. “[T]his comprehensively fails PCI and GDPR immediately and the SOC2 controls list ain't looking so good either,” the commenter said. Leslie Carhart, Director of Incident Response at Dragos, replied that “the outrage and disbelief are warranted.”

A second commenter noted, “GDPR has a very simple concept: Data Minimization. Quite simply, only store data that you actually have a legitimate, legal purpose for; and only for as long as necessary. Right there, this fails in spectacular fashion on both counts. It's going to store vast amounts of data for no specific purpose, potentially for far longer than any reasonable use of that data.”

It remains to be seen if Microsoft will make any modifications to Recall to quell concerns before it officially ships. If not, security and privacy experts may find themselves busier than ever.

How to tell if a VPN app added your Windows device to a botnet

31 May 2024 at 12:37

On May 29, 2024, the US Department of Justice (DOJ) announced it had dismantled what was likely the largest botnet the world has ever seen. This botnet, called “911 S5,” infected systems at over 19 million IP addresses across more than 190 countries. The operators, who stole billions of dollars over the course of a decade, made most of their money by committing pandemic and unemployment fraud and by selling access to child exploitation materials.

The botnet operator generated millions of dollars by offering cybercriminals access to these infected IP addresses. As part of this operation, a Chinese national, YunHe Wang, was arrested. Wang is reportedly the proprietor of the popular service.

Of the infected Windows devices, 613,841 IP addresses were located in the United States. The DOJ also called the botnet a residential proxy service. Residential proxy networks let their operators rent out residential IP addresses, which customers then use as relays for their internet communications, hiding their true location behind the residential proxy. Cybercriminals used this service to engage in cyberattacks, large-scale fraud, child exploitation, harassment, bomb threats, and export violations.
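
Mechanically, a proxy relay is nothing exotic: any HTTP client can route traffic through one. A minimal sketch of the client side, with a placeholder address (from the reserved TEST-NET-3 range) standing in for a rented residential IP:

```python
import requests  # third-party: pip install requests

# Placeholder proxy address; in a residential proxy service this would be
# an unwitting home PC's IP. Traffic flows client -> proxy -> destination,
# so the destination site logs the proxy's address, not the sender's.
proxies = {
    "http": "http://203.0.113.7:8080",
    "https": "http://203.0.113.7:8080",
}

response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```

The 911 S5 twist was only in how the relays were obtained: the exit points were victims’ home machines rather than rented servers.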

To set up this botnet, Wang and his associates provided users with free, illegitimate VPN applications that were created to connect to the 911 S5 service. Once users downloaded and installed these VPN applications, they unknowingly became part of the 911 S5 botnet, unaware of the proxy backdoor.

Sometimes the VPN applications were bundled with games and other software and installed without user consent.

For this reason, the FBI has published a public service announcement (PSA) to help users find out if they have been affected by this botnet.

Users can start by going over this list of malicious VPN applications associated with the 911 S5 botnet:

  • MaskVPN
  • DewVPN
  • PaladinVPN
  • ProxyGate
  • ShieldVPN
  • ShineVPN

If you have one of these VPN applications installed, you may find an uninstaller under the application's Start menu entry. If present, use that uninstall option.

If the application doesn’t present you with an uninstall option, follow the steps below to remove it (a scripted registry check is sketched after the steps):

  • Click the Start menu (Windows button) and type “Add or remove programs” to open the corresponding settings page.
  • Search for the name of the malicious VPN application.
  • Once you find the application in the list, click on the application name, and select the “Uninstall” option.
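
If you’d rather check from a script than click through the list, the entries behind “Add or remove programs” come from the registry’s Uninstall keys. A sketch (Windows-only, standard library; the names are the flagged apps from the FBI list above):

```python
import winreg

# Lower-cased fragments of the flagged VPN application names.
FLAGGED = {"maskvpn", "dewvpn", "paladinvpn", "proxygate", "shieldvpn", "shinevpn"}

# "Add or remove programs" is populated from these Uninstall keys.
UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

for path in UNINSTALL_PATHS:
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
    except FileNotFoundError:
        continue
    subkey_count = winreg.QueryInfoKey(key)[0]
    for i in range(subkey_count):
        with winreg.OpenKey(key, winreg.EnumKey(key, i)) as sub:
            try:
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
            except FileNotFoundError:
                continue  # entry has no display name
            if any(flag in name.lower() for flag in FLAGGED):
                print("Possible match:", name)
```

A match here only tells you an entry exists; use the uninstall steps above (or the FBI’s instructions) to actually remove it.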

Once you have uninstalled the application, you will want to make sure it’s no longer active. To do that, open Task Manager: press Ctrl+Alt+Delete and select “Task Manager,” or right-click the Start menu (Windows button) and select “Task Manager.”

In Task Manager, look under the “Processes” tab for the following processes:

  • MaskVPN (mask_svc.exe)
  • DewVPN (dew_svc.exe)
  • PaladinVPN (pldsvc.exe)
  • ProxyGate (proxygate.exe, cloud.exe)
  • ShieldVPN (shieldsvc.exe)
  • ShineVPN (shsvc.exe)
Example by FBI showing processes associated with ShieldVPN in Task Manager

If you find one, select the process associated with the identified malicious application and click “End task” to stop it from running.
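
The same check can be scripted rather than eyeballed in Task Manager. A sketch using the third-party psutil library; the process names are from the FBI list above, and note that cloud.exe is a generic name, so confirm the executable’s path before acting on a match:

```python
import psutil  # third-party: pip install psutil

# Process names the FBI associates with the 911 S5 VPN applications.
FLAGGED = {
    "mask_svc.exe", "dew_svc.exe", "pldsvc.exe",
    "proxygate.exe", "cloud.exe", "shieldsvc.exe", "shsvc.exe",
}

for proc in psutil.process_iter(["pid", "name", "exe"]):
    name = (proc.info["name"] or "").lower()
    if name in FLAGGED:
        # Print the path too: "cloud.exe" is generic enough to false-positive.
        print(f"Found {proc.info['name']} (PID {proc.info['pid']}): {proc.info['exe']}")
        # proc.terminate()  # scripted "End task"; left disabled on purpose
```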

Or, download Malwarebytes Premium (there is a free trial) and run a scan.

Whether you’re using the free or paid version of the app, you can manually run a scan to check for threats on your device. 

  1. Open the app.
  2. On the main dashboard, click the Scan button.
  3. A progress page appears while the scan runs.
  4. After the scan finishes, it displays the Threat scan summary.
    • If the scan detected no threats: Click Done.
    • If the scan detected threats on your device: Review the threats found on your computer. From here, you can manually quarantine threats by selecting a detection and clicking Quarantine.
  5. Click View Report or View Scan Report to see a history of prior scans. After viewing the threat report, close the scanner window.

If neither of these options, including the Malwarebytes scan, resolves the problem, the FBI has more detailed instructions. You can also contact the Malwarebytes Support team for assistance.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Cooler Master Hit By Data Breach Exposing Customer Information

By: BeauHD
30 May 2024 at 20:45
Computer hardware manufacturer Cooler Master has confirmed that it suffered a data breach on May 19 after a threat actor breached the company's website, stealing the Fanzone member information of 500,000 customers. BleepingComputer reports: [A] threat actor known as 'Ghostr' told us they hacked the company's Fanzone website on May 18 and downloaded its linked databases. Cooler Master's Fanzone site is used to register a product's warranty, request an RMA, or open support tickets, requiring customers to fill in personal data, such as names, email addresses, physical addresses, phone numbers, and birth dates. Ghostr said they were able to download 103 GB of data during the Fanzone breach, including the customer information of over 500,000 customers. The threat actor also shared data samples, allowing BleepingComputer to confirm with numerous customers listed in the breach that their data was accurate and that they recently requested support or an RMA from Cooler Master. Other data in the samples included product information, employee information, and information regarding emails with vendors. The threat actor claimed to have partial credit card information, but BleepingComputer could not find this data in the data samples. The threat actor now says they will sell the leaked data on hacking forums but has not disclosed the price. Cooler Master said in a statement to BleepingComputer: "We can confirm on May 19, Cooler Master experienced a data breach involving unauthorized access to customer data. We immediately alerted the authorities, who are actively investigating the breach. Additionally, we have engaged top security experts to address the breach and implement new measures to prevent future incidents. These experts have successfully secured our systems and enhanced our overall security protocols. We are in the process of notifying affected customers directly and advising them on next steps. We are committed to providing timely updates and support to our customers throughout this process."

Read more of this story at Slashdot.


The Ticketmaster “breach”—what you need to know

30 May 2024 at 06:26

Earlier this week, a cybercriminal group posted a database for sale online which, it says, contains customer and card details of 560 million Live Nation/Ticketmaster users.

The data was offered for sale on one forum under the name “Shiny Hunters”. ShinyHunters is the online handle for a group of notorious cybercriminals associated with numerous data breaches, including the recent AT&T breach.

Post on BreachForums by ShinyHunters

The post says:

“Live Nation / Ticketmaster

Data includes

560 million customer full details (name, address, email, phone)

Ticket sales, event information, order details

CC detail – customer last 4 of card, expiration date

Customer fraud details

Much more

Price is $500k USD. One time sale.”

The same data set was offered for sale in an almost identical post on another forum by someone using the handle SpidermanData. This could be the same person or a member of the ShinyHunters group.

According to news outlet ABC, the Australian Department of Home Affairs said it is aware of a cyber incident impacting Ticketmaster customers and is “working with Ticketmaster to understand the incident.”

Some researchers expressed their doubts about the validity of the data set:

🚨🚨Thoughts on the alleged Ticketmaster Data Breach 🚨🚨

TLDR: Alert not Alarmed

The Ticketmaster data breach claim has provided BreachForums with the quick attention they need to boost their user numbers and reputation.

The claim has possibly been over-stated to boost… pic.twitter.com/WJsFkBfQbw

— CyberKnow (@Cyberknow20) May 29, 2024

While others judged it to be legitimate, based on conversations with involved individuals and on studying samples of the data set:

Today we spoke with multiple individuals privy to and involved in the alleged TicketMaster breach.

Sometime in April an unidentified Threat Group was able to get access to TicketMaster AWS instances by pivoting from a Managed Service Provider. The TicketMaster breach was not…

— vx-underground (@vxunderground) May 30, 2024

Whether or not the data is real remains to be seen. However, there’s no doubt that scammers will use this opportunity to make a quick profit.

Ticketmaster users will need to be on their guard. Read our tips below for some helpful advice on what to do in the event of a data breach.

You can also check what personal information of yours has already been exposed online with our Digital Footprint portal. Just enter your email address (ideally the one you most frequently use) into our free Digital Footprint scan and we’ll give you a report.

All parties involved have refrained from any further comments. We’ll keep you posted.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Hackers Claim To Have Breached Ticketmaster, Stealing Personal Data of 560 Million Users

By: BeauHD
29 May 2024 at 19:20
The notorious hacker group ShinyHunters has claimed to have breached the security of Ticketmaster-Live Nation, compromising the personal data of more than half a billion users. "This massive 1.3 terabytes of data, is now being offered for sale on Breach Forums for a one-time sale for $500,000," reports Hackread. From the report: ShinyHunters has allegedly accessed a treasure trove of sensitive user information, including full names, addresses, email addresses, phone numbers, ticket sales and event details, order information, and partial payment card data. Specifically, the compromised payment data includes customer names, the last four digits of card numbers, expiration dates, and even customer fraud details. The data breach, if confirmed, could have severe implications for the affected users, leading to potential identity theft, financial fraud, and further cyber attacks. The hacker group's bold move to put this data on sale goes on to show the growing menace of cybercrime and the increasing sophistication of these cyber adversaries.

Read more of this story at Slashdot.

The Alaska Supreme Court Takes Aerial Surveillance’s Threat to Privacy Seriously, Other Courts Should Too

29 May 2024 at 18:16

In March, the Alaska Supreme Court held in State v. McKelvey that the Alaska Constitution required law enforcement to obtain a warrant before photographing a private backyard from an aircraft. In this case, the police took photographs of Mr. McKelvey’s property, including the constitutionally protected curtilage area, from a small aircraft using a zoom lens.

In arguing that Mr. McKelvey did not have a reasonable expectation of privacy, the government raised various factors which have been used to justify warrantless surveillance in other jurisdictions. These included the ubiquity of small aircraft flying overhead in Alaska; the commercial availability of the camera and lens; the availability of aerial footage of the land elsewhere; and the alleged unobtrusive nature of the surveillance.

In response, the Court divorced the ubiquity and availability of the technology from whether people would reasonably expect the government to use it to spy on them. The Court observed that the government's decision to spend resources taking its own photos demonstrates that whatever images were already available were insufficient for law enforcement needs. Moreover, the unlikelihood that the spying would be detected adds to, not detracts from, its pernicious nature, because “if the surveillance technique cannot be detected, then one can never fully protect against being surveilled.”

Throughout its analysis, the Alaska Supreme Court demonstrated a grounded understanding of modern technology—as well as its future—and its effect on privacy rights. At the outset, the Court pointed out that one might think that this warrantless aerial surveillance was not a significant threat to privacy rights because "aviation gas is expensive, officers are busy, and the likelihood of detecting criminal activity with indiscriminate surveillance flights is low." However, the Court added pointedly, “the rise of drones has the potential to change that equation." We made similar arguments and are glad to see that courts are taking the threat seriously. 

This is a significant victory for Alaskans and their privacy rights, and stands in contrast to a couple of U.S. Supreme Court cases from the 1980s, Ciraolo v. California and Florida v. Riley. In those cases, the justices found no violation of the federal constitution for aerial surveillance from low-flying manned aircraft. But there have been seismic changes in the capabilities of surveillance technology since those decisions, and courts should consider these developments rather than merely applying precedents uncritically.

With this decision, Alaska joins California, Hawaii, and Vermont in finding that warrantless aerial surveillance violates their state’s constitutional prohibition of unreasonable search and seizure. Other courts should follow suit to ensure that privacy rights do not fall victim to the advancement of technology.


Privacy Implications of Tracking Wireless Access Points

29 May 2024 at 07:01

Brian Krebs reports on research into geolocating routers:

Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geolocate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally—including non-Apple devices like Starlink systems—and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.

Really fascinating implications to this research.

Research paper: “Surveilling the Masses with Wi-Fi-Based Positioning Systems”:

Abstract: Wi-Fi-based Positioning Systems (WPSes) are used by modern mobile devices to learn their position using nearby Wi-Fi access points as landmarks. In this work, we show that Apple’s WPS can be abused to create a privacy threat on a global scale. We present an attack that allows an unprivileged attacker to amass a worldwide snapshot of Wi-Fi BSSID geolocations in only a matter of days. Our attack makes few assumptions, merely exploiting the fact that there are relatively few dense regions of allocated MAC address space. Applying this technique over the course of a year, we learned the precise locations of over 2 billion BSSIDs around the world.

The privacy implications of such massive datasets become more stark when taken longitudinally, allowing the attacker to track devices’ movements. While most Wi-Fi access points do not move for long periods of time, many devices—like compact travel routers—are specifically designed to be mobile.

We present several case studies that demonstrate the types of attacks on privacy that Apple’s WPS enables: We track devices moving in and out of war zones (specifically Ukraine and Gaza), the effects of natural disasters (specifically the fires in Maui), and the possibility of targeted individual tracking by proxy—all by remotely geolocating wireless access points.

We provide recommendations to WPS operators and Wi-Fi access point manufacturers to enhance the privacy of hundreds of millions of users worldwide. Finally, we detail our efforts at responsibly disclosing this privacy vulnerability, and outline some mitigations that Apple and Wi-Fi access point manufacturers have implemented both independently and as a result of our work.
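
The enumeration idea at the core of the paper is simple to sketch: because allocated MAC address space clusters in relatively few dense regions, random guesses inside a known prefix hit real access points at a useful rate. In the sketch below, query_wps is a deliberately empty stand-in (the paper targets Apple’s positioning API, whose request format isn’t reproduced here) and the prefix is an arbitrary example, not a specific vendor’s:

```python
import random

def query_wps(bssid: str):
    """Stand-in for a Wi-Fi positioning service lookup.

    A real attack would submit the BSSID to the service and parse any
    (latitude, longitude) returned; this placeholder always returns None.
    """
    return None

def random_bssid(prefix: str) -> str:
    """Append three random octets to a vendor prefix (OUI)."""
    tail = ":".join(f"{random.randint(0, 255):02X}" for _ in range(3))
    return f"{prefix}:{tail}"

PREFIX = "AA:BB:CC"  # arbitrary example OUI, not a specific vendor

# Guess addresses inside one dense region of MAC space; keep the hits.
hits = {}
for _ in range(1000):
    bssid = random_bssid(PREFIX)
    location = query_wps(bssid)
    if location is not None:
        hits[bssid] = location

print(f"{len(hits)} geolocated BSSIDs")
```

Scaled up over months, that guess-and-query loop is how the authors report accumulating over 2 billion geolocated BSSIDs.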

Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy

By: Tom Eston
27 May 2024 at 00:00

Episode 331 of the Shared Security Podcast discusses privacy and security concerns related to two major technological developments: the introduction of ‘Recall,’ the new Windows feature that is part of Microsoft’s Copilot+ and captures desktop screenshots for AI-powered search tools, and Slack’s policy of using user data to train machine learning features with users opted in by […]

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Shared Security Podcast.

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Security Boulevard.


How to Blur Your House on Google Maps Street View

24 May 2024 at 16:00

Google Maps Street View changed the game for digital exploration: Anywhere Google's cameras have been, you can check out for yourself, without ever needing to leave your house. Of course, if you're uncomfortable with your house appearing on Street View, you don't need to sit back and let the other explorers of the internet spy on your home. Whether you're famous, or you just value your privacy, Google lets you blur your house on Street View, although it isn't obvious how to do so.

A word of warning before proceeding: Requesting Google to blur your house on Street View is permanent. The company will not reverse this censorship once it goes through, so make sure you really want to blur your house before going through with the request. In addition, please only ask Google to blur your house: If you do it to someone else's house, they won't be able to undo the blur either.

Step one: Pull up your house on Google Maps. Type your address into the search bar, then click on the Street View image that appears at the bottom of the left-hand menu. Alternatively, you can click the person icon in the bottom-right corner of the window to activate Street View, then click on your part of the street to load up your home.

Either way, once you have your house pulled up on Street View, click the three dots next to the address, then choose Report a problem. Alternatively, you can click the Report a problem button in the bottom-right corner of the Street View window. This will pull up Google Maps' "Report Inappropriate Street View" form for your specific address. If for some reason the address or the image is not of your home, restart the process so you don't blur someone else's house.

Here, you can adjust exactly where the blur will be applied. Zoom in and out with the + and - buttons to control the size of the blur window, or drag around the map to adjust the location of the window.

Below this, you'll need to choose what element of the photo you want to blur. You can choose from a face, a car/license plate, or a "different object," but in this case, choose My home. You can skip over the "Report image quality" section. Next, enter your email address, as Google requires you to attach it to your request, then complete the reCAPTCHA to prove you're not a robot. We don't want ChatGPT blurring all our homes, after all.

Full disclosure: This is not my house. Credit: Jake Peterson

There's no telling exactly how long this process will take, or if Google will contact you at the email address you provided for more information. But assuming everything is working as intended, Google will eventually blur your home from Street View.

Almost all citizens of city of Eindhoven have their personal data exposed – Source: www.bitdefender.com


Source: www.bitdefender.com – Author: Graham Cluley A data breach involving the Dutch city of Eindhoven left the personal information related to almost all of its citizens exposed. As Eindhovens Dagblad reports, two files containing the personal data of 221,511 inhabitants of Eindhoven were accessible to unauthorised parties for a period of time last year. Everyone […]

The entry Almost all citizens of city of Eindhoven have their personal data exposed – Source: www.bitdefender.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Recall feature in Microsoft Copilot+ PCs raises privacy and security concerns – Source: securityaffairs.com


Source: securityaffairs.com – Author: Pierluigi Paganini Recall feature in Microsoft Copilot+ PCs raises privacy and security concerns UK data watchdog is investigating Microsoft regarding the new Recall feature in Copilot+ PCs that captures screenshots of the user’s laptop every few seconds. The UK data watchdog, the Information Commissioner’s Office (ICO), is investigating a new feature, […]

The entry Recall feature in Microsoft Copilot+ PCs raises privacy and security concerns – Source: securityaffairs.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Black Basta Ascension Attack Redux — can Patients Die of Ransomware?

24 May 2024 at 13:45

Inglorious Basta(rds): 16 days on, huge hospital system continues to be paralyzed by ransomware—and patient safety is at risk.

The post Black Basta Ascension Attack Redux — can Patients Die of Ransomware? appeared first on Security Boulevard.

The Rise and Risks of Shadow AI

24 May 2024 at 13:20

 

Shadow AI, the internal use of AI tools and services without the enterprise oversight teams expressly knowing about it (e.g., the IT, legal, cybersecurity, compliance, and privacy teams, just to name a few), is becoming a problem!

Workers are flocking to use third-party AI services (e.g., websites like ChatGPT), but there are also often savvy technologists importing models and building internal AI systems (it really is not that difficult) without telling the enterprise ops teams. Both situations are increasing, and many organizations are blind to the risks.

According to a recent Cyberhaven report:

  • AI is accelerating: corporate data input into AI tools surged by 485%
  • Increased data risks: sensitive data submission jumped 156%, led by customer support data
  • Threats are hidden: the majority of AI use on personal accounts lacks enterprise safeguards
  • Security vulnerabilities: increased risk of data breaches and exposure through AI tool use

The risks are real and the problem is growing. Now is the time to get ahead of it:

1. Establish policies for use and development/deployment
2. Define and communicate an AI ethics posture
3. Incorporate the cybersecurity, privacy, and compliance teams early into such programs
4. Drive awareness and compliance by including these AI topics in employee and vendor training

Overall, the goal is to build awareness and collaboration. Leveraging AI can bring tremendous benefits, but it should be done in a controlled way that aligns with enterprise oversight requirements.

"Do what is great, while it is small" - a little effort now can help avoid serious mishaps in the future!

The post The Rise and Risks of Shadow AI appeared first on Security Boulevard.


Personal AI Assistants and Privacy

23 May 2024 at 07:00

Microsoft is trying to create a personal digital assistant:

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

I wrote about this AI trust problem last year:

One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—­or at least like an animal. It will interact with the whole of your existence, just like another person would.

[…]

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.

We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

[…]

All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

We are going to need some sort of public AI to counterbalance all of these corporate AIs.

EDITED TO ADD (5/24): Lots of comments about Microsoft Recall and security:

This:

Because Recall is “default allow” (it relies on a list of things not to record) … it’s going to vacuum up huge volumes and heretofore unknown types of data, most of which are ephemeral today. The “we can’t avoid saving passwords if they’re not masked” warning Microsoft included is only the tip of that iceberg. There’s an ocean of data that the security ecosystem assumes is “out of reach” because it’s either never stored, or it’s encrypted in transit. All of that goes out the window if the endpoint is just going to…turn around and write it to disk. (And local encryption at rest won’t help much here if the data is queryable in the user’s own authentication context!)

This:

The fact that Microsoft’s new Recall thing won’t capture DRM content means the engineers do understand the risk of logging everything. They just chose to preference the interests of corporates and money over people, deliberately.

This:

Microsoft Recall is going to make post-breach impact analysis impossible. Right now IR processes can establish a timeline of data stewardship to identify what information may have been available to an attacker based on the level of access they obtained. It’s not trivial work, but IR folks can do it. Once a system with Recall is compromised, all data that has touched that system is potentially compromised too, and the ML indirection makes it near impossible to confidently identify a blast radius.

This:

You may be in a position where leaders in your company are hot to turn on Microsoft Copilot Recall. Your best counterargument isn’t threat actors stealing company data. It’s that opposing counsel will request the recall data and demand it not be disabled as part of e-discovery proceedings.

How AI will change your credit card behind the scenes

23 May 2024 at 06:09

Many companies are starting to implement Artificial Intelligence (AI) within their services. Whenever there are large amounts of data involved, AI offers a way to turn that pile of data into actionable insights.

And there’s a big chance that our data are somewhere in that pile, whether they can be traced back to us or not. In this blog we’ll look at the different ways in which credit card companies are planning to use AI.

Two of the major credit card companies, MasterCard and Visa, made announcements this month on how they will use AI in the near future.

Mastercard announced the introduction of generative AI for earlier detection of credit card fraud.

Johan Gerber, executive vice president of security and cyber innovation at Mastercard, said:

“Generative AI is going to allow to figure out where did you perhaps get your credentials compromised, how do we identify how it possibly happened, and how do we very quickly remedy that situation not only for you, but the other customers who don’t know they are compromised yet.”

Generative AI models learn the patterns and structure of their input training data and then generate new data with similar characteristics.

There’s an enormous amount of stolen credit and debit card details available on various marketplaces, some of which aren’t even on the dark web. These details come from many different data breaches, and they can go unnoticed for extended periods of time. Analyzing the data and spotting patterns in the abuse can help the credit card company identify and inform affected customers before the criminals actually use the card.
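
A classical, much simpler relative of that pattern-spotting is "common point of compromise" analysis (not the generative models Mastercard describes, but it shows the shape of the problem): given cards already reported as compromised, look for merchants that appear across most of their purchase histories. A sketch with made-up data:

```python
from collections import Counter

# Made-up purchase histories for cards already reported as compromised.
transactions = {
    "card_a": ["grocer_1", "pump_7", "cafe_3"],
    "card_b": ["pump_7", "florist_2"],
    "card_c": ["pump_7", "grocer_1"],
}

# Count, per merchant, how many of the compromised cards transacted there.
merchant_counts = Counter(
    merchant for history in transactions.values() for merchant in set(history)
)

n_cards = len(transactions)
# A merchant seen by nearly every compromised card is a candidate breach point.
suspects = [m for m, c in merchant_counts.items() if c / n_cards >= 0.9]
print(suspects)  # ['pump_7']
```

Cards that later transact at a suspect merchant but haven't yet been reported are the ones worth warning: the "customers who don't know they are compromised yet" in Gerber's framing.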

VISA, on the other hand, said it will use AI to tailor a better shopping experience. This, it says, will allow it to share more information with retailers about customers’ preferences, based on their shopping history.

VISA will require consumer consent before sharing this information. According to VISA CEO Ryan McInerney, consumers will have the option, through their bank app, to revoke access to their information.

And last but not least, American Express Global Business Travel revealed in February that it had started an AI initiative to improve efficiency. Among the early results, it reported cutting customer call times by about a minute.

All in all, credit card companies are gathering data to predict our behavior. They are not the only ones, for sure, but they do have access to something most people are not inclined to share freely: our finances.

Sure, less time spent being held up by that slightly less annoying chatbot, or a warning about a compromised credit card before the abuse happens, that sounds great. But an online store guessing what I am likely to purchase isn’t something I’m so keen on—about the same level of spooky as targeted ads.

Does increased efficiency outweigh the cost of handing over our data? What we’d like to see are improved security AND ease of use. Let us know how you feel in the comments below.


We don’t just talk about credit cards—we help monitor them

Cybersecurity risks should never spread beyond a headline. Keep an eye on your finances with identity and credit monitoring.

‘I would prefer to know’: Kerri has up to 600 siblings. They may finally learn the truth of their origins

22 May 2024 at 11:00

Queensland minister says donor-conceived people’s right to know genetic history outweighs a donor’s competing right to privacy

For decades Kerri Favarato has been blocked from accessing her earliest medical records.

They’re either stored at the back of the Queensland Fertility Group, where she was born in 1982, or in the garage of her treating doctor. But, legally, they are not hers. She was a product of the fertility clinic, not a patient of it.




Criminal record database of millions of Americans dumped online

22 May 2024 at 06:32

A cybercriminal going by the names of EquationCorp and USDoD has released an enormous database containing the criminal records of millions of Americans. The database is said to contain 70 million rows of data.

Post by USDoD on a breach forum

The leaked database is said to include full names, dates of birth, known aliases, addresses, arrest and conviction dates, sentences, and much more. Dates reportedly range from 2020 to 2024.

The exact source of the database is as yet unknown.

USDoD is a high-profile player in this field, closely associated with “Pompompurin”, the operator of the first iteration of data leak site BreachForums. USDoD is said to have plans to set up a successor to the second iteration of BreachForums which was recently seized by law enforcement. Releasing this database may be USDoD’s way to round up some interested users.

USDoD is also believed to have been involved in a breach at TransUnion, data from which was (partly) dumped in September 2023.

Needless to say, having this criminal record information leaked could have a tremendous impact, not only on the listed individuals but also on the justice system. We’ll keep you updated.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your digital footprint

If you want to find out how much of your own data has been exposed online, including your criminal record data, you can try our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and we’ll give you a free report, along with tips on what to do next.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes


The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products.

The warning comes on the back of the conclusion of an investigation by the U.K.’s Information Commissioner’s Office (ICO) into Snap, Inc.’s launch of the ‘My AI’ chatbot. The investigation focused on the company’s approach to assessing data protection risks. The ICO’s early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI was an experimental chatbot built into the Snapchat app, which has 414 million daily active users who, on an average day, share over 4.75 billion Snaps. The My AI bot uses OpenAI’s GPT technology to answer questions, provide recommendations and chat with users. It can respond to typed or spoken information and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers since February 27, 2023, ‘My AI’ was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6 over a “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK, including children aged 13 to 17. “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
Following the ICO’s investigation, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’ and demonstrated to the ICO that it had implemented suitable mitigations. “The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said.

Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” The social media platform has integrated safeguards and tools, such as blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI.

The ICO’s investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals’ data privacy and protection rights. The final Commissioner’s decision regarding Snap’s ‘My AI’ chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Police Found Ways to Use Facial Recognition Tech After Their Cities Banned It

20 May 2024 at 07:34
An anonymous reader shared this report from the Washington Post: As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents... Austin police officers received the results of at least 13 face searches from a neighboring police department since the city's 2020 ban — and appeared to get hits on some of them, according to documents obtained by The Post through public records requests and sources who shared them on the condition of anonymity. "That's him! Thank you very much," one Austin police officer wrote in response to an array of photos sent to him by an officer in Leander, Tex., who ran a facial recognition search, documents show. The man displayed in the pictures, John Curry Jr., was later charged with aggravated assault for allegedly charging toward someone with a knife, and is currently in jail awaiting trial. Curry's attorney declined to comment. "Police officers' efforts to skirt these bans have not been previously reported and highlight the challenge of reining in police use of facial recognition," the article concludes. It also points out that the technology "has played a role in the wrongful arrests of at least seven innocent Americans," according to the lawsuits they filed after charges against them were dismissed.

Read more of this story at Slashdot.

Slack Is Using Your Private Conversations to Train Its AI

17 May 2024 at 18:30

Slack users across the web—on Mastodon, on Threads, and on Hackernews—have responded with alarm to an obscure privacy page that outlines the ways in which their Slack conversations, including DMs, are used to train what the Salesforce-owned company calls "Machine Learning" (ML) and "Artificial Intelligence" (AI) systems. The only way to opt out of these features is for the admin of your company's Slack setup to send an email to Slack requesting it be turned off.

The policy, which applies to all Slack instances—not just those that have opted into the Slack AI add-on—states that Slack systems "analyze Customer Data (e.g. messages, content and files) submitted to Slack as well as Other Information (including usage information) as defined in our privacy policy and in your customer agreement."

So, basically, everything you type into Slack is used to train these systems. Slack states that data "will not leak across workspaces" and that there are "technical controls in place to prevent access." Even so, we all know that conversations with AI chatbots are not private, and it's not hard to imagine this going wrong somehow. Given the risk, the company must be offering something extremely compelling in return...right?

What are the benefits of letting Slack use your data to train AI?

The section outlining the potential benefits of Slack feeding all of your conversations into a large language model says this will allow the company to provide improved search results, better autocomplete suggestions, better channel recommendations, and (I wish I was kidding) improved emoji suggestions. If this all sounds useful to you, great! I personally don't think any of these things—except possibly better search—will do much to make Slack more useful for getting work done.

The emoji thing, particularly, is absurd. Slack is literally saying that they need to feed your conversations into an AI system so that they can provide better emoji recommendations. Consider this actual quote, which I promise you is from Slack's website and not The Onion:

Slack might suggest emoji reactions to messages using the content and sentiment of the message, the historic usage of the emoji and the frequency of use of the emoji in the team in various contexts. For instance, if 🎉 is a common reaction to celebratory messages in a particular channel, we will suggest that users react to new, similarly positive messages with 🎉.

I am overcome with awe just thinking about the implications of this incredible technology, and am no longer concerned about any privacy implications whatsoever. AI is truly the future of communication.
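
For what it’s worth, the frequency signal in that passage needs only a few lines of code. A toy sketch with a made-up reaction history (the real system, per Slack, also weighs message content and sentiment, which this ignores entirely):

```python
from collections import Counter

# Made-up history of reactions used in one channel.
reaction_history = ["🎉", "🎉", "👍", "🎉", "❤️", "👍", "🎉"]

def suggest_reactions(history: list[str], k: int = 3) -> list[str]:
    """Suggest the channel's k most frequently used reactions."""
    return [emoji for emoji, _ in Counter(history).most_common(k)]

print(suggest_reactions(reaction_history))  # ['🎉', '👍', '❤️']
```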

How to opt your company out of Slack's AI training

The bad news is that you, as an individual user, cannot opt out of Slack using your conversation history to train its large language model. That can only be done by a Slack admin, which in most cases is going to be someone in the IT department of your company. And there's no button in the settings for opting out—admins need to send an email asking for it to happen.

Here's Slack's exact language on the matter:

If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.

This smells like a dark pattern—making something annoying to do in order to discourage people from doing it. Hopefully the company makes the opt-out process easier in the wake of the current earful they're getting from customers.

A reminder that Slack DMs aren't private

I'll be honest, I'm a little amused at the prospect of my Slack data being used to improve search and emoji suggestions for my former employers. At previous jobs, I frequently sent DMs to work friends filled with negativity about my manager and the company leadership. I can just picture Slack recommending certain emojis every time a particular CEO is mentioned.

Funny as that idea is, though, the whole situation serves as a good reminder to employees everywhere: Your Slack DMs aren't actually private. Nothing you say on Slack—even in a direct message—is private. Slack uses that information to train tools like this, yes, but the company you work for can also access those private messages pretty easily. I highly recommend using something not controlled by your company if you need to shit talk said company. Might I suggest Signal?

Update: After this article was published, a spokesperson shared with Lifehacker that Slack doesn't develop its own LLMs or other generative models using customer data, and pointed to the company's recently updated privacy principles (which are quoted from extensively above) to better explain how they use customer data and generative AI. 

EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

17 May 2024 at 13:59

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.
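
To make concrete what a “reverse location search” over this data involves, here is a minimal sketch: given a place, a time window, and a radius, return every program participant whose monitor reported a ping nearby. The record layout is hypothetical; the distance calculation is the standard haversine formula:

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Ping:
    person_id: str
    lat: float
    lon: float
    timestamp: float  # Unix seconds; hypothetical record layout

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def reverse_search(pings, lat, lon, t_start, t_end, radius_m=200.0):
    """IDs of everyone who pinged within radius_m of (lat, lon) in the window."""
    return {
        p.person_id
        for p in pings
        if t_start <= p.timestamp <= t_end
        and haversine_m(p.lat, p.lon, lat, lon) <= radius_m
    }
```

Run over every participant's retained history at once, a single query like this sweeps in anyone who was merely near a location of interest, which is precisely the suspicionless breadth the plaintiffs are challenging.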

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files and usage information. Customers are opted in by default and must ask to be excluded.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.

Mental Health Apps are Likely Collecting and Sharing Your Data

15 May 2024 at 12:00

May is mental health awareness month! In pursuing help or advice for mental health struggles (beyond just this month, of course), users may download and use mental health apps. Mental health apps are convenient and may be cost effective for many people.

However, while these apps may provide mental health resources and benefits, they may be harvesting considerable amounts of information and sharing health-related data with third parties for advertising and tracking purposes.

Disclaimer: This post is not meant to serve as legal or medical advice. This post is for informational purposes only. If you are experiencing an emergency, please contact emergency services in your jurisdiction.

Understanding HIPAA

Many people have misconceptions about the Health Insurance Portability and Accountability Act (HIPAA) and disclosure/privacy.


According to the US Department of Health and Human Services (HHS), HIPAA is a "federal law that required the creation of national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge." There is a HIPAA Privacy Rule and a HIPAA Security Rule.

The Centers for Disease Control and Prevention (CDC) states the Privacy Rule standards "address the use and disclosure of individuals' health information by entities subject to the Privacy Rule." It's important to understand that the Privacy Rule only binds entities that are subject to it; everyone else falls outside its reach.

Covered entities include healthcare providers, health plans, healthcare clearinghouses, and business associates (such as billing specialists or data analysts). Many mental health apps aren't classified as any of these; and of the few that are subject to HIPAA, some have been documented as not actually compliant with HIPAA rules.


What does this mean? Many mental health apps are not considered covered entities and are therefore "exempt" (for lack of a better word) from HIPAA. As such, these apps appear to operate in a legal "gray area," but that doesn't mean their data practices are ethical or that they follow even basic information security principles for safeguarding data...

Even apps that do collect PHI protected by HIPAA may still share or use information about you that doesn't fall under HIPAA protections.

Mental health apps collect a wealth of personal information

Naturally, data collected by apps falling under the "mental health" umbrella varies widely (as do the apps that fall under this umbrella.)

However, most have users create accounts and fill out some version of an "intake" questionnaire prior to using/enrolling in services. These questionnaires vary by service, but may collect information such as:

  • name
  • address
  • email
  • phone number
  • employer information

At minimum, account creation generally requires a user email and a password, which is indeed routine.


It's important to note your email address can serve as a particularly strong unique identifier, especially if you use the same email address everywhere else in your digital life: a reused address makes it easy to track and connect your accounts and activities across the web.

Account creation may also request alternative contact information, such as a phone number, or supplemental personal information such as your legal name. These can and often do serve as additional data points and identifiers.

It's also important to note that on the backend (usually in a database), your account may be assigned identifiers as well. In some cases, your account may also be assigned external identifiers - especially if information is shared with third parties.
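
As a purely illustrative sketch (these field names are invented rather than taken from any specific app's schema), a single account record can accumulate several identifiers at once:

    import uuid

    # Hypothetical account record; field names are invented for illustration.
    account = {
        "internal_id": 48213,                # primary key in the app's own database
        "public_id": str(uuid.uuid4()),      # identifier the client sees in API responses
        "ad_network_id": str(uuid.uuid4()),  # external ID generated for a third-party partner
        "email": "user@example.com",         # doubles as a cross-service identifier
    }

Once an external identifier like ad_network_id is handed to a third party, that party can correlate your in-app activity with whatever else it observes tied to the same ID.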

Intake questionnaires can collect particularly sensitive information, such as (but not necessarily limited to):

  • past mental health experiences
  • age (potentially exact date of birth)
  • gender identity information
  • sexual orientation information
  • other demographic information
  • health insurance information (if relevant)
  • relationship status


[Screenshot: BetterHelp intake questionnaire asking whether the user currently takes medication, from the FTC complaint against BetterHelp]

These points of sensitive information are rather intimate, can easily be used to identify users, and could be disastrous if disclosed in a data breach or to third-party platforms.

These unique and rather intimate data points can be used to exploit users in highly targeted marketing and advertising campaigns, or perhaps even to facilitate scams and malware via the advertising tools that third parties receiving such information offer to advertisers.

Note: If providing health insurance information, many services require an image of the card. Images can contain EXIF data that could expose a user's location and device information if not scrubbed prior to upload.
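
If a service does require an image of your card, one mitigation is to re-encode just the pixel data before uploading, which discards EXIF metadata such as GPS coordinates, device model, and timestamps. A minimal sketch using the Pillow imaging library (file names are placeholders):

    from PIL import Image  # pip install Pillow

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Copy only the pixels into a fresh image, dropping EXIF and
        any other metadata embedded in the original file."""
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst_path)

    strip_metadata("insurance_card.jpg", "insurance_card_clean.jpg")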

Information collection extends past user disclosure


Far more often than not, the information collected by mental health apps extends past what a user discloses in processes such as account creation or completing intake forms; these apps often harvest device information as well, frequently sending it off the device to their own servers.

For example, here is a screenshot of the BetterHelp app's listing on the Apple App Store in MAY 2024:


[Screenshot: BetterHelp's app privacy label in the Apple App Store]

The screenshot indicates BetterHelp uses your location and app usage data to "track you across apps and websites owned by other companies." We can infer from this statement that BetterHelp shares your location information and how you use the app with third parties, likely for targeted advertising and tracking purposes.

The screenshot also indicates your contact information, location information, usage data, and other identifiers are linked to your identity.

Note: Apple Privacy Labels in the App Store are self-reported by the developers of the app.

This is all reinforced in their updated privacy policy (25 APR 2024), where BetterHelp indicates they use external and internal identifiers, collect app and platform errors, and collect usage data of the app and platform:


[Screenshot: excerpt from BetterHelp's updated privacy policy]

In February 2020, an investigation revealed BetterHelp also harvested the metadata of messages exchanged between clients and therapists, sharing it with platforms like Facebook for advertising purposes. This was despite BetterHelp "encrypting communications between client and therapist" - they may have encrypted the actual message contents, but it appears information such as when a message was sent, the sender and recipient, and location information was available to the servers... and actively used/shared.

While this may not seem like a big deal at first glance - primarily because BetterHelp is not directly accessing/reading message contents - users should be aware that message metadata can give away a lot of information.
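
For illustration only, here is the kind of record a server could retain about a single encrypted message; the field names are hypothetical, not BetterHelp's actual schema:

    # Hypothetical metadata for one end-to-end "private" message. The
    # ciphertext itself is unreadable, yet every field below is revealing.
    message_metadata = {
        "sender_id": "client-8841",
        "recipient_id": "therapist-207",
        "sent_at": "2024-05-10T02:13:45Z",   # a 2 a.m. message to a therapist says a lot
        "approx_location": "37.77,-122.42",  # even coarse location narrows you down
        "ciphertext_bytes": 1843,            # message length survives encryption
    }

Who talks to whom, how often, at what hours, and from roughly where is frequently enough to profile a user without decrypting a single message.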

Cerebral, a mental health app that does fall under the HIPAA rules, also collects device-related data and location data, associating them with your identity:

[Screenshot: Cerebral's app privacy label in the Apple App Store]

According to this screenshot, Cerebral shares/uses app usage data with third parties, likely for marketing and advertising purposes. Specifically, they...

The post Mental Health Apps are Likely Collecting and Sharing Your Data appeared first on Security Boulevard.

Connected cars’ illegal data collection and use now on FTC’s “radar”

15 May 2024 at 12:06
[Image: cars in traffic with computer-generated bounding boxes over each one, representing data collection (credit: Getty Images)]

The Federal Trade Commission's Office of Technology has issued a warning to automakers that sell connected cars. Companies that offer such products "do not have the free license to monetize people’s information beyond purposes needed to provide their requested product or service," it wrote in a blog post on Tuesday. Just because executives and investors want recurring revenue streams, that does not "outweigh the need for meaningful privacy safeguards," the FTC wrote.

Based on your feedback, connected cars might be one of the least-popular modern inventions among the Ars readership. And who can blame them? Last January, a security researcher revealed that a vehicle identification number was sufficient to access remote services for multiple different makes, and yet more had APIs that were easily hackable.

Later, in 2023, the Mozilla Foundation published an extensive report examining the various automakers' policies regarding the use of data from connected cars; the report concluded that "cars are the worst product category we have ever reviewed for privacy."

Tornado Cash Co-Founder Gets Over 5 Years for Laundering $1.2Bn

Tornado Cash Co-Founder, Tornado Cash

A Dutch court on Tuesday found one of the co-founders of the now-sanctioned Tornado Cash cryptocurrency mixer service guilty of laundering $1.2 billion in illicit cybercriminal proceeds. As a result, he was handed a sentence of 5 years and 4 months in prison. Alexey Pertsev, a 31-year-old Russian national and the developer of Tornado Cash, had awaited trial in the Netherlands on money laundering charges since his arrest in Amsterdam in August 2022, just days after the U.S. Treasury Department sanctioned the service for helping malicious actors like the Lazarus Group launder their illicit proceeds from cybercriminal activities. “The defendant declared that it was never his intention to break the law or to facilitate criminal activities,” according to a machine-translated summary of the judgement. Instead, Pertsev intended to offer a legitimate solution with Tornado Cash to a growing crypto community that craved privacy. He argued that “it is up to the users not to abuse Tornado Cash.” Pertsev also said that, given the technical specifications of the cryptocurrency mixer service, it was impossible for him to prevent the abuse. However, the District Court of East Brabant disagreed, asserting that responsibility for Tornado Cash’s operations lay solely with its founders and that the service lacked adequate mechanisms to prevent abuse. “Tornado Cash functions in the way the defendant and his cofounders developed Tornado Cash. So, the operation is completely their responsibility,” the Court said. “If the defendant had wanted to have the possibility to take action against abuse, then he should have built it in. But he did not.”
“Tornado Cash does not pose any barrier for people with criminal assets who want to launder them. That is why the court regards the defendant guilty of the money laundering activities as charged.”
Tornado Cash functioned as a decentralized cryptocurrency mixer, also known as a tumbler, allowing users to obscure the blockchain transaction trail by mixing illegally and legitimately obtained funds, which made it an appealing option for adversaries seeking to cover their illicit money trails. Tornado Cash laundered $1.2 billion worth of cryptocurrency stolen through at least 36 hacks, including the theft of $625 million in the Axie Infinity hack of March 2022 by North Korea’s Lazarus Group hackers. The Court used certain undisclosed parameters to select these hacks, which is why only 36 were taken into consideration; without those parameters, more than $2.2 billion worth of illicit proceeds in Ether were likely laundered. The Court also did not rule out the possibility of Tornado Cash laundering cryptocurrency derived from other crimes. The Court further described Tornado Cash as combining “maximum anonymity and optimal concealment techniques” without incorporating provisions to “make identification, control or investigation possible.” It failed to implement Know Your Customer (KYC) or anti-money laundering (AML) programs as mandated by U.S. federal law and was not registered with the U.S. Financial Crimes Enforcement Network (FinCEN) as a money-transmitting entity. "Tornado Cash is not a legitimate tool that has unintentionally been abused by criminals," it concluded. "The defendant and his co-perpetrators developed the tool in such a manner that it automatically performs the concealment acts that are needed for money laundering." In addition to the prison term, Pertsev was ordered to forfeit cryptocurrency assets valued at €1.9 million (approximately $2.05 million) and a previously seized Porsche.
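
For readers unfamiliar with pool-style mixers, the following toy commitment/nullifier sketch shows, in general terms, how the link between a deposit and a withdrawal gets severed. It is a conceptual illustration only: the real Tornado Cash contracts verify a zk-SNARK proof so a withdrawal never reveals which commitment it spends, whereas this simplified version recomputes the commitment directly.

    import hashlib, secrets

    def h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"|".join(parts)).digest()

    pool_commitments, spent_nullifiers = set(), set()

    def deposit():
        """Depositor keeps (secret, nullifier); the pool records only a commitment."""
        secret, nullifier = secrets.token_bytes(32), secrets.token_bytes(32)
        pool_commitments.add(h(secret, nullifier))
        return secret, nullifier

    def withdraw(secret: bytes, nullifier: bytes) -> bool:
        """Spend a note. In Tornado Cash this check is a zero-knowledge proof;
        the revealed nullifier exists only to block double-spending."""
        if h(secret, nullifier) not in pool_commitments or nullifier in spent_nullifiers:
            return False
        spent_nullifiers.add(nullifier)
        return True

    note = deposit()            # funds enter the pool from address A
    assert withdraw(*note)      # and exit later to an unrelated address B
    assert not withdraw(*note)  # the nullifier blocks a second withdrawal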

Other Tornado Cash Co-Founders Face Trials Too

A year after Pertsev’s arrest, the U.S. Department of Justice unsealed an indictment charging the two other co-founders, Roman Storm and Roman Semenov, with conspiracy to commit money laundering, conspiracy to operate an unlicensed money-transmitting business, and conspiracy to violate the International Emergency Economic Powers Act. Storm goes to trial in the Southern District of New York in September, while Semenov remains at large. The case has drawn debate between two camps: privacy advocates argue against the criminalization of anonymity tools like Tornado Cash, which give users a way to avoid financial surveillance, while governments have taken a firm stance against unregulated offerings susceptible to exploitation by bad actors for illicit purposes. Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Vermont Legislature Passes One of the Strongest Data Privacy Measures in the Country

14 May 2024 at 22:05

Vermont legislature passed a bill that prohibits the sale of sensitive data, such as social security and drivers’ license numbers, financial or health information.

The post Vermont Legislature Passes One of the Strongest Data Privacy Measures in the Country appeared first on SecurityWeek.

Understanding CUI: What It Is and Guidelines for Its Management

13 May 2024 at 15:44

It sounds official — like it might be the subject of the next action-packed, government espionage, Jason Bourne-style thriller. Or maybe put it before the name of a racy city and have your next hit crime series. A history of mysterious aliases like “official use only,” “law enforcement sensitive,” and “sensitive but unclassified” only adds...

The post Understanding CUI: What It Is and Guidelines for Its Management appeared first on Hyperproof.

The post Understanding CUI: What It Is and Guidelines for Its Management appeared first on Security Boulevard.

FBI/CISA Warning: ‘Black Basta’ Ransomware Gang vs. Ascension Health

13 May 2024 at 13:08

It will be! Russian ransomware rascals riled a Roman Catholic healthcare organization.

The post FBI/CISA Warning: ‘Black Basta’ Ransomware Gang vs. Ascension Health appeared first on Security Boulevard.

Why car location tracking needs an overhaul

13 May 2024 at 06:48

Across America, survivors of domestic abuse and stalking are facing a unique location tracking crisis born out of policy failure, unclear corporate responsibility, and potentially risky behaviors around digital sharing that are now common in relationships.

No, we’re not talking about stalkerware. Or hidden Apple AirTags. We’re talking about cars.

Modern cars are the latest consumer “device” to undergo an internet-crazed overhaul, as manufacturers increasingly stuff their automobiles with the types of features you’d expect from a smartphone, not a mode of transportation.

There are cars with WiFi, cars with wireless charging, cars with cameras that not only help while you reverse out of a driveway, but which can detect whether you’re drowsy while on a long haul. Many cars now also come with connected apps that allow you to, through your smartphone, remotely start your vehicle, schedule maintenance, and check your tire pressure.

But one feature in particular, which has legitimate uses in responding to stolen and lost vehicles, is being abused: Location tracking.

It’s time for car companies to do something about it.

In December, The New York Times revealed the story of a married woman whose husband was abusing the location tracking capabilities of her Mercedes-Benz sedan to harass her. The woman tried every avenue she could to distance herself from her husband. After her husband became physically violent in an argument, she filed a domestic abuse report. Once she fled their home, she got a restraining order. She ignored his calls and texts.

But still her husband could follow her whereabouts by tracking her car—a level of access that Mercedes representatives reportedly could not turn off, as he was considered the rightful owner of the vehicle (according to The New York Times, the husband’s higher credit score convinced the married couple to have the car purchased in his name alone).

As reporter Kashmir Hill wrote of the impasse:

“Even though she was making the payments, had a restraining order against her husband and had been granted sole use of the car during divorce proceedings, Mercedes representatives told her that her husband was the customer so he would be able to keep his access. There was no button she could press to take away the app’s connection to the vehicle.”

This was far from an isolated incident.

In 2023, Reuters reported that a San Francisco woman sued her husband in 2020 for allegations of “assault and sexual battery.” But some months later, the woman’s allegations of domestic abuse grew into allegations of negligence—this time, against the carmaker Tesla.

Tesla, the woman claimed in legal filings, failed to turn off her husband’s access to the location tracking capabilities in their shared Model X SUV, despite the fact that she had obtained a restraining order against her husband, and that she was a named co-owner of the vehicle.

When The New York Times retrieved filings from the San Francisco lawsuit above, attorneys for Tesla argued that the automaker could not realistically play a role in this matter:

“Virtually every major automobile manufacturer offers a mobile app with similar functions for their customers,” the lawyers wrote. “It is illogical and impractical to expect Tesla to monitor every vehicle owner’s mobile app for misuse.”

Tesla was eventually removed from the lawsuit.

In the Reuters story, reporters also spoke with another woman who made similar allegations: her ex-husband had tracked her location using the Tesla app associated with her vehicle. Because this woman was a “primary” account owner, she was able to remove the car’s access to the internet, Reuters reported.

A better path

Location tracking—and the abuse that can come with it—is a much-discussed topic for Malwarebytes Labs. But the type of location tracking abuse that is happening with shared cars is different because of the value that cars hold in situations of domestic abuse.

A car is an opportunity to physically leave an abusive partner. A car is a chance to start anew in a different, undisclosed location. In harrowing moments, cars have also served as temporary shelter for those without housing.

So when a survivor’s car is tracked by their abuser, it isn’t just a matter of their location and privacy being invaded, it is a matter of a refuge being robbed.

In speaking with the news outlet CalMatters, Yenni Rivera, who works on domestic violence cases, explained the stressful circumstances of exactly this dynamic.

“I hear the story over and over from survivors about being located by their vehicle and having it taken,” Rivera told CalMatters. “It just puts you in a worst case situation because it really triggers you thinking, ‘Should I go back and give in?’ and many do. And that’s why many end up being murdered in their own home. The law should make it easier to leave safely and protected.”

Though the state of California is considering legislative solutions to this problem, national lawmaking is slow.

Instead, we believe that the companies that have the power to do something should act on that power. Much like how Malwarebytes and other cybersecurity vendors banded together to launch the Coalition Against Stalkerware, automakers should work together to help users.

Fortunately, an option may already exist.

When the Alliance for Automotive Innovation warned that consumer data collection requests could be weaponized by abusers who want to comb through the car location data of their partners and exes, the automaker General Motors already had a protection built in.

According to Reuters, the roadside assistance service OnStar, which is owned by General Motors, allows any car driver—be they a vehicle’s owner or not—to hide location data from other people who use the same vehicle. Rivian, a new electric carmaker, is reportedly working on a similar feature, senior vice president of software development Wassym Bensaid told Reuters.

Though Rivian told Reuters it had not heard of its technology being leveraged in a situation of domestic abuse, Bensaid said he believed that “users should have a right to control where that information goes.”

We agree.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools

By: Tom Eston
13 May 2024 at 00:00

In this first-ever in-person recording of Shared Security, Tom and Kevin, along with special guest Matt Johansen from Reddit, discuss their experience at the RSA conference in San Francisco, including their walk-through of ‘enhanced security’ and the humorous misunderstanding that ensued. The conversation moves to the ubiquity of AI and machine learning buzzwords at the […]

The post Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools appeared first on Shared Security Podcast.

The post Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools appeared first on Security Boulevard.

Dell notifies customers about data breach

10 May 2024 at 10:04

Dell is warning its customers about a data breach after a cybercriminal offered a 49 million-record database of information about Dell customers on a cybercrime forum.

A cybercriminal called Menelik posted the following message on the “Breach Forums” site:

“The data includes 49 million customer and other information of systems purchased from Dell between 2017-2024.

It is up to date information registered at Dell servers.

Feel free to contact me to discuss use cases and opportunities.

I am the only person who has the data.”

[Screenshot: the Breach Forums post by Menelik]

According to Menelik the data includes:

  • The full name of the buyer or company name
  • Address including postal code and country
  • Unique seven-digit service tag of the system
  • Shipping date of the system
  • Warranty plan
  • Serial number
  • Dell customer number
  • Dell order number

Most of the affected systems were sold in the US, China, India, Australia, and Canada.

Users on Reddit reported getting an email from Dell which was apparently sent to customers whose information was accessed during this incident:

“At this time, our investigation indicates limited types of customer information was accessed, including:

  • Name
  • Physical address
  • Dell hardware and order information, including service tag, item description, date of order and related warranty information.

The information involved does not include financial or payment information, email address, telephone number or any highly sensitive customer information.”

Dell might be trying to play down the seriousness of the situation by claiming that there is not a significant risk to its customers given the type of information involved. Still, it is at least reassuring that no email addresses were included: email addresses are unique identifiers that can allow data brokers to merge and enrich their databases.

So, this is another big data breach that leaves us with more questions than answers. We have to be careful that we don’t shrug these data breaches away with comments like “they already know everything there is to know.”

This kind of information is exactly what scammers need in order to impersonate Dell support.

Protecting yourself from a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can't be (see the sketch after this list for why).
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify any contacts using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
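
Why is a FIDO2 device phishing-resistant? Because its response is bound to the web origin the browser is actually talking to, so a look-alike domain produces an answer the real site rejects. Below is a deliberately simplified sketch of that origin binding; real WebAuthn uses per-site asymmetric key pairs rather than the shared-secret HMAC stand-in used here to keep the example short.

    import hashlib, hmac, secrets

    device_key = secrets.token_bytes(32)  # stays inside the hardware key

    def authenticator_sign(challenge: bytes, origin: str) -> bytes:
        """Sign over the origin the browser reports, not what the page claims."""
        return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

    def server_verify(challenge: bytes, signature: bytes) -> bool:
        expected = hmac.new(device_key, challenge + b"https://real-bank.example",
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    challenge = secrets.token_bytes(16)
    # Legitimate login: the browser really is on the bank's origin.
    assert server_verify(challenge, authenticator_sign(challenge, "https://real-bank.example"))
    # Phishing: the user is fooled, but the signed origin doesn't match, so login fails.
    assert not server_verify(challenge, authenticator_sign(challenge, "https://rea1-bank.example"))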

Check your digital footprint

If you want to find out how much of your data has been exposed online, you can try our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and we’ll send you a free report.
