
Slack Is Using Your Private Conversations to Train Its AI

17 May 2024 at 18:30

Slack users across the web—on Mastodon, on Threads, and on Hacker News—have responded with alarm to an obscure privacy page that outlines the ways in which their Slack conversations, including DMs, are used to train what the Salesforce-owned company calls "Machine Learning" (ML) and "Artificial Intelligence" (AI) systems. The only way to opt out of this training is for the admin of your company's Slack setup to send an email to Slack requesting it be turned off.

The policy, which applies to all Slack instances—not just those that have opted into the Slack AI add-on—states that Slack systems "analyze Customer Data (e.g. messages, content and files) submitted to Slack as well as Other Information (including usage information) as defined in our privacy policy and in your customer agreement."

So, basically, everything you type into Slack is used to train these systems. Slack states that data "will not leak across workspaces" and that there are "technical controls in place to prevent access." Even so, we all know that conversations with AI chatbots are not private, and it's not hard to imagine this going wrong somehow. Given the risk, the company must be offering something extremely compelling in return...right?

What are the benefits of letting Slack use your data to train AI?

The section outlining the potential benefits of Slack feeding all of your conversations into a large language model says this will allow the company to provide improved search results, better autocomplete suggestions, better channel recommendations, and (I wish I was kidding) improved emoji suggestions. If this all sounds useful to you, great! I personally don't think any of these things—except possibly better search—will do much to make Slack more useful for getting work done.

The emoji thing, particularly, is absurd. Slack is literally saying that they need to feed your conversations into an AI system so that they can provide better emoji recommendations. Consider this actual quote, which I promise you is from Slack's website and not The Onion:

Slack might suggest emoji reactions to messages using the content and sentiment of the message, the historic usage of the emoji and the frequency of use of the emoji in the team in various contexts. For instance, if 🎉 is a common reaction to celebratory messages in a particular channel, we will suggest that users react to new, similarly positive messages with 🎉.

I am overcome with awe just thinking about the implications of this incredible technology, and am no longer concerned about any privacy implications whatsoever. AI is truly the future of communication.

How to opt your company out of Slack's AI training

The bad news is that you, as an individual user, cannot opt out of Slack using your conversation history to train its large language model. That can only be done by a Slack admin, which in most cases is going to be someone in the IT department of your company. And there's no button in the settings for opting out—admins need to send an email asking for it to happen.

Here's Slack's exact language on the matter:

If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.

This smells like a dark pattern—making something annoying to do in order to discourage people from doing it. Hopefully the company makes the opt-out process easier in the wake of the current earful they're getting from customers.

A reminder that Slack DMs aren't private

I'll be honest, I'm a little amused at the prospect of my Slack data being used to improve search and emoji suggestions for my former employers. At previous jobs, I frequently sent DMs to work friends filled with negativity about my manager and the company leadership. I can just picture Slack recommending certain emojis every time a particular CEO is mentioned.

Funny as that idea is, though, the whole situation serves as a good reminder to employees everywhere: Your Slack DMs aren't actually private. Nothing you say on Slack—even in a direct message—is private. Slack uses that information to train tools like this, yes, but the company you work for can also access those private messages pretty easily. I highly recommend using something not controlled by your company if you need to shit talk said company. Might I suggest Signal?


User Outcry As Slack Scrapes Customer Data For AI Model Training

By: msmash
17 May 2024 at 14:02
New submitter txyoji shares a report: Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models. By default, and without requiring users to opt in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software. The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not leak across workspaces but, despite these assurances, corporate Slack admins are scrambling to opt out of the data scraping. This line in Slack's communication sparked a social media controversy with the realization that content in direct messages and other sensitive content posted to Slack was being used to develop AI/ML models and that opting out would require sending e-mail requests: "If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line 'Slack global model opt-out request'. We will process your request and respond once the opt-out has been completed."

Read more of this story at Slashdot.

EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

17 May 2024 at 13:59

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files and usage information. Customers are included by default and must email Slack to opt out.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.


Mental Health Apps are Likely Collecting and Sharing Your Data

15 May 2024 at 12:00

May is Mental Health Awareness Month! When pursuing help or advice for mental health struggles (beyond just this month, of course), users may download and use mental health apps, which are convenient and may be cost-effective for many people.

However, while these apps may provide mental health resources and benefits, they may be harvesting considerable amounts of information and sharing health-related data with third parties for advertising and tracking purposes.

Disclaimer: This post is not meant to serve as legal or medical advice. This post is for informational purposes only. If you are experiencing an emergency, please contact emergency services in your jurisdiction.

Understanding HIPAA

Many people have misconceptions about the Health Insurance Portability and Accountability Act (HIPAA) and disclosure/privacy.



According to the US Department of Health and Human Services (HHS), HIPAA is a "federal law that required the creation of national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge." There is a HIPAA Privacy Rule and a HIPAA Security Rule.

The Centers for Disease Control and Prevention (CDC) states the Privacy Rule standards "address the use and disclosure of individuals' health information by entities subject to the Privacy Rule." It's important to understand that the Privacy Rule covers entities subject to it.

Covered entities include healthcare providers, health plans, healthcare clearinghouses, and business associates (such as billing specialists or data analysts). Many mental health apps aren't classified as any of these; and of the few that are subject to HIPAA, some have been documented as not actually compliant with HIPAA rules.



What does this mean? Many mental health apps are not considered covered entities and are therefore "exempt" (for lack of a better word) from HIPAA. As such, these apps appear to operate in a legal "gray area," but that doesn't mean their data practices are ethical or even follow proper basic information security principles for safeguarding data...

Even apps that collect PHI protected by HIPAA may still share or use information about you that doesn't fall under HIPAA protections.

Mental health apps collect a wealth of personal information

Naturally, data collected by apps falling under the "mental health" umbrella varies widely (as do the apps themselves).

However, most have users create accounts and fill out some version of an "intake" questionnaire prior to using/enrolling in services. These questionnaires vary by service, but may collect information such as:

  • name
  • address
  • email
  • phone number
  • employer information

At minimum, account creation generally requires an email address and a password, which is indeed routine.



It's important to note your email address can serve as a unique identifier - especially if you use the same email address everywhere else in your digital life. Reusing one address makes it easier to track and connect your accounts and activities across the web.

Account creation may also request alternative contact information, such as a phone number, or supplemental personal information such as your legal name. These can and often do serve as additional data points and identifiers.

It's also important to note that on the backend (usually in a database), your account may be assigned identifiers as well. In some cases, your account may also be assigned external identifiers - especially if information is shared with third parties.

Intake questionnaires can collect particularly sensitive information, such as (but not necessarily limited to):

  • past mental health experiences
  • age (potentially exact date of birth)
  • gender identity information
  • sexual orientation information
  • other demographic information
  • health insurance information (if relevant)
  • relationship status


Question from a BetterHelp intake questionnaire - asking whether the user currently takes medication - found in the FTC complaint against BetterHelp

These points of sensitive information are rather intimate and can easily be used to identify users - and could be disastrous if disclosed in a data breach or to third-party platforms.

These unique and rather intimate data points can be used to exploit users in highly targeted marketing and advertising campaigns - or even to facilitate scams and malware through the advertising tools that third parties receiving this information offer to advertisers.

Note: If providing health insurance information, many services require an image of the card. Images can contain EXIF data that could expose a user's location and device information if not scrubbed prior to upload.
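
If you want to scrub this metadata yourself before uploading, a few lines of scripting will do it. Here is a minimal sketch using the Pillow imaging library (the file names are placeholders); rebuilding the image from its raw pixel data guarantees no EXIF block survives the copy.

```python
# Minimal EXIF-scrubbing sketch (assumes Pillow: pip install Pillow).
# File names are placeholders. Copying only the pixel data means the
# new file carries no EXIF block (GPS coordinates, device model, etc.).
# Works for typical RGB photos such as JPEGs.
from PIL import Image

def strip_exif(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # same size/mode, no metadata
        clean.putdata(list(img.getdata()))     # pixel values only
        clean.save(dst_path)

strip_exif("insurance_card.jpg", "insurance_card_clean.jpg")
```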

Information collection extends past user disclosure



Far more often than not, information collected by mental health apps extends past information a user may disclose in processes such as account creation or completing intake forms - these apps often harvest device information, frequently sending it off the device and to their own servers.

For example, here is a screenshot of the BetterHelp app's listing on the Apple App Store in MAY 2024:


[Screenshot: BetterHelp's app privacy labels in the Apple App Store]

The screenshot indicates BetterHelp uses your location and app usage data to "track you across apps and websites owned by other companies." We can infer from this statement that BetterHelp shares your location information and how you use the app with third parties, likely for targeted advertising and tracking purposes.

The screenshot also indicates your contact information, location information, usage data, and other identifiers are linked to your identity.

Note: Apple Privacy Labels in the App Store are self-reported by the developers of the app.

This is all reinforced in their updated privacy policy (25 APR 2024), where BetterHelp indicates they use external and internal identifiers, collect app and platform errors, and collect usage data of the app and platform:


[Image: excerpt from BetterHelp's updated privacy policy]

In February 2020, an investigation revealed BetterHelp also harvested the metadata of messages exchanged between clients and therapists, sharing it with platforms like Facebook for advertising purposes. This was despite BetterHelp "encrypting communications between client and therapist" - they may have encrypted the actual message contents, but it appears information such as when a message was sent, who sent and received it, and location information was available to the servers... and actively used/shared.

While this may not seem like a big deal at first glance - primarily because BetterHelp is not directly accessing/reading message contents - users should be aware that message metadata can give away a lot of information.

Cerebral, a mental health app that does fall under the HIPAA rules, also collects device-related data and location data, associating them with your identity:

[Screenshot: Cerebral's app privacy labels in the Apple App Store]

According to this screenshot, Cerebral shares/uses app usage data with third parties, likely for marketing and advertising purposes. Specifically, they...

The post Mental Health Apps are Likely Collecting and Sharing Your Data appeared first on Security Boulevard.

Connected cars’ illegal data collection and use now on FTC’s “radar”

15 May 2024 at 12:06

The Federal Trade Commission's Office of Technology has issued a warning to automakers that sell connected cars. Companies that offer such products "do not have the free license to monetize people’s information beyond purposes needed to provide their requested product or service," it wrote in a blog post on Tuesday. Just because executives and investors want recurring revenue streams, that does not "outweigh the need for meaningful privacy safeguards," the FTC wrote.

Based on your feedback, connected cars might be one of the least-popular modern inventions among the Ars readership. And who can blame them? Last January, a security researcher revealed that a vehicle identification number was sufficient to access remote services for multiple different makes, and yet more had APIs that were easily hackable.

Later, in 2023, the Mozilla Foundation published an extensive report examining the various automakers' policies regarding the use of data from connected cars; the report concluded that "cars are the worst product category we have ever reviewed for privacy."


Tornado Cash Co-Founder Gets Over 5 Years for Laundering $1.2Bn


A Dutch court ruling on Tuesday found one of the co-founders of the now-sanctioned Tornado Cash cryptocurrency mixer service guilty of laundering $1.2 billion in illicit cybercriminal proceeds. As a result, he was handed a sentence of 5 years and 4 months in prison.

Alexey Pertsev, a 31-year-old Russian national and the developer of Tornado Cash, had awaited trial in the Netherlands on money laundering charges since his arrest in Amsterdam in August 2022, just days after the U.S. Treasury Department sanctioned the service for facilitating malicious actors like the Lazarus Group in laundering their illicit proceeds from cybercriminal activities.

“The defendant declared that it was never his intention to break the law or to facilitate criminal activities,” according to a machine-translated summary of the judgement. Instead, Pertsev intended to offer a legitimate solution with Tornado Cash to a growing crypto community that craved privacy. He argued that “it is up to the users not to abuse Tornado Cash.” Pertsev also said that, given the technical specifications of the cryptocurrency mixer service, it was impossible for him to prevent the abuse.

However, the District Court of East Brabant disagreed, asserting that responsibility for Tornado Cash's operations lay solely with its founders and that the service lacked adequate mechanisms to prevent abuse. “Tornado Cash functions in the way the defendant and his cofounders developed Tornado Cash. So, the operation is completely their responsibility,” the Court said. “If the defendant had wanted to have the possibility to take action against abuse, then he should have built it in. But he did not.”

“Tornado Cash does not pose any barrier for people with criminal assets who want to launder them. That is why the court regards the defendant guilty of the money laundering activities as charged.”

Tornado Cash functioned as a decentralized cryptocurrency mixer, also known as a tumbler, allowing users to obscure the blockchain transaction trail by mixing illegally and legitimately obtained funds, making it an appealing option for adversaries seeking to cover their illicit money links. Tornado Cash laundered $1.2 billion worth of cryptocurrency stolen through at least 36 hacks, including the theft of $625 million in the Axie Infinity hack of March 2022 by North Korea’s Lazarus Group hackers.

The Court used certain undisclosed parameters to select these hacks, which is why only 36 were taken into consideration. Without these parameters, more than $2.2 billion worth of illicit proceeds from the Ether cryptocurrency were likely laundered. The Court also did not rule out the possibility of Tornado Cash laundering cryptocurrency derived from other crimes.

The Court further described Tornado Cash as combining “maximum anonymity and optimal concealment techniques” without incorporating provisions to “make identification, control or investigation possible.” It failed to implement Know Your Customer (KYC) or anti-money laundering (AML) programs as mandated by U.S. federal law and was not registered with the U.S. Financial Crimes Enforcement Network (FinCEN) as a money-transmitting entity.

"Tornado Cash is not a legitimate tool that has unintentionally been abused by criminals," it concluded. "The defendant and his co-perpetrators developed the tool in such a manner that it automatically performs the concealment acts that are needed for money laundering."

In addition to the prison term, Pertsev was ordered to forfeit cryptocurrency assets valued at €1.9 million (approximately $2.05 million) and a previously seized Porsche car.

Other Tornado Cash Co-Founders Face Trials Too

A year after Pertsev’s arrest, the U.S. Department of Justice unsealed an indictment in which the two other co-founders, Roman Storm and Roman Semenov, were charged with conspiracy to commit money laundering, conspiracy to operate an unlicensed money-transmitting business, and conspiracy to violate the International Emergency Economic Powers Act. Storm goes to trial in the Southern District of New York in September, while Semenov remains at large.

The case has drawn debate between two sides – privacy advocates and governments. Privacy advocates argue against the criminalization of anonymity tools like Tornado Cash, which give users a way to avoid financial surveillance, while governments have taken a firm stance against unregulated offerings susceptible to exploitation by bad actors for illicit purposes.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Vermont Legislature Passes One of the Strongest Data Privacy Measures in the Country

14 May 2024 at 22:05

The Vermont legislature passed a bill that prohibits the sale of sensitive data, such as Social Security and driver's license numbers and financial or health information.

The post Vermont Legislature Passes One of the Strongest Data Privacy Measures in the Country appeared first on SecurityWeek.

Threat Actor Scraped Dell Support Tickets, Including Customer Phone Numbers

By: msmash
14 May 2024 at 12:54
The person who claimed to have stolen the physical addresses of 49 million Dell customers appears to have taken more data from a different Dell portal, TechCrunch reported Tuesday. From the report: The newly compromised data includes names, phone numbers and email addresses of Dell customers. This personal data is contained in customer "service reports," which also include information on replacement hardware and parts, comments from on-site engineers, dispatch numbers, and in some cases diagnostic logs uploaded from the customer's computer. Several reports seen by TechCrunch contain pictures apparently taken by customers and uploaded to Dell when seeking technical support. Some of these pictures contain metadata revealing the precise GPS coordinates of the location where the customer took the photos, according to a sample of the scraped data obtained by TechCrunch.

Read more of this story at Slashdot.

Understanding CUI: What It Is and Guidelines for Its Management

13 May 2024 at 15:44

It sounds official — like it might be the subject of the next action-packed, government espionage, Jason Bourne-style thriller. Or maybe put it before the name of a racy city and have your next hit crime series. A history of mysterious aliases like “official use only,” “law enforcement sensitive,” and “sensitive but unclassified” only adds...

The post Understanding CUI: What It Is and Guidelines for Its Management appeared first on Hyperproof.

The post Understanding CUI: What It Is and Guidelines for Its Management appeared first on Security Boulevard.

FBI/CISA Warning: ‘Black Basta’ Ransomware Gang vs. Ascension Health

13 May 2024 at 13:08

Будет! Russian ransomware rascals riled a Roman Catholic healthcare organization.

The post FBI/CISA Warning: ‘Black Basta’ Ransomware Gang vs. Ascension Health appeared first on Security Boulevard.

Why car location tracking needs an overhaul

13 May 2024 at 06:48

Across America, survivors of domestic abuse and stalking are facing a unique location tracking crisis born out of policy failure, unclear corporate responsibility, and potentially risky behaviors around digital sharing that are now common in relationships.

No, we’re not talking about stalkerware. Or hidden Apple AirTags. We’re talking about cars.

Modern cars are the latest consumer “device” to undergo an internet-crazed overhaul, as manufacturers increasingly stuff their automobiles with the types of features you’d expect from a smartphone, not a mode of transportation.

There are cars with WiFi, cars with wireless charging, cars with cameras that not only help while you reverse out of a driveway, but which can detect whether you’re drowsy while on a long haul. Many cars now also come with connected apps that allow you to, through your smartphone, remotely start your vehicle, schedule maintenance, and check your tire pressure.

But one feature in particular, which has legitimate uses in responding to stolen and lost vehicles, is being abused: Location tracking.

It’s time car companies do something about it.  

In December, The New York Times revealed the story of a married woman whose husband was abusing the location tracking capabilities of her Mercedes-Benz sedan to harass her. The woman tried every avenue she could to distance herself from her husband. After her husband became physically violent in an argument, she filed a domestic abuse report. Once she fled their home, she got a restraining order. She ignored his calls and texts.

But still her husband could follow her whereabouts by tracking her car—a level of access that Mercedes representatives reportedly could not turn off, as he was considered the rightful owner of the vehicle (according to The New York Times, the husband’s higher credit score convinced the married couple to have the car purchased in his name alone).

As reporter Kashmir Hill wrote of the impasse:

“Even though she was making the payments, had a restraining order against her husband and had been granted sole use of the car during divorce proceedings, Mercedes representatives told her that her husband was the customer so he would be able to keep his access. There was no button she could press to take away the app’s connection to the vehicle.”

This was far from an isolated incident.

In 2023, Reuters reported that a San Francisco woman sued her husband in 2020 for allegations of “assault and sexual battery.” But some months later, the woman’s allegations of domestic abuse grew into allegations of negligence—this time, against the carmaker Tesla.

Tesla, the woman claimed in legal filings, failed to turn off her husband’s access to the location tracking capabilities in their shared Model X SUV, despite the fact that she had obtained a restraining order against her husband, and that she was a named co-owner of the vehicle.

When The New York Times retrieved filings from the San Francisco lawsuit above, attorneys for Tesla argued that the automaker could not realistically play a role in this matter:

“Virtually every major automobile manufacturer offers a mobile app with similar functions for their customers,” the lawyers wrote. “It is illogical and impractical to expect Tesla to monitor every vehicle owner’s mobile app for misuse.”

Tesla was eventually removed from the lawsuit.

In the Reuters story, reporters also spoke with another woman who alleged that her ex-husband had tracked her location by using the Tesla app associated with her vehicle. Because she was a “primary” account owner, she was able to remove the car’s access to the internet, Reuters reported.

A better path

Location tracking—and the abuse that can come with it—is a much-discussed topic for Malwarebytes Labs. But the type of location tracking abuse that is happening with shared cars is different because of the value that cars hold in situations of domestic abuse.

A car is an opportunity to physically leave an abusive partner. A car is a chance to start anew in a different, undisclosed location. In harrowing moments, cars have also served as temporary shelter for those without housing.

So when a survivor’s car is tracked by their abuser, it isn’t just a matter of their location and privacy being invaded, it is a matter of a refuge being robbed.

In speaking with the news outlet CalMatters, Yenni Rivera, who works on domestic violence cases, explained the stressful circumstances of exactly this dynamic.

“I hear the story over and over from survivors about being located by their vehicle and having it taken,” Rivera told CalMatters. “It just puts you in a worst case situation because it really triggers you thinking, ‘Should I go back and give in?’ and many do. And that’s why many end up being murdered in their own home. The law should make it easier to leave safely and protected.”

Though the state of California is considering legislative solutions to this problem, national lawmaking is slow.

Instead, we believe that the companies that have the power to do something should act on that power. Much like how Malwarebytes and other cybersecurity vendors banded together to launch the Coalition Against Stalkerware, automakers should work together to help users.

Fortunately, an option may already exist.

When the Alliance for Automotive Innovation warned that consumer data collection requests could be weaponized by abusers who want to comb through the car location data of their partners and exes, the automaker General Motors already had a protection built in.

According to Reuters, the roadside assistance service OnStar, which is owned by General Motors, allows any car driver—be they a vehicle’s owner or not—to hide location data from other people who use the same vehicle. Rivian, a new electric carmaker, is reportedly working on a similar feature, senior vice president of software development Wassym Bensaid told Reuters.

Though Reuters reported that Rivian had not heard of its technology being leveraged in a situation of domestic abuse, Bensaid believed that “users should have a right to control where that information goes.”

We agree.



Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools

By: Tom Eston
13 May 2024 at 00:00

In this first-ever in-person recording of Shared Security, Tom and Kevin, along with special guest Matt Johansen from Reddit, discuss their experience at the RSA conference in San Francisco, including their walk-through of ‘enhanced security’ and the humorous misunderstanding that ensued. The conversation moves to the ubiquity of AI and machine learning buzzwords at the […]

The post Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools appeared first on Shared Security Podcast.

The post Live at RSA: AI Hype, Enhanced Security, and the Future of Cybersecurity Tools appeared first on Security Boulevard.


Dell Data Breach Could Affect 49 Million Customers – Source: securityboulevard.com


Source: securityboulevard.com – Author: Jeffrey Burt Dell is sending emails to as many as 49 million people about a data breach that exposed their names, physical addresses, and product order information. According to the brief message, bad actors breached a Dell portal that contains a database “with limited types of customer information related to purchases […]

The post Dell Data Breach Could Affect 49 Million Customers – Source: securityboulevard.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Dell notifies customers about data breach

10 May 2024 at 10:04

Dell is warning its customers about a data breach after a cybercriminal offered a 49 million-record database of information about Dell customers on a cybercrime forum.

A cybercriminal called Menelik posted the following message on the “Breach Forums” site:

“The data includes 49 million customer and other information of systems purchased from Dell between 2017-2024.

It is up to date information registered at Dell servers.

Feel free to contact me to discuss use cases and opportunities.

I am the only person who has the data.”

Data Breach forums post by Menelik
Screenshot taken from the Breach Forums

According to Menelik the data includes:

  • The full name of the buyer or company name
  • Address including postal code and country
  • Unique seven-digit service tag of the system
  • Shipping date of the system
  • Warranty plan
  • Serial number
  • Dell customer number
  • Dell order number

Most of the affected systems were sold in the US, China, India, Australia, and Canada.

Users on Reddit reported getting an email from Dell which was apparently sent to customers whose information was accessed during this incident:

“At this time, our investigation indicates limited types of customer information was accessed, including:

  • Name
  • Physical address
  • Dell hardware and order information, including service tag, item description, date of order and related warranty information.

The information involved does not include financial or payment information, email address, telephone number or any highly sensitive customer information.”

Dell might be trying to play down the seriousness of the situation by claiming that there is not a significant risk to its customers given the type of information involved. Still, it is reassuring that no email addresses were included. Email addresses are a unique identifier that can allow data brokers to merge and enrich their databases.

So, this is another big data breach that leaves us with more questions than answers. We have to be careful that we don’t shrug these data breaches away with comments like “they already know everything there is to know.”

This kind of information is exactly what scammers need in order to impersonate Dell support.

Protecting yourself from a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify any contacts using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your digital footprint

If you want to find out how much of your data has been exposed online, you can try our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and we’ll send you a free report.

Massive Online Shopping Scam Racks Up 850,000 Victims – Source: securityboulevard.com


Source: securityboulevard.com – Author: Jeffrey Burt A group of bad actors — likely from China — is running a global cybercrime-as-a-service operation. It oversees a massive network of fake shopping websites that has conned more than 850,000 people in the United States and Europe into purchasing items, over the past three years, and the organization […]

The post Massive Online Shopping Scam Racks Up 850,000 Victims – Source: securityboulevard.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Maryland Passes Two Bills Limiting Tech Platforms' Ability To Track Users

By: BeauHD
9 May 2024 at 15:26
An anonymous reader quotes a report from The Verge: The Maryland legislature passed two bills over the weekend limiting tech platforms' ability to collect and use consumers' data. Maryland Governor Wes Moore is expected to sign one of those bills, the Maryland Kids Code, on Thursday, MoCo360 reports. If signed into law, the other bill, the Maryland Online Privacy Act, will go into effect in October 2025. The legislation would limit platforms' ability to collect user data and let users opt out of having their data used for targeted advertising and other purposes. Together, the bills would significantly limit social media and other platforms' ability to track their users -- but tech companies, including Amazon, Google, and Meta, have opposed similar legislation. Lawmakers say the goal is to protect children, but tech companies say the bills are a threat to free speech. Part of the Maryland Kids Code -- the Maryland Age-Appropriate Design Code Act -- will go into effect much sooner, on October 1st. It bans platforms from using "system design features to increase, sustain, or extend the use of the online product," including autoplaying media, rewarding users for spending more time on the platform, and spamming users with notifications. Another part of the legislation prohibits certain video game, social media, and other platforms from tracking users who are younger than 18. "It's meant to rein in some of the worst practices with sensible regulation that allows companies to do what's right and what is wonderful about the internet and tech innovation, while at the same time saying, 'You can't take advantage of our kids,'" Maryland state Delegate Jared Solomon, one of the bill's sponsors, said in a press conference Wednesday. "We are technically the second state to pass a kids code," Solomon told The New York Times. "But we are hoping to be the first state to withstand the inevitable court challenge that we know is coming."

Read more of this story at Slashdot.

Dell Says Data Breach Involved Customers' Physical Addresses

By: msmash
9 May 2024 at 11:27
Technology giant Dell notified customers on Thursday that it experienced a data breach involving customers' names and physical addresses. TechCrunch: In an email seen by TechCrunch and shared by several people on social media, the computer maker wrote that it was investigating "an incident involving a Dell portal, which contains a database with limited types of customer information related to purchases from Dell." Dell wrote that the information accessed in the breach included customer names, physical addresses, and "Dell hardware and order information, including service tag, item description, date of order and related warranty information." Dell did not say if the incident was caused by malicious outsiders or inadvertent error. The breached data did not include email addresses, telephone numbers, financial or payment information, or "any highly sensitive customer information," according to the company. The company downplayed the impact of the breach in the message.

Read more of this story at Slashdot.

DocGo patient health data stolen in cyberattack

9 May 2024 at 06:46

Healthcare provider DocGo has disclosed in a Form 8-K filing that it experienced a cybersecurity incident involving some of the company’s systems. As part of its investigation of the incident, the company says it has determined that the attacker accessed and acquired data, including certain protected health information.

DocGo is a healthcare provider that offers mobile health services, ambulance services, and remote monitoring for patients in 30 US states, and across the United Kingdom. On its company website it touts over 7,000,000 patient interactions.

In the same form, DocGo says the breach concerns a limited number of healthcare records within the company’s US-based ambulance transportation business, and that no other business lines have been involved.

DocGo says it is actively reaching out to those individuals who had their data compromised in the attack.  

So far, we have no indication what the nature of the cyberattack was, but it is almost standard procedure nowadays for ransomware groups to use stolen data as extra leverage to get the victim to pay the ransom.

Protecting yourself from a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify any contacts using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your digital footprint

Malwarebytes has a new free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

Ransomware Attacks are Up, but Profits are Down: Chainalysis

8 May 2024 at 15:40

In the ever-evolving world of ransomware, it’s getting easier for threat groups to launch attacks – as evidenced by the growing number of incidents – but more difficult to make a profit. Organizations’ cyber-defenses are getting more resilient, decryptors enable victims to regain control of their data, and law enforcement is cracking down on high-profile cybercrime...

The post Ransomware Attacks are Up, but Profits are Down: Chainalysis appeared first on Security Boulevard.

How to Block Companies From Tracking You Online

7 May 2024 at 18:00

On April 24, President Joe Biden signed a bill that could see TikTok banned in the United States if it does not divest its American operations to a U.S.-owned company. Among the reasons for this: data privacy. Like any social media app, TikTok collects a treasure trove of data and personal information, and as a Chinese-owned company, there are concerns that it could be forced to supply that data to the Chinese government.

“I don’t have a TikTok account,” you might think. “I’m fine.” But the modern internet is more complicated than that. Through ads and deals, data brokers are able to hide cookies, scripts, and “tracking pixels” on completely unrelated sites and even emails, which they can then use to find out your purchase history and other valuable data. And the perpetrators include more than TikTok—Meta is perhaps the most well-known, going so far as to publicize how it scrapes your data.

That means you could be vulnerable to tracking from services like TikTok and Facebook even if you've never once used them. Luckily, there are tools in place that can find out when you’re being tracked and who’s doing it.

How do companies track me?

Currently, there are two major methods of data tracking online: cookies, which are on the way out, and tracking pixels, which are a bit more complicated.

You’ve probably heard the term cookies before. These are little packets of information that let websites remember things like your login session, so you don’t need to sign in every single time you visit. But in addition to these “necessary” cookies, there are also third-party cookies that can track your browsing session, information that can be sold to data firms later.
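
To make the distinction concrete, here is a rough sketch (assuming the third-party requests library; the URL is a placeholder) that lists the cookies a single page sets for your client. Note that a bare HTTP client doesn’t fetch embedded ads or images, so the third-party cookies a real browser would collect from those resources won’t show up here.

```python
# Rough sketch: list the cookies one page sets directly.
# Assumes the third-party "requests" library; the URL is a placeholder.
# A plain HTTP client doesn't load embedded ads or images, so the
# third-party cookies a real browser would pick up won't appear here.
import requests

response = requests.get("https://example.com", timeout=10)
for cookie in response.cookies:
    print(f"name={cookie.name!r} domain={cookie.domain!r} expires={cookie.expires}")
```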

These are probably the most obvious way you might get tracked online. If you’ve recently visited a website that operates in the EU (or certain states), you’ve probably noticed a form asking you to consent to cookies. These are what those forms are talking about, and while clicking through them can be a brief annoyance, they’ve gone a long way to making cookies less sneaky and far easier to block.

Throw in Google’s oft-delayed but still planned attempt to kill the cookie outright, and data brokers have had to get more clever.

Enter the tracking pixel. These operate in a similar fashion to cookies, but use images rather than text. Essentially, companies can hide transparent or otherwise invisible pixels on your screen, and get pinged when your browser loads them, allowing them to track which parts of a website you’re accessing and when.

It’s a real letter vs. spirit of the law thing, as while the principle remains the same, there’s little legislation on tracking pixels, meaning users who had gotten used to the government crackdown on cookies now have to go back to square one when it comes to data vigilance. Nowadays, some site elements even come bundled with their own scripts that can go further than cookies ever did.
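
For a feel of what this looks like in practice, here is a rough, hypothetical sketch that fetches a page and flags likely pixel candidates: tiny or hidden images loaded from third-party hosts. It assumes the requests and beautifulsoup4 libraries and a deliberately simple heuristic, so it will miss script-based trackers and may flag innocent images; the extensions discussed below do this job continuously and far more accurately.

```python
# Heuristic tracking-pixel scan (assumes requests and beautifulsoup4).
# Classic pixels look roughly like:
#   <img src="https://tracker.example/p.gif" width="1" height="1"
#        style="display:none">
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def pixel_candidates(page_url: str) -> list[str]:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    page_host = urlparse(page_url).netloc
    hits = []
    for img in soup.find_all("img"):
        src = urljoin(page_url, img.get("src", ""))   # resolve relative URLs
        host = urlparse(src).netloc
        tiny = img.get("width") == "1" and img.get("height") == "1"
        hidden = "display:none" in img.get("style", "").replace(" ", "")
        if host and host != page_host and (tiny or hidden):
            hits.append(src)  # third-party image you were never meant to see
    return hits

print(pixel_candidates("https://example.com"))
```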

How do I know when I’m being tracked?

There’s a benefit to how tracking pixels and scripts integrate directly with a website's code: With enough elbow grease, you can know when you’re being watched.

When tracking pixels are loaded into a site, you can actually see their tags in that site’s code. If you know what to look for, just right-click and select Inspect from the drop-down menu to begin investigating. This will work on Chrome, Firefox, and Microsoft Edge, although Safari takes a bit more work.

Generally, though, you don’t want to do this manually. There are tools that automate the process for you, plus give context for what you're looking at.

A graphic demonstrating Feroot PageScanner
Credit: Feroot

The most recent and robust is Feroot PageScanner, a free Chrome extension developed by some of the voices who testified on TikTok for Congress.

Feroot PageScanner has perhaps the most immediate interface for informing you when your data is being tracked. While it won’t do anything to block trackers, it places notifications on your screen in real time that tell you when your data is being tracked and by whom. Its menu also gives you a detailed list of active trackers, who they’re run by, and what purpose they serve. Plus, you’ll be able to sort through any scripts being run on the webpage you’re visiting, all without having to enter the Inspect menu.

It’s intended for enterprise clients running security analyses on their sites, especially those looking to meet PCI compliance. But it’s a great place for anyone to start, as it gives an in-depth, if somewhat scary, look at the scope of the problem.

“TikTok is not the biggest problem by far,” said Feroot CEO Ivan Tsarynny, who had previously testified on TikTok for Congress.

How to block online trackers

Once you know the scope of the problem, there are multiple tools that can help you take control of your privacy online.

A graphic demonstrating Ghostery
Credit: Ghostery

Ghostery works like PageScanner, except it can go a step further and actually restrict trackers. The counterpoint is that its information isn’t as in-depth as PageScanner's, so while it will tell you where trackers come from and what purpose they serve, you won’t get those pop-up notifications or be able to sort scripts. According to Tsarynny, Ghostery also has conflicts with PageScanner, so it’s best used to act on threats once you’ve already identified them.

Ghostery is available both as an extension for most browsers, or as its own standalone browser that comes with its features built-in. It also runs a privacy-focused search engine that is similarly available as a browser extension or as its own website.

If you’d rather not install anything, you can also see which trackers are active where by going to Ghostery’s whotracks.me site.

But while Ghostery is open-source, it has come under fire in the past for selling user data and replacing the ads it blocks with its own. Since an acquisition in 2017, Ghostery has been working on repairing its reputation, and it now operates "fully on user donations/contributions," according to a representative who spoke with me over email.

A graphic representing uBlock Origin
Credit: Raymond Hill and Nik Rolls

uBlock Origin is another open source ad blocker, and while it can be a touch harder to understand and use than Ghostery, there’s no doubt that it’s the most powerful of your options. It can block pretty much any element on any site with laser precision, and while it comes with block lists built-in, you can also create and import your own. The downside is that it gives you less information on how and when you’re being tracked compared to PageScanner or Ghostery, as it simply prints out blocked tags and ads and expects you to know how to parse them. It is available as an extension on Chromium and Firefox browsers.

A screenshot demonstrating Privacy Badger
Credit: EFF

Privacy Badger has a similar function and interface to uBlock Origin, but is focused more on trackers than ads. Also open source, its interface doesn’t provide much detail on how you’re being tracked, and there’s no ad-blocking here unless an ad is tracking you. What Privacy Badger does do is learn to block trackers over time. You have two choices here. First, Privacy Badger’s developers are continuously testing tags and scripts for invasive techniques, and regularly update the extension with new trackers to block. Second, and disabled by default, is local learning. Local learning allows Privacy Badger to learn from your own browsing habits, and while it can make you more identifiable to trackers, it can be useful if you regularly visit unpopular websites. Privacy Badger is available on Chromium and Firefox browsers. Local learning can be toggled on and off via the Options page.

Finally, outside of the realm of extensions and websites that block tracking, there are VPNs. A VPN essentially hides your browsing data by routing it through other servers, obscuring your IP address. The best VPNs are paid services, but a few will encrypt your data for free. Don’t trust every free VPN you come across, but names like Proton VPN and TunnelBear are as reputable as the big guys, if less robust.

Note that tracking pixels can also show up in emails. To protect yourself from these, follow our guide on how to stop email images from loading by default.

Six Ways to Give Away Less of Your Personal Data

4 May 2024 at 09:30

Sometimes it feels like privacy, as a concept, has vanished from the world. Advertisers certainly seem to know everything about you, serving up frighteningly accurate ads that make you think your phone’s microphone has been turned on and marketers are actively listening to your every mumble.

They’re not—yet. But they are engaged in something called “data mining,” which is the process of collecting enormous amounts of anonymous data from your every connected activity and then analyzing that data to infiltrate your life with advertisements and other influences. And it’s not just corporate America—criminals can mine your data in order to rip you off.

If that bugs you—and it should—you can take some steps to minimize data mining in your life. You can’t completely escape it unless you plan to live off-grid with zero Internet connection, but you can reduce your exposure. After all, it’s your data, you’re not being compensated for it, and it’s creepy that some anonymous marketing team knows you’re really into RPGs and craft beer.

Read those EULAs

One of the biggest vectors for mining your data is your smartphone, especially the apps you’ve installed on it. Every time you install an app you agree to its terms—the end user license agreement (EULA) and other requirements.

A first line of defense against data mining is to take the time to review those EULAs. You can’t negotiate, but if you see you’re being asked for blanket permission to send data back to the mothership, you might at least look for an alternative. The key warning signs that an app is just a data-mining vessel are requests for permission to monitor your Internet activity, to explicitly collect personal information, or to use your computer or device for the developer’s own purposes. If you see anything that gives you pause, think twice before agreeing.

Check settings

When you install an app on your device, you probably click through a series of permissions that grant that app access to everything it needs to gather data about you. This is a data-mining goldmine.

A few years ago, for example, an investigation found that about 5,400 apps were siphoning data from just one person’s smartphone—1.5 gigs of data in all. And back in 2017, an app maker called Alphonso was caught tracking what people were watching on TV by activating the microphone on their smartphones.

If an app requires a lot of unnecessary permissions—does a game really need access to your microphone, location, and camera?—you should assume it’s more of a data-mining app than anything else. Your next line of defense: stop installing garbage “free” apps and spend that dollar. Every app wants to make money from you, and if you’re not paying up front, you’re paying in some other way, most likely by having your data strip-mined.

Be boring on social

Social media is very obviously a dumpster fire when it comes to privacy. You’re literally posting a photo of you at the store with the hashtag #LiveToShop, so you shouldn’t be surprised when that store’s ads start popping up all over your life.

If you’re concerned about data mining, you can take a few simple steps to reduce the access that data miners have to your social media:

  • Set your profile to private. If your main goal on social media is to connect with friends or colleagues, restrict the reach of your posts to just those folks.

  • Be a snob. Don’t accept every request you receive to connect—if you don’t know that person, they don’t need to be let in to your inner circle.

  • Be discreet. Don’t blast your travel plans, spending habits, or product reviews out into the universe.

Using social media compromises your privacy, but if you’re mindful of the information data miners want, you can at least refuse to make it easy.

Log out

When you log into platforms like Google or Facebook, that platform can pretty easily track what you’re doing. And as long as you’re signed in, that ability persists—even if you leave the site. These companies are really data mining companies, and they have perfected the art of following you around.

It’s a pain in the butt, but logging out of those services when you’re not actively using them (and clearing cookies and browsing history regularly) can slow the vacuuming of your data. The inconvenience is by design, but logging out has a real impact on how much information can be mined from your online activities.

Avoid memes

Data mining isn’t just about advertisers selling your stuff. It can also be weaponized by scammers to get personal info they can use to rob you blind, steal your identity, or steal your identity and then rob you blind.

One easy way they do this is to simply wait for you to respond to a phishing meme. These memes look like innocent, fun quizzes in which you supply some seemingly innocuous bits of personal information and receive a chuckle in response. Common examples include posting your “porn name” (a combination of common security-question answers, like your middle name and the model of your first car) or using the last digits of your phone number to do some math magic.

Luckily, there’s an easy way to avoid data mining via phishing memes: Ignore the memes. Your life will actually be incrementally better anyway.

Tech solutions

One of the most effective ways to cut down your exposure to data mining requires a bit more effort. Various privacy tools exist that can really stem the flow of your data to the unappeasable black hole of marketing:

  • VPNs. Virtual private networks are useful for privacy because they obscure your location and IP address, which makes it much harder for data miners to collate the data they collect. When your traffic appears to come from a range of random locations, building a coherent profile of your preferences and habits becomes far more difficult. Installing a VPN on your computer, phone, and other devices will go a long way toward cutting off the flow of private information.

  • Tor. The Tor Browser routes your web surfing traffic through many encrypted nodes, making it basically impossible to track your travels on the Internet. If you really want to go dark, combine Tor with a VPN and you’ll be practically invisible. If you’re not ready to use Tor as your everyday browser, use a privacy-focused browser like DuckDuckGo or Brave, or at least adjust the privacy settings in your browser to make it as secure as possible.

  • Ad blockers. Almost every website you visit tracks your activities and gathers data about you. While using a privacy browser is an effective way to stifle that, ad-blocking plugins can go the extra mile by blocking ads and their embedded trackers outright.

The U.S. House Version of KOSA: Still a Censorship Bill

3 May 2024 at 12:48

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who lack the proper ID or who value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.


Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. The House version does not alleviate that concern. For example, it could still:

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S.;
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tend to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws.  It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources. 

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month Congress attached another contentious, potentially unconstitutional bill to unrelated legislation, by including a bill banning TikTok inside of a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead pursue alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.


Biden Signed the TikTok Ban. What's Next for TikTok Users?

Over the last month, lawmakers moved swiftly to pass legislation that would effectively ban TikTok in the United States, eventually including it in a foreign aid package that was signed by President Biden. The impact of this legislation isn’t entirely clear yet, but what is clear: whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. 

What Happens Next?

At the moment, TikTok isn’t “banned.” The law gives ByteDance 270 days to divest TikTok before the ban takes effect on January 19, 2025. In the meantime, we expect courts to determine that the bill is unconstitutional. Though there is no lawsuit yet, one on behalf of TikTok itself is imminent.

There are three possible outcomes. If the law is struck down, as it should be, nothing will change. If ByteDance divests TikTok by selling it, the platform would likely remain usable. However, there’s no telling whether the app’s new owners would change its functionality, its algorithms, or other aspects of the company. As we’ve seen with other platforms, a change in ownership can result in significant changes that affect the audience in unexpected ways. In fact, that’s one of the stated reasons for forcing the sale: so that TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. This is despite the fact that it has been well-established law for almost 60 years that people in the U.S. have a First Amendment right to receive foreign propaganda.

Lastly, if ByteDance refuses to sell, users in the U.S. will likely see it disappear from app stores sometime between now and that January 19, 2025 deadline. 

How Will the Ban Be Implemented? 

The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation. 

The law explicitly denies the Attorney General the authority to enforce it against individual users of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can still access it.

Will I Be Able to Download or Use TikTok If ByteDance Doesn’t Sell? 

It’s possible some U.S. users will find routes around the ban. But the vast majority will probably not, significantly shifting the platform's user base and content. If ByteDance itself assists in the distribution of the app, it could also be found liable, so even if U.S. users continue to use the platform, the company’s ability to moderate and operate the app in the U.S. would likely be impacted. Bottom line: for a period of time after January 19, it’s possible that the app would be usable, but it’s unlikely to be the same platform—or even a very functional one in the U.S.—for very long.

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and has called out other nations when they shut down internet access or ban social media apps and other online communications tools, deeming such restrictions undemocratic. Enacting this legislation undermines that long-standing principle, and it undermines the U.S. government’s moral authority to call out other nations for doing the same.

There are a few reasons legislators have given to ban TikTok. One is to change the type of content on the app—a clear First Amendment violation. The second is to protect data privacy. Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries. They should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation. Instead, as happens far too often, our government’s actions are vastly overreaching while also deeply underserving the public. 

Dropbox Sign customer data accessed in breach

2 May 2024 at 16:44

Dropbox is reporting a recent “security incident” in which an attacker gained unauthorized access to the Dropbox Sign (formerly HelloSign) production environment and, from there, to Dropbox Sign customer information.

Dropbox Sign is a platform that allows customers to digitally sign, edit, and track documents. The accessed customer information includes email addresses, usernames, phone numbers, and hashed passwords, in addition to general account settings and certain authentication information such as API keys, OAuth tokens, and multi-factor authentication details. The exposure is limited to Dropbox Sign customers and does not affect users of other Dropbox services, because the environments are largely separate.

“We believe that this incident was isolated to Dropbox Sign infrastructure and did not impact any other Dropbox products.”

Even if you never created a Dropbox Sign account, if you received or signed a document through Dropbox Sign, your email address and name were exposed. In an 8-K filing about the incident with the US Securities and Exchange Commission (SEC), Dropbox says it found no evidence of unauthorized access to the contents of customers’ accounts (i.e. their documents or agreements) or to their payment information.

The attacker compromised a back-end service account that acted as an automated system configuration tool for the Dropbox Sign environment, then used that account’s production privileges to access the customer database.

To limit the aftermath of the incident, Dropbox’s security team reset users’ passwords, logged users out of any devices they had connected to Dropbox Sign, and is coordinating the rotation of all API keys and OAuth tokens.

For customers with API access to Dropbox Sign, the company said new API keys will need to be generated and warned that certain functionality will be restricted while they deal with the breach.

Dropbox says it has reported this event to data protection regulators and law enforcement.

Recommendations

Dropbox expired affected passwords and logged users out of any devices they had connected to Dropbox Sign for further protection. The next time these users log in to their Sign account, they’ll be sent an email to reset the password. Dropbox recommends users do this as soon as possible.

If you’re an API customer, to ensure the security of your account, you’ll need to rotate your API key by generating a new one, configuring it with your application, and deleting your current one. Here is how you can easily create a new key.
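One rotation-friendly pattern (a sketch, not Dropbox’s official guidance; the variable name is hypothetical and the endpoint should be verified against current Dropbox Sign documentation) is to keep the key in an environment variable, so swapping in the new key is a configuration change rather than a code change: generate the new key, update the variable, restart the application, then delete the old key.

```ts
// Sketch (Node.js 18+): read the Dropbox Sign API key from the environment.
// DROPBOX_SIGN_API_KEY is a hypothetical variable name of our choosing.
const apiKey = process.env.DROPBOX_SIGN_API_KEY;
if (!apiKey) throw new Error("DROPBOX_SIGN_API_KEY is not set");

// The HelloSign-era REST API authenticates with HTTP Basic auth, using the
// key as the username and an empty password; confirm against current docs.
const res = await fetch("https://api.hellosign.com/v3/account", {
  headers: {
    Authorization: "Basic " + Buffer.from(`${apiKey}:`).toString("base64"),
  },
});
if (!res.ok) throw new Error(`API check failed: ${res.status}`); // old key deleted?
```

A quick authenticated call like this after rotating confirms the new key works before you revoke the old one.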

API customers should be aware that names and email addresses for those who received or signed a document through Dropbox Sign, even if they never created an account, were exposed. So, this may impact their customers.

Customers who use an authenticator app for multi-factor authentication should reset it: delete the existing Dropbox Sign entry, then add it again. If you use SMS, you do not need to take any action.

If you reused your Dropbox Sign password on any other services, we strongly recommend that you change your password on those accounts and use multi-factor authentication when available.

Protecting yourself from a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password; 2FA that relies on a FIDO2 device can’t be, because each login is cryptographically bound to the real website (see the sketch after this list).
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify any contacts using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
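To see why FIDO2 resists phishing, here is a minimal browser-side sketch; the domain is a placeholder, and in a real deployment the challenge comes from the server rather than being generated locally.

```ts
// Requesting a WebAuthn assertion in the browser. The credential is scoped
// to the rpId below, and the browser embeds the page's true origin in the
// signed clientDataJSON, so a look-alike phishing domain can't reuse it.
const assertion = await navigator.credentials.get({
  publicKey: {
    // Placeholder: real deployments use a server-issued, per-login challenge.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rpId: "example.com", // the credential only works for this registered domain
    userVerification: "preferred",
  },
});
// The server verifies the signature and checks the origin recorded inside
// clientDataJSON; an assertion captured on a phishing site fails that check.
```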

Check your digital footprint

Malwarebytes has a new free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.



Psychotherapy practice hacker gets jail time after extorting patients, publishing personal therapy notes online

2 May 2024 at 09:28

On October 30, 2020, I started an article with the words:

“Hell is too nice a place for these people.”

That outrage was directed at the cybercriminals behind an attack on the Finnish psychotherapy practice Vastaamo. Because Vastaamo was a psychotherapy practice, its records contained extremely sensitive and confidential information about some of the most vulnerable people.

Sadly, the attacker did not stop at extorting the clinic but also sent extortion messages to the patients, asking them to pay around $240 to prevent their data from being published online. And that was a first, as far as we know—not just demanding a ransom from the breached organization, but also from all those that were unlucky enough to have their data on record there.

The attacker demanded a €400,000 ($425,000) ransom from the company. When it refused to pay, he emailed thousands of patients asking for €200 and threatening to publish their therapy notes and personal details on the dark web if they didn’t pay. He ended up publishing the data anyway.

As a result of this cyberattack and the extortion attempts:

  • Vastaamo’s board fired the CEO, holding him responsible for knowing about the breaches and the shortcomings in the psychotherapy provider’s data security systems.
  • Vastaamo’s owner, who bought the practice a few months after the second breach but was not informed about it, began legal proceedings related to its purchase.
  • Vastaamo had to shut its doors because it could not meet its financial obligations.
  • The Finnish government contemplated expanding the options for individuals to change their social security number in certain circumstances, such as the aftermath of a hacking incident.
  • At least one suicide has been linked to the case.

Now the attacker has been convicted: 26-year-old Julius Kivimäki has been sentenced to six years and three months in prison. Kivimäki, known online as Zeekill, was a leading member of several groups of teenage cybercriminals that caused chaos between 2009 and 2015, among them the infamous Lizard Squad.

At the age of 17, Kivimäki was convicted of more than 50,000 computer hacks and given a two-year prison sentence, suspended because he was 15 and 16 when he carried out the crimes in 2012 and 2013.

Despite the conviction, the Vastaamo case is not over as civil court cases are now likely to begin to seek compensation for the victims of the hack.



Wireless carriers fined $200 million after illegally sharing customer location data

1 May 2024 at 05:35

After four years of investigation, the Federal Communications Commission (FCC) has concluded that four of the major wireless carriers in the US violated the law in sharing access to customers’ location data.

The FCC fined AT&T, Sprint, T-Mobile, and Verizon a total of almost $200 million for “illegally sharing access to customers’ location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure.”

The fines break down as $12 million for Sprint, $80 million for T-Mobile (which has since merged with Sprint), more than $57 million for AT&T, and almost $47 million for Verizon.

The press release makes clear that the FCC considers real-time location data some of the most sensitive information in a carrier’s possession. Each of the four major carriers was found to be selling its customers’ location information to “aggregators,” who then resold access to that information to third-party location-based service providers.

The FCC’s investigation was set in motion by public reports, like those in the New York Times and on Vice.com, and by a letter from Sen. Ron Wyden to the FCC. All pointed out that anyone could get location information about almost any US phone if they were willing to pay an unauthorized source.

The FCC press release specifically mentions a location-finding service operated by Securus, a provider of communications services to correctional facilities, as a source that provided the possibility to track people’s location.

US law, including Section 222 of the Communications Act, requires carriers to take reasonable measures to protect certain customer information, including location information.

The wireless carriers attempted to offload their obligation to obtain customer consent onto the downstream recipients of the location information. The end result was a failure in which no valid customer consent was obtained. And even though the carriers were aware of this, they continued to sell access to location information without taking reasonable measures to protect it from unauthorized access.

As reported by Krebs on Security, one of the data aggregation firms, LocationSmart, had a free, unsecured demo of its service online that anyone could abuse to find the near-exact location of virtually any mobile phone in North America.

Spokespersons of Verizon and AT&T both indicated to BleepingComputer that they felt as if they were taking the blame for another company’s failure to obtain consent.

T-Mobile said in a statement to CNN that it discontinued the location data-sharing program over five years ago. The company wanted to make sure first that critical services like roadside assistance, fraud protection, and emergency response would not suffer any negative consequences if it did.

All three companies indicated they will appeal the order. We’ll keep you posted on any new developments.



‘Smartphones on Wheels’ Draw Attention From Regulators

30 April 2024 at 10:03

Modern cars are internet-connected and have hundreds of sensors. Lawmakers and regulators have concerns about what’s happening with all that data.


Government attention to the car industry is intensifying, experts say, because of the increased technological sophistication of modern cars.

Kaiser Permanente Data Breach Impacts 13.4 Million Patients

29 April 2024 at 10:43

US healthcare giant Kaiser Permanente is warning millions of current and former patients that their personal information was exposed to third-party advertisers.


Kaiser health insurance leaked patient data to advertisers

29 April 2024 at 06:44

Health insurance giant Kaiser has announced it will notify millions of patients about a data breach after sharing patients’ data with advertisers.

Kaiser said that an investigation led to the discovery that “certain online technologies, previously installed on its websites and mobile applications, may have transmitted personal information to third-party vendors.”

In its required notice to the US government, Kaiser lists 13.4 million affected individuals. The third-party ad vendors involved include Google, Microsoft, and X. Kaiser said it subsequently removed the tracking code from its websites and mobile apps.

A tracking pixel is a piece of code that website owners can place on their website. The pixel collects data that helps businesses track people and target adverts at them. That’s nice for the advertisers, but the information gathered by these pixels tells them a lot about your browsing behavior, and a lot about you.

This kind of data leak normally happens when a website includes sensitive information in its URLs (web addresses). The URLs you visit are shared with the company that provides the tracking pixel, so if the URL contains sensitive information it will end up in the hands of the tracking company. The good news is that while it’s easy for websites to leak information like this, there is no suggestion that tracking pixel operators are aware of it, or acting on it, and it would probably be hugely impractical for them to do so.
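As a rough sketch of the mechanics (the domains are hypothetical), a pixel is often just a one-by-one image whose request delivers the current page address to the tracker:

```ts
// A 1x1 tracking pixel, roughly as a site might embed it. Loading the image
// sends the page URL to the tracker's server in the query string.
const pixel = new Image(1, 1);
pixel.src =
  "https://tracker.example/collect?" +
  new URLSearchParams({
    url: location.href,     // full page URL, including any query parameters
    ref: document.referrer, // where the visitor came from
  }).toString();
document.body.appendChild(pixel);

// If the page URL is, say, https://health.example/search?q=depression,
// that sensitive query string travels to tracker.example verbatim.
```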

The leaked data includes member names and IP addresses, as well as information that could indicate if members were signed into a Kaiser Permanente account or service, how they interacted with it, how they navigated through the website and mobile applications, and what search terms they used in the health encyclopedia.

A spokesperson said that Kaiser intends to begin notifying the affected current and former members and patients who accessed its websites and mobile apps in May.

Not so long ago, we reported how mental health company Cerebral failed to protect sensitive personal data and ended up having to pay $7 million. That case also involved tracking pixels, so this is a recurring problem we are likely to see much more of. Research by The Markup in June 2022 found Meta’s pixel on the websites of 33 of the top 100 hospitals in America.


Corporate greed from Apple and Google has destroyed the passkey future

26 April 2024 at 05:56

William Brown, developer of webauthn-rs, has written a scathing blog post detailing how corporate interests – namely, Apple and Google – have completely and utterly destroyed the concept of passkeys. The basic gist is that Apple and Google were more interested in control and locking in users than in providing a user-friendly passwordless future, and in doing so have made passkeys effectively a worse user experience than just using passwords in a password manager.

Since then Passkeys are now seen as a way to capture users and audiences into a platform. What better way to encourage long term entrapment of users then by locking all their credentials into your platform, and even better, credentials that can’t be extracted or exported in any capacity.

Both Chrome and Safari will try to force you into using either hybrid (caBLE) where you scan a QR code with your phone to authenticate – you have to click through menus to use a security key. caBLE is not even a good experience, taking more than 60 seconds work in most cases. The UI is beyond obnoxious at this point. Sometimes I think the password game has a better ux.

The more egregious offender is Android, which won’t even activate your security key if the website sends the set of options that are needed for Passkeys. This means the IDP gets to choose what device you enroll without your input. And of course, all the developer examples only show you the options to activate “Google Passkeys stored in Google Password Manager”. After all, why would you want to use anything else?

↫ William Brown

The whole post is a sobering read of how a dream of passwordless, and even usernameless, authentication was right within our grasp, usable by everyone, until Apple and Google got involved and enshittified the standards and tools to promote lock-in and their own interests above the user experience. If even someone as knowledgeable about this subject as Brown, who writes actual software to make these things work, is advising against using passkeys, you know something’s gone horribly wrong.
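To make the lock-in mechanics concrete, here is a hedged browser-side sketch of the WebAuthn registration options at issue (field values are illustrative, not taken from Brown’s post): requesting a discoverable credential, i.e. a passkey, is what steers some platforms toward their own credential store, while the cross-platform attachment hint is what keeps an external security key in play.

```ts
// WebAuthn registration options, as a site would pass them to the browser.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { id: "example.com", name: "Example" },
    user: {
      id: new TextEncoder().encode("user-123"),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      residentKey: "required",                   // "give me a passkey"
      authenticatorAttachment: "cross-platform", // "let me use a security key"
      userVerification: "preferred",
    },
  },
});
// Brown's complaint: send passkey-style options without the cross-platform
// hint, and some platforms simply never offer the external security key.
```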

I also looked into possibly using passkeys, including using things like a Yubikey, but the process seems so complex and unpleasant that I, too, concluded just sticking to Bitwarden and my favourite open source TFA application was a far superior user experience.

Ring agrees to pay $5.6 million after cameras were used to spy on customers

25 April 2024 at 10:05

Amazon’s Ring has settled with the Federal Trade Commission (FTC) over charges that the company allowed employees and contractors to access customers’ private videos, and failed to implement security protections which enabled hackers to take control of customers’ accounts, cameras, and videos.

The FTC is now sending refunds totaling more than $5.6 million to US consumers as a result of the settlement.

Ring LLC, which was purchased by Amazon in February 2018, sells internet-connected home security cameras and video doorbells.

However, in a shocking lapse of security protection, it turned out that every single person working for Amazon Ring, whether they were an employee or a contractor, was able to access every single customer video, even when it wasn’t necessary for their jobs.

But that wasn’t the only issue. In May 2023, the FTC stated that:

“Ring deceived its customers by failing to restrict employees’ and contractors’ access to its customers’ videos, using its customer videos to train algorithms without consent, and failing to implement security safeguards. These practices led to egregious violations of users’ privacy.”

The FTC gave the example of one employee who, over several months, viewed thousands of video recordings belonging to female users of Ring cameras that were pointed at intimate spaces in their homes such as their bathrooms or bedrooms. This didn’t stop until another employee discovered the misconduct.

The FTC is now sending 117,044 PayPal payments to US customers who had certain types of Ring devices, such as indoor cameras, during periods when the FTC alleges unauthorized users may have had access to customer videos. Customers should redeem their PayPal payment within 30 days.

“The FTC identified eligible Ring customers based on data provided by the company,” the agency told BleepingComputer, clarifying that Ring users “were eligible for a payment if their account was vulnerable because of privacy and security problems alleged in the complaint.”

Consumers who have questions about their payment should contact the refund administrator, Rust Consulting, Inc., at 1-833-637-4884, or visit the FTC website to view frequently asked questions about the refund process.

Beware of scammers

As always, you can expect scammers to take advantage of this news. So, it’s important to know that the FTC never asks people to pay money or provide account information to get a refund.

A payment or claim form sent as part of an FTC settlement will include an explanation of, and details about, the case. The case will be listed at ftc.gov/refunds, along with the name of the company issuing payments and a phone number for questions.

The FTC only works with four private companies to handle the refund process:

  • Analytics Consulting, LLC
  • Epiq Systems
  • JND Legal Administration
  • Rust Consulting, Inc.

Before sending any PayPal payment, the FTC will send an email from the subscribe@subscribe.ftc.gov address to each payment recipient. Once payments have been issued, PayPal will send an email telling recipients about their refund.



Dan Solove on Privacy Regulation

24 April 2024 at 07:05

Law professor Dan Solove has a new article on privacy regulation. In his email to me, he writes: “I’ve been pondering privacy consent for more than a decade, and I think I finally made a breakthrough with this article.” His mini-abstract:

In this Article I argue that most of the time, privacy consent is fictitious. Instead of futile efforts to try to turn privacy consent from fiction to fact, the better approach is to lean into the fictions. The law can’t stop privacy consent from being a fairy tale, but the law can ensure that the story ends well. I argue that privacy consent should confer less legitimacy and power and that it be backstopped by a set of duties on organizations that process personal data based on consent.

Full abstract:

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic”—it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems—people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary—an on/off switch—but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.
