Yesterday — 17 May 2024

Slack users horrified to discover messages used for AI training

17 May 2024 at 14:10

After launching Slack AI in February, Slack appears to be digging in its heels, defending its vague policy that, by default, sucks up customers' data—including messages, content, and files—to train Slack's global AI models.

According to Slack engineer Aaron Maurer, Slack has explained in a blog that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack's policy may need updating "to explain more carefully how these privacy principles play with Slack AI," Maurer wrote on Threads, partly because the policy "was originally written about the search/recommendation work we've been doing for years prior to Slack AI."

Maurer was responding to a Threads post from engineer and writer Gergely Orosz, who called for companies to opt out of data sharing until the policy is clarified, not by a blog, but in the actual policy language.


Black Basta Ransomware Struck More Than 500 Organizations Worldwide – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Cedric Pernet A joint cybersecurity advisory from the Federal Bureau of Investigation, Cybersecurity and Infrastructure Security Agency, Department of Health and Human Services and Multi-State Information Sharing and Analysis Center was recently released to provide more information about the Black Basta ransomware. Black Basta affiliates have targeted organizations in the U.S., […]

The post Black Basta Ransomware Struck More Than 500 Organizations Worldwide – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

SEC Updates 24-Year-Old Rule to Scale Customers’ Financial Data Protection


The SEC is tightening its focus on the financial data breach response mechanisms of a very specific set of financial institutions, with an update to a 24-year-old rule. The amendments announced on Thursday mandate that broker-dealers, funding portals, investment companies, registered investment advisers and transfer agents develop comprehensive plans for detecting and addressing data breaches involving customers' financial information.

Under the new rules, covered institutions are required to formulate, implement, and uphold written policies and procedures specifically tailored to identifying and mitigating breaches affecting customer data. Additionally, firms must establish protocols for promptly notifying affected customers in the event of a breach, ensuring transparency and facilitating swift remedial actions.

"Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially," said SEC Chair Gary Gensler. "These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers' financial data. The basic idea for covered firms is if you've got a breach, then you've got to notify. That's good for investors."

According to the amendments, organizations subject to the regulations must notify affected individuals expeditiously, no later than 30 days following the discovery of a data breach. The notification must include comprehensive details regarding the incident, the compromised data and actionable steps for affected parties to safeguard their information.

While the amendments are set to take effect two months after publication in the Federal Register, larger entities will have an 18-month grace period to achieve compliance, whereas smaller organizations will be granted a two-year window. However, the SEC has not provided explicit criteria for distinguishing between large and small entities, leaving room for further clarification.

The Debate on SEC's Tight Guidelines

The introduction of these amendments coincides with the implementation of new incident reporting regulations for public companies, compelling timely disclosure of "material" cybersecurity incidents to the SEC. Public companies in the U.S. now have four days to disclose cybersecurity breaches that could impact their financial standing. The SEC's interest in the matter stems from a major concern: breach information can drive a stock market activity known as informed trading, currently a grey area in the eyes of the law. Several prominent companies, including Hewlett Packard and Frontier, have already submitted the requisite filings under these regulations, highlighting the increasing scrutiny on cybersecurity disclosures.

The SEC's incident reporting rule has, however, received pushback, including from Congressman Andrew Garbarino, Chairman of the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee and a Member of the House Financial Services Committee. Garbarino in November introduced a joint resolution with Senator Thom Tillis to disapprove the SEC's new rules.

"This cybersecurity disclosure rule is a complete overreach on the part of the SEC and one that is in direct conflict with congressional intent. CISA, as the lead civilian cybersecurity agency, has been tasked with developing and issuing regulations for cyber incident reporting as it relates to covered entities. Despite this, the SEC took it upon itself to create duplicative requirements that not only further burden an understaffed cybersecurity workforce with additional and unnecessary reporting requirements, but also increase cybersecurity risk without a congressional mandate and in direct contradiction to public law that is intended to secure the homeland," Garbarino said at the time.

Senator Tillis added that the SEC was doing its "best to hurt market participants by overregulating firms into oblivion." Businesses and industry leaders across the spectrum have expressed intense opposition to the new rules, but the White House has signaled its commitment to upholding the regulatory framework.
Before yesterday

Mental Health Apps are Likely Collecting and Sharing Your Data

15 May 2024 at 12:00

May is mental health awareness month! In pursuing help or advice for mental health struggles (beyond just this month, of course), users may download and use mental health apps. Mental health apps are convenient and may be cost effective for many people.

However, while these apps may provide mental health resources and benefits, they may be harvesting considerable amounts of information and sharing health-related data with third parties for advertising and tracking purposes.

Disclaimer: This post is not meant to serve as legal or medical advice. This post is for informational purposes only. If you are experiencing an emergency, please contact emergency services in your jurisdiction.

Understanding HIPAA

Many people have misconceptions about the Health Insurance Portability and Accountability Act (HIPAA) and disclosure/privacy.



According to the US Department of Health and Human Services (HHS), HIPAA is a "federal law that required the creation of national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge." There is a HIPAA Privacy Rule and a HIPAA Security Rule.

The Centers for Disease Control and Prevention (CDC) states the Privacy Rule standards "address the use and disclosure of individuals' health information by entities subject to the Privacy Rule." It's important to understand that the Privacy Rule only protects information held by entities that are subject to it.

Entities include healthcare providers, health plans, healthcare clearinghouses, and business associates (such as billing specialists or data analysts). Many mental health apps aren't classified as any of these; and even among the few that are subject to HIPAA, some have been documented as not actually being compliant with HIPAA rules.



What does this mean? Many mental health apps are not considered covered entities and are therefore "exempt" (for lack of a better word) from HIPAA. As such, these apps appear to operate in a legal "gray area," but that doesn't mean their data practices are ethical or even follow basic information security principles for safeguarding data...

Even apps that collect PHI (protected health information) protected by HIPAA may still share or use information about you that doesn't fall under HIPAA protections.

Mental health apps collect a wealth of personal information

Naturally, data collected by apps falling under the "mental health" umbrella varies widely (as do the apps that fall under this umbrella.)

However, most have users create accounts and fill out some version of an "intake" questionnaire prior to using/enrolling in services. These questionnaires vary by service, but may collect information such as:

  • name
  • address
  • email
  • phone number
  • employer information

Account creation generally requires, at minimum, an email address and a password, which is routine.



It's important to note your email address can serve as a persistent, unique identifier - especially if you use the same email address everywhere else in your digital life. Reusing one address makes it easier to track and connect your accounts and activities across the web.
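To illustrate how this linkage works (a minimal sketch, not any particular app's code; the function name and addresses below are hypothetical), advertising and analytics platforms commonly match users across services by hashing a normalized email address, so the same address always produces the same token:

```python
import hashlib

def email_identifier(email: str) -> str:
    """Normalize an email address and hash it into a stable token.

    Any two services holding the same token can link their records for that
    user, even without exchanging the raw address itself.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address yields the same token regardless of capitalization or stray spaces.
print(email_identifier("Jane.Doe@example.com"))
print(email_identifier("  jane.doe@example.com "))
```

Using a distinct address or alias per service breaks this kind of matching, because each service then holds a different token.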

Account creation may also request alternative contact information, such as a phone number, or supplemental personal information such as your legal name. These can and often do serve as additional data points and identifiers.

It's also important to note that on the backend (usually in a database), your account may be assigned identifiers as well. In some cases, your account may also be assigned external identifiers - especially if information is shared with third parties.

Intake questionnaires can collect particularly sensitive information, such as (but not necessarily limited to):

  • past mental health experiences
  • age (potentially exact date of birth)
  • gender identity information
  • sexual orientation information
  • other demographic information
  • health insurance information (if relevant)
  • relationship status


[Image: BetterHelp intake questionnaire asking whether the user currently takes medication]

Question from BetterHelp intake questionnaire found in FTC complaint against BetterHelp

These points of sensitive information are rather intimate and can easily be used to identify users - and their disclosure in a data breach or to third-party platforms could be disastrous.

These unique and rather intimate data points can be used to exploit users in highly targeted marketing and advertising campaigns - or even to facilitate scams and malware through the advertising tools that third parties receiving such information offer to advertisers.

Note: If providing health insurance information, many services require an image of the card. Images can contain EXIF data that could expose a user's location and device information if not scrubbed prior to upload.
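As a rough illustration of scrubbing that metadata before upload (a minimal sketch using the Pillow imaging library; the filenames are placeholders), re-saving only the pixel data drops the EXIF tags:

```python
from PIL import Image  # pip install Pillow

# Copy only the pixel data into a fresh image: the new file carries none of the
# original EXIF tags (GPS coordinates, device model, timestamps).
original = Image.open("insurance_card.jpg")
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("insurance_card_clean.jpg")
```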

Information collection extends past user disclosure



More often than not, the information collected by mental health apps extends beyond what a user discloses during account creation or intake forms - these apps often harvest device information, frequently sending it off the device to their own servers.

For example, here is a screenshot of the BetterHelp app's listing on the Apple App Store in May 2024:


[Image: BetterHelp "App Privacy" listing in the Apple App Store]

The screenshot indicates BetterHelp uses your location and app usage data to "track you across apps and websites owned by other companies." We can infer from this statement that BetterHelp shares your location information and how you use the app with third parties, likely for targeted advertising and tracking purposes.

The screenshot also indicates your contact information, location information, usage data, and other identifiers are linked to your identity.

Note: Apple Privacy Labels in the App Store are self-reported by the developers of the app.

This is all reinforced in their updated privacy policy (25 APR 2024), where BetterHelp indicates they use external and internal identifiers, collect app and platform errors, and collect usage data of the app and platform:


[Image: BetterHelp privacy policy excerpt]

In February 2020, an investigation revealed BetterHelp also harvested the metadata of messages exchanged between clients and therapists, sharing it with platforms like Facebook for advertising purposes. This was despite BetterHelp "encrypting communications between client and therapist" - the actual message contents may have been encrypted, but information such as when a message was sent, who the recipient was, and location information appears to have been available to the servers... and actively used and shared.

While this may not seem like a big deal at first glance - primarily because BetterHelp is not directly accessing/reading message contents - users should be aware that message metadata can give away a lot of information.
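To make that concrete, here is a purely hypothetical metadata record (not taken from any real system); even with the message text absent, it reveals who is talking to whom, how often, when, and roughly from where:

```python
# Hypothetical message-metadata record: no message contents, yet plenty to infer from.
message_event = {
    "sender_id": "client-4821",         # pseudonymous, but stable over time
    "recipient_id": "therapist-117",    # reveals an ongoing therapeutic relationship
    "sent_at": "2024-05-14T02:37:00Z",  # late-night timestamps can hint at crisis periods
    "approx_location": "Portland, OR",  # typically derived from the sender's IP address
    "message_length": 1204,             # long messages vs. brief check-ins
}
```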

Cerebral, a mental health app that does fall under the HIPAA rules, also collects device-related data and location data, associating them with your identity:

[Image: Cerebral "App Privacy" listing in the Apple App Store]

According to this screenshot, Cerebral shares/uses app usage data with third parties, likely for marketing and advertising purposes. Specifically, they...

The post Mental Health Apps are Likely Collecting and Sharing Your Data appeared first on Security Boulevard.

Banco Santander Confirms Data Breach, Assures Customers’ Transactions Remain Secure

By: Alan J
15 May 2024 at 06:30


Santander, one of the largest banks in the eurozone, confirmed that an unauthorized party had gained access to a database containing customer and employee information. The Banco Santander data breach is stated to stem from the database of a third-party provider and to be limited to some of the bank's customers in specific regions where it operates, as well as some of its current and former employees. However, the bank's own operations and systems are reportedly unaffected.

Banco Santander is a banking services provider founded on March 21, 1857 and headquartered in Madrid, Spain. It operates across Europe, North America, and South America, and its services include global payments, online banking and digital assets.

Customer and Employee Data Compromised in Santander Data Breach

The bank reported that upon becoming aware of the data breach, it immediately implemented measures to contain the incident, such as blocking access to its database from the compromised source and establishing additional fraud prevention mechanisms to protect impacted customers and affected parties. After conducting an investigation, the bank determined that the leaked information stemmed from a third-party database and consisted of details of customers from Santander Chile, Spain and Uruguay, along with some data on current and former Santander employees. Despite the third-party database breach, customer data from Santander markets and businesses operating in other regions was not affected.

[Image: Santander data breach statement. Source: santander.com]

The bank apologized for the incident and acknowledged concerns arising from the data breach, taking action to directly notify the affected customers and employees. The security team also informed regulators and law enforcement of the incident details, stating that the bank would continue to work with them during the investigation.

Santander assured its customers that no transactional data, nor transaction-facilitating credentials such as banking details and passwords, were contained in the database. The statement reported that neither the bank's operations nor its systems were affected, and that customers could continue to transact securely.

Along with the official statement in response to the data breach, the bank provided additional advice on its site on dealing with the breach:
  • Santander will never ask you for codes, OTPs or passwords.
  • Always verify information you receive and contact us through official bank channels.
  • If you receive any suspicious message, email or SMS, report it to your bank directly or by contacting reportphishing@gruposantander.com.
  • Never access your online banking via links from suspicious emails or unsolicited emails.
  • Never ignore security notifications or alerts from Santander related to your accounts.

Financial and Banking Sector Hit By Data Breaches

Increased cyber threats and third-party database exposures, as in the Santander data breach, pose serious concerns for stability within the financial and banking sector. The International Monetary Fund noted in a blog post last month that these incidents could erode confidence in the financial system, disrupt critical services, or cause spillovers to other institutions.

In March, the European Central Bank instructed banks within the European region to implement stronger measures in anticipation of cyber attacks. Earlier, the body had stated that it would conduct a cyber resilience stress test on at least 109 of its directly supervised banks in 2024. The initiatives come as part of broader concern about the security of European banks. Last year, data from Deutsche Bank AG, Commerzbank AG and ING Groep NV was compromised after the CL0P ransomware group exploited a security vulnerability in the MOVEit file transfer tool.

The European Central Bank's site states that its banking supervisors rely on the stress tests to gather information on, and assess, how well banks would be able to cope with, respond to and recover from a cyberattack, rather than just their ability to prevent attacks. The response and recovery assessments are described as including the activation of emergency procedures and contingency plans as well as the restoration of usual operations. The test results would then be used to aid supervisors in identifying weaknesses to be discussed in dialogue with the banks.

Why car location tracking needs an overhaul

13 May 2024 at 06:48

Across America, survivors of domestic abuse and stalking are facing a unique location tracking crisis born out of policy failure, unclear corporate responsibility, and potentially risky behaviors around digital sharing that are now common in relationships.

No, we’re not talking about stalkerware. Or hidden Apple AirTags. We’re talking about cars.

Modern cars are the latest consumer “device” to undergo an internet-crazed overhaul, as manufacturers increasingly stuff their automobiles with the types of features you’d expect from a smartphone, not a mode of transportation.

There are cars with WiFi, cars with wireless charging, cars with cameras that not only help while you reverse out of a driveway, but which can detect whether you’re drowsy while on a long haul. Many cars now also come with connected apps that allow you to, through your smartphone, remotely start your vehicle, schedule maintenance, and check your tire pressure.

But one feature in particular, which has legitimate uses in responding to stolen and lost vehicles, is being abused: Location tracking.

It’s time for car companies to do something about it.

In December, The New York Times revealed the story of a married woman whose husband was abusing the location tracking capabilities of her Mercedes-Benz sedan to harass her. The woman tried every avenue she could to distance herself from her husband. After her husband became physically violent in an argument, she filed a domestic abuse report. Once she fled their home, she got a restraining order. She ignored his calls and texts.

But still her husband could follow her whereabouts by tracking her car—a level of access that Mercedes representatives reportedly could not turn off, as he was considered the rightful owner of the vehicle (according to The New York Times, the husband’s higher credit score convinced the married couple to have the car purchased in his name alone).

As reporter Kashmir Hill wrote of the impasse:

“Even though she was making the payments, had a restraining order against her husband and had been granted sole use of the car during divorce proceedings, Mercedes representatives told her that her husband was the customer so he would be able to keep his access. There was no button she could press to take away the app’s connection to the vehicle.”

This was far from an isolated incident.

In 2023, Reuters reported that a San Francisco woman sued her husband in 2020 for allegations of “assault and sexual battery.” But some months later, the woman’s allegations of domestic abuse grew into allegations of negligence—this time, against the carmaker Tesla.

Tesla, the woman claimed in legal filings, failed to turn off her husband’s access to the location tracking capabilities in their shared Model X SUV, despite the fact that she had obtained a restraining order against her husband, and that she was a named co-owner of the vehicle.

When The New York Times retrieved filings from the San Francisco lawsuit above, attorneys for Tesla argued that the automaker could not realistically play a role in this matter:

“Virtually every major automobile manufacturer offers a mobile app with similar functions for their customers,” the lawyers wrote. “It is illogical and impractical to expect Tesla to monitor every vehicle owner’s mobile app for misuse.”

Tesla was eventually removed from the lawsuit.

In the Reuters story, reporters also spoke with a separate woman who made similar allegations that her ex-husband had tracked her location by using the Tesla app associated with her vehicle. Because that woman was a “primary” account owner, she was able to remove the car’s access to the internet, Reuters reported.

A better path

Location tracking—and the abuse that can come with it—is a much-discussed topic for Malwarebytes Labs. But the type of location tracking abuse that is happening with shared cars is different because of the value that cars hold in situations of domestic abuse.

A car is an opportunity to physically leave an abusive partner. A car is a chance to start anew in a different, undisclosed location. In harrowing moments, cars have also served as temporary shelter for those without housing.

So when a survivor’s car is tracked by their abuser, it isn’t just a matter of their location and privacy being invaded, it is a matter of a refuge being robbed.

In speaking with the news outlet CalMatters, Yenni Rivera, who works on domestic violence cases, explained the stressful circumstances of exactly this dynamic.

“I hear the story over and over from survivors about being located by their vehicle and having it taken,” Rivera told CalMatters. “It just puts you in a worst case situation because it really triggers you thinking, ‘Should I go back and give in?’ and many do. And that’s why many end up being murdered in their own home. The law should make it easier to leave safely and protected.”

Though the state of California is considering legislative solutions to this problem, national lawmaking is slow.

Instead, we believe that the companies that have the power to do something should act on that power. Much like how Malwarebytes and other cybersecurity vendors banded together to launch the Coalition Against Stalkerware, automakers should work together to help users.

Fortunately, an option may already exist.

When the Alliance for Automobile Innovation warned that consumer data collection requests could be weaponized by abusers who want to comb through the car location data of their partners and exes, the automaker General Motors already had a protection built in.

According to Reuters, the roadside assistance service OnStar, which is owned by General Motors, allows any car driver—be they a vehicle’s owner or not—to hide location data from other people who use the same vehicle. Rivian, a new electric carmaker, is reportedly working on a similar feature, said senior vice president of software development Wassym Bensaid in speaking with Reuters.

Though Reuters reported that Rivian had not heard of its technology being leveraged in a situation of domestic abuse, Bensaid believed that “users should have a right to control where that information goes.”

We agree.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

How Can Businesses Defend Themselves Against Common Cyberthreats? – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Fiona Jackson TechRepublic consolidated expert advice on how businesses can defend themselves against the most common cyberthreats, including zero-days, ransomware and deepfakes. Today, all businesses are at risk of cyberattack, and that risk is constantly growing. Digital transformations are resulting in more sensitive and valuable data being moved onto online systems […]

The post How Can Businesses Defend Themselves Against Common Cyberthreats? – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Elon Musk’s X can’t invent its own copyright law, judge says

10 May 2024 at 17:20

US District Judge William Alsup has dismissed Elon Musk's X Corp lawsuit against Bright Data, a data-scraping company accused of improperly accessing X (formerly Twitter) systems and violating both X terms and state laws when scraping and selling data.

X sued Bright Data to stop the company from scraping and selling X data to academic institutes and businesses, including Fortune 500 companies.

According to Alsup, X failed to state a claim while arguing that companies like Bright Data should have to pay X to access public data posted by X users.


Big Three carriers pay $10M to settle claims of false “unlimited” advertising

10 May 2024 at 14:36

T-Mobile, Verizon, and AT&T will pay a combined $10.2 million in a settlement with US states that alleged the carriers falsely advertised wireless plans as "unlimited" and phones as "free." The deal was announced yesterday by New York Attorney General Letitia James.

"A multistate investigation found that the companies made false claims in advertisements in New York and across the nation, including misrepresentations about 'unlimited' data plans that were in fact limited and had reduced quality and speed after a certain limit was reached by the user," the announcement said.

T-Mobile and Verizon agreed to pay $4.1 million each while AT&T agreed to pay a little over $2 million. The settlement includes AT&T subsidiary Cricket Wireless and Verizon subsidiary TracFone.


Hacker Duo Allegedly Strikes HSBC, Barclays in Cyberattacks


Hackers IntelBroker and Sanggiero have claimed a data breach allegedly impacting HSBC Bank and Barclays Bank. The HSBC Bank data breach, along with the breach at Barclays, reportedly occurred in April 2024 and involved a security incident at a third-party contractor, ultimately leading to the leak of sensitive data. The compromised data, which was being offered for sale on Breachforums, allegedly includes a wide array of files such as database files, certificate files, source code, SQL files, JSON configuration files, and compiled JAR files. Preliminary analysis suggests that the data may have been sourced from the services provided by Baton Systems Inc., a post-trade processing platform, potentially impacting both HSBC Bank and Barclays Bank. However, Baton Systems has not shared any update on this alleged attack or any connection with the sample data provided by the threat actor.

Hacker Duo Claims Barclays and HSBC Bank Data Breach

Barclays Bank PLC and The Hong Kong and Shanghai Banking Corporation Limited (HSBC) are the primary organizations reportedly affected by this breach. With operations spanning the United Kingdom, the United States, and regions including Europe and North America, the banks' systems are a prominent target and customers' data the likely objective; however, there has been no evidence so far of such data being leaked.

[Image: Breachforums listing for the Barclays and HSBC Bank data breach. Source: Dark Web]

In a post on Breachforums, one of the threat actors, IntelBroker, shared details of the Barclays and HSBC Bank data breach, offering the compromised data for download. The post, dated May 8, 2024, outlined the nature of the breach and the types of data compromised, including database files, certificate files, source code, and more. The post also provided a sample of the leaked data, revealing a mixture of CSV data representing financial transactions across different systems or entities.
Talking about the stolen data, IntelBroker wrote that he was "uploading the HSBC & Barclays data breach for you to download. Thanks for reading and enjoy! In April 2024, HSBC & Barclays suffered a data breach when a direct contractor of the two banks was breached. Breached by @IntelBroker & @Sanggiero".

A Closer Look at the Sample Data 

A closer look at the sample data reveals three distinct datasets, each containing transaction records with detailed information about financial activities. These records encompass a range of information, from transaction IDs and timestamps to descriptions and the account numbers involved. The datasets provide a comprehensive view of various transactions, offering valuable insights for financial analysis and tracking.

The Cyber Express has reached out to both banks to learn more about these alleged data breaches. HSBC Bank has denied the allegations, stating, "We are aware of these reports and confirm HSBC has not experienced a cybersecurity incident and no HSBC data has been compromised." However, at the time of writing, no official statement or response has been shared by Barclays, leaving the claims relating to Barclays unverified.

Moreover, the two hackers in question, IntelBroker and Sanggiero, have claimed similar attacks in the past, targeting various global organizations. In an exclusive interview with The Cyber Express, one of the hackers, IntelBroker, shed light on their hacking activities and the motivations behind their operations. IntelBroker also praised Sanggiero of BreachForums, saying "his exceptional intellect and understated contributions to the field are deserving of far greater recognition and respect."

That inequality lies at the heart of what we call "data colonialism"

By: kmt
7 May 2024 at 04:26
"The term might be unsettling, but we believe it is appropriate. Pick up any business textbook and you will never see the history of the past thirty years described this way. A title like Thomas Davenport's Big Data at Work spends more than two hundred pages celebrating the continuous extraction of data from every aspect of the contemporary workplace, without once mentioning the implications for those workers. EdTech platforms and the tech giants like Microsoft that service them talk endlessly about the personalisation of the educational experience, without ever noting the huge informational power that accrues to them in the process." (Today's colonial "data grab" is deepening global inequalities, LSE)

The book: Data Grab: The New Colonialism of Big Tech and How to Fight Back (Penguin)
Interview with the authors: Q and A with Nick Couldry and Ulises A. Mejias on Data Grab

User Behavior Analytics: Why False Positives are NOT the Problem

7 May 2024 at 00:00

The axiom “garbage in, garbage out” has been around since the early days of computer science and remains apropos today to the data associated with user behavior analytics and insider risk management (IRM). During a recent Conversations from the Inside (CFTI) episode, Mohan Koo, DTEX President and Co-Founder, spoke about how organizations are often quick … Continued

The post User Behavior Analytics: Why False Positives are NOT the Problem appeared first on DTEX Systems Inc.

The post User Behavior Analytics: Why False Positives are NOT the Problem appeared first on Security Boulevard.

Massive Data Breach Affects Victims of Family Violence and Sexual Assault in Victoria


A cyberattack targeting a Victorian company has resulted in the exposure of personal data belonging to thousands of victims of family violence and sexual assault, as well as about 60,000 current and former students at Melbourne Polytechnic.

Monash Health Data Breach

Monash Health, Victoria's largest health service, confirmed it was caught in the crosshairs of a data breach that also affected government entities that were clients of the company ZircoDATA. The breach compromised sensitive information collected by family violence and sexual assault support units between 1970 and 1993. Attributed to an unauthorized third party gaining access to the systems of document-scanning business ZircoDATA, it impacted approximately 4,000 individuals who had sought support from these vital services.

The disclosure of details about the sexual violence and assault support units has been deeply distressing for affected victim-survivors. The breach, which involved personal data collected over decades, has raised concerns about the safety and privacy of those who relied on these support services during times of vulnerability.

Amid the fallout from the breach, efforts have been underway to mitigate the risks and support those affected. Monash Health, in collaboration with relevant authorities, has been verifying the identities and addresses of the impacted individuals before initiating contact, ensuring that victims are not inadvertently exposed to further harm.
“The majority of these entities are still in the process of working with ZircoDATA to identify impacted data and any victims, and are yet to begin notifying impacted individuals,” newly appointed coordinator Lieutenant-General Michelle McGuinness said in a statement on X.
In addition to Monash Health, other government entities that were clients of ZircoDATA have also been affected by the breach but “the impact for most government entities is likely to be minimal,” the National Cyber Security Coordinator said. The breach has prompted federal authorities, including the Australian Federal Police, to launch investigations and coordinate responses to address the scope of the incident and safeguard affected individuals.

ZircoDATA Breach Also Impacts Melbourne Polytechnic

Meanwhile, Melbourne Polytechnic, a prominent educational institution, announced that enrollment information for 60,000 past and present students, stored by ZircoDATA, had been accessed in the breach. Although the breach primarily involved "low-risk identity attributes," the institution has taken proactive steps to offer affected individuals access to cyber support and identity services.

The cybersecurity landscape continues to evolve rapidly, with healthcare emerging as one of the sectors most vulnerable to cyberattacks. A recent report by cybersecurity firm Sophos revealed that healthcare was one of only five sectors to report an increase in cyberattacks over the last year, highlighting the urgent need for heightened vigilance and resilience in safeguarding sensitive data and critical infrastructure.

As organizations grapple with the aftermath of data breaches, there is a pressing need to strengthen cybersecurity measures and response protocols to effectively mitigate risks and protect individuals' privacy and security. Collaborative efforts between government agencies, healthcare providers, educational institutions, and cybersecurity experts are essential in addressing the complex challenges posed by cyber threats and ensuring the resilience of digital infrastructure.

In the wake of this cyberattack, authorities have emphasized the importance of transparency, accountability, and support for those affected. By prioritizing the safety and well-being of individuals impacted by data breaches, stakeholders can collectively work towards a more secure and resilient digital ecosystem that safeguards the privacy and security of all involved.

FTC Fines Cerebral $7 Million for Sharing Millions of Patients’ Data

17 April 2024 at 04:38


The Federal Trade Commission (FTC) has proposed a $7 million fine against Cerebral Inc. for what it sees as mishandling of consumer data. Cerebral allegedly not only mishandled the data but actively shared it with third parties for advertising purposes. The complaint alleges that Cerebral Inc. consumer data, consisting of sensitive information on nearly 3.2 million individuals, had been shared with various third parties, such as Google, Meta (Facebook) and TikTok, among other advertising giants. This sharing of consumer data reportedly occurred through Cerebral's platforms via tracking tools on its website and apps, such as tracking pixels.
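For readers unfamiliar with the mechanism (a hypothetical sketch, not Cerebral's actual implementation; the domain and parameters below are made up), a tracking pixel is simply a tiny image request whose URL carries visitor data to a third party:

```python
from urllib.parse import urlencode

# Hypothetical event data an intake or checkout page might attach to a pixel request.
event = {
    "page": "/intake/step-3",
    "client_id": "abc123",        # platform-assigned identifier for the visitor
    "answer_medication": "yes",   # a sensitive questionnaire answer
}
pixel_url = "https://ads.example.com/pixel.gif?" + urlencode(event)
print(pixel_url)
# Embedding this 1x1 image in the page makes the browser send the query string
# (plus the visitor's IP address and browser details) to the advertiser's servers.
```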
Cerebral, Inc. agreed to comply with a settlement with the FTC, which includes restrictions on the company's use or disclosure of sensitive consumer data. In the statement, the FTC reaffirmed its commitment to acting against poor handling of consumers' sensitive health data by some health companies.

FTC Cites Poor Handling and Malpractices Behind Cerebral Inc Consumer Data Collection

[Image: Cerebral Inc consumer data. Source: Shutterstock]
The data being mishandled reportedly included not only typical contact and payment information but also detailed medical histories, prescriptions, health insurance details, and even sensitive personal beliefs and orientations. The complaint cites various examples of Cerebral's poor practices, including a failure to restrict former employees from accessing confidential medical records, promotional postcards that disclosed patient health details, and reliance on insecure access methods for its patient portal, which allowed users to access the confidential health information of other patients.

Furthermore, the lawsuit accused Cerebral Inc. of violating the Restore Online Shoppers' Confidence Act (ROSCA) by making it difficult for consumers to cancel subscriptions. The complaint outlined a convoluted cancellation process that involved staff contacting consumers to dissuade them from canceling, keeping subscriptions active until staff "confirmed" cancellation demands, and even removing a simplified cancellation button after observing an increase in cancellations.

Mental Health Firm Issued Data Breach Notice Last Month

[Image: Cerebral data breach notice. Source: cerebral.com]

Cerebral Inc. disclosed in a breach notice published on its website that company data had been shared through invisible pixel trackers from Google, Meta (Facebook), TikTok, and other third parties on its online services since 2019, without adequate patient permission. The breach had been reported on the U.S. Department of Health and Human Services breach portal, which listed the personal details of 3,179,835 people as exposed.

The data breach was stated to include details such as full name, phone number, email address, date of birth, IP address, Cerebral client ID number, and demographic information. However, the firm stated that the shared information did not include Social Security numbers, credit card information, or bank account information.

The firm indicated that it had "enhanced" its information security practices and technology vetting processes to prevent such sharing in the future. It also claimed that it was among several others, across industries such as health systems, traditional brick-and-mortar providers, and other telehealth companies, that had resorted to the use of pixels and other common tracking technologies. Cerebral stated that it would provide free credit monitoring to help affected users.

The data breach incident, as well as the FTC's proposed fine, highlights the importance of safeguarding consumer data and ensuring transparent and accessible cancellation processes, particularly in sensitive industries such as mental health care.

Communities Should Reject Surveillance Products Whose Makers Won't Allow Them to be Independently Evaluated

6 March 2024 at 10:05
American communities are being confronted by a lot of new police technology these days, a lot of which involves surveillance or otherwise raises the question: "Are we as a community comfortable with our police deploying this new technology?" A critical question when addressing such concerns is: "Does it even work, and if so, how well?" It's hard for communities, their political leaders, and their police departments to know what to buy if they don't know what works and to what degree.

One thing I've learned from following new law enforcement technology for over 20 years is that there is an awful lot of snake oil out there. When a new capability arrives on the scene—whether it's face recognition, emotion recognition, video analytics, or "big data" pattern analysis—some companies will rush to promote the technology long before it is good enough for deployment, which sometimes never happens. That may be even more true today in the age of artificial intelligence. "AI" is a term that often amounts to no more than trendy marketing jargon.

[Related: Six Questions to Ask Before Accepting a Surveillance Technology — Community members, policymakers, and political leaders can make better decisions about new technology by asking these questions. Source: American Civil Liberties Union]

Given all this, communities and city councils should not adopt new technology that has not been subject to testing and evaluation by an independent, disinterested party. That's true for all types of technology, but doubly so for technologies that have the potential to change the balance of power between the government and the governed, like surveillance equipment. After all, there's no reason to get wrapped up in big debates about privacy, security, and government power if the tech doesn't even work.

One example of a company refusing to allow independent review of its product is the license plate recognition company Flock, which is pushing those surveillance devices into many American communities and tying them into a centralized national network. (We wrote more about this company in a 2022 white paper.) Flock has steadfastly refused to allow the independent security technology reporting and testing outlet IPVM to obtain one of its license plate readers for testing, though IPVM has tested all of Flock's major competitors. That doesn't stop Flock from boasting that "Flock Safety technology is best-in-class, consistently performing above other vendors." Claims like these are puzzling and laughable when the company doesn't appear to have enough confidence in its product to let IPVM test it.

[Related: Experts Say 'Emotion Recognition' Lacks Scientific Foundation. Source: American Civil Liberties Union]

Communities considering installing Flock cameras should take note. That is especially the case when errors by Flock and other companies' license plate readers can lead to innocent drivers finding themselves with their hands behind their heads, facing jittery police pointing guns at them. Such errors can also expose police departments and cities to lawsuits.

Even worse is when a company pretends that its product has been subject to independent review when it hasn't. The metal detector company Evolv, which sells — wait for it — AI metal detectors, submitted its technology to testing by a supposedly independent lab operated by the University of Southern Mississippi, and publicly touted the results of the tests. But IPVM and the BBC reported that the lab, the National Center for Spectator Sports Safety and Security (NCS4), had colluded with Evolv to manipulate the report and hide negative findings about the effectiveness of the company's product. Like Flock, Evolv refuses to allow IPVM to obtain one of its units for testing. (We wrote about Evolv and its product here.)

One of the reasons these companies can prevent a tough, independent reviewer such as IPVM from obtaining their equipment is their subscription and/or cloud-based architecture. "Most companies in the industry still operate on the more traditional model of having open systems," IPVM Government Research Director Conor Healy told me. "But there's a rise in demand for cloud-based surveillance, where people can store things in cloud, access them on their phone, see the cameras. Cloud-based surveillance by definition involves central control by the company that's providing the cloud services." Cloud-based architectures can worsen the privacy risks created by a surveillance system. Another consequence of their centralized control is that they increase a company's ability to control who can carry out an independent review.

We're living in an era where a lot of new technology is emerging, with many companies trying to be the first to put it on the market. As Healy told me, "We see a lot of claims of AI, all the time. At this point, almost every product I see out there that gets launched has some component of AI." But like other technologies before them, these products often come in highly immature, premature, inaccurate, or outright deceptive forms, relying on little more than the use of "AI" as a buzzword.

It's vital for independent reviewers to contribute to our ongoing local and national conversations about new surveillance and other police technologies. It's unclear why a company that has faith in its product would attempt to block independent review, which is all the more reason why buyers should know this about those companies.