
Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk

13 June 2024 at 13:28
Life360 CEO Chris Hulls

Location tracking service leaks PII, because—incompetence? Seems almost TOO easy.

The post Tile/Life360 Breach: ‘Millions’ of Users’ Data at Risk appeared first on Security Boulevard.

Connecticut Has Highest Rate of Health Care Data Breaches: Study

13 June 2024 at 09:19
health care data breaches cybersecurity

It’s no secret that hospitals and other health care organizations are among the top targets for cybercriminals. The ransomware attacks this year on UnitedHealth Group’s Change Healthcare subsidiary, nonprofit organization Ascension, and most recently the National Health Service in England illustrate not only the damage to these organizations’ infrastructure and the personal health data that’s..

The post Connecticut Has Highest Rate of Health Care Data Breaches: Study appeared first on Security Boulevard.

Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked

11 June 2024 at 11:15
Snowflake CISO Brad Jones

Not our fault, says CISO: “UNC5537” breached at least 165 Snowflake instances, including Ticketmaster, LendingTree and, allegedly, Advance Auto Parts.

The post Ticketmaster is Tip of Iceberg: 165+ Snowflake Customers Hacked appeared first on Security Boulevard.

Ticketmaster Data Breach and Rising Work from Home Scams

By: Tom Eston
10 June 2024 at 00:00

In episode 333 of the Shared Security Podcast, Tom and Scott discuss a recent massive data breach at Ticketmaster involving the data of 560 million customers, the blame game between Ticketmaster and third-party provider Snowflake, and the implications for both companies. Additionally, they discuss Live Nation’s ongoing monopoly investigation. In the ‘Aware Much’ segment, the […]

The post Ticketmaster Data Breach and Rising Work from Home Scams appeared first on Shared Security Podcast.



Meta uses “dark patterns” to thwart AI opt-outs in EU, complaint says

6 June 2024 at 17:25


The European Center for Digital Rights, known as Noyb, has filed complaints in 11 European countries to halt Meta's plan to start training its vaguely defined new AI technologies on the personal posts and pictures of European Union-based Facebook and Instagram users.

Meta's AI training data will also be collected from third parties and from using Meta's generative AI features and interacting with pages, the company has said. Additionally, Meta plans to collect information about people who aren't on Facebook or Instagram but are featured in users' posts or photos. The only exception from AI training is made for private messages sent between "friends and family," which will not be processed, Meta's blog said, but private messages sent to businesses and Meta are fair game. And any data collected for AI training could be shared with third parties.

"Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta's new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future 'artificial intelligence technology,'" Noyb alleged in a press release.


Microsoft Recall is a Privacy Disaster

6 June 2024 at 13:20
Microsoft CEO Satya Nadella, with superimposed text: “Security”

It remembers everything you do on your PC. Security experts are raging at Redmond to recall Recall.

The post Microsoft Recall is a Privacy Disaster appeared first on Security Boulevard.

Generative AI and Data Privacy: Navigating the Complex Landscape

Generative AI

By Neelesh Kripalani, Chief Technology Officer, Clover Infotech

Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

Not securing data during collection or processing: Generative AI raises significant data privacy concerns due to its need for vast amounts of diverse data, often including sensitive personal information that is collected without explicit consent and is difficult to anonymize effectively. Model inversion attacks and data leakage risks can expose private information, while biases in training data can lead to unfair or discriminatory outputs.

The risk of generated content: The ability of generative AI to produce highly realistic fake content raises serious concerns about its potential for misuse. Whether creating convincing deepfake videos or generating fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals’ reputations.

Lack of accountability and transparency: Since GenAI models operate through complex layers of computation, it is difficult to get visibility into how these systems arrive at their outputs. This complexity makes it difficult to track the specific steps and factors that lead to a particular decision or output, which not only hinders trust and accountability but also complicates the tracing of data usage and makes it tedious to ensure compliance with data privacy regulations. Additionally, unidentified biases in the training data can lead to unfair outputs, and the creation of highly realistic but fake content, like deepfakes, poses risks to content authenticity and verification. Addressing these issues requires improved explainability, traceability, and adherence to regulatory frameworks and ethical guidelines.

Lack of fairness and ethical considerations: Generative AI models can perpetuate or even exacerbate existing biases present in their training data. This can lead to unfair treatment or misrepresentation of certain groups, raising ethical issues.

Here’s How Enterprises Can Navigate These Challenges

Understand and map the data flow: Enterprises must maintain a comprehensive inventory of the data that their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data flow map to understand how data moves through their systems.

Implement strong data governance: In line with the principle of data minimization, enterprises must collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. In addition, they should develop and enforce robust data privacy policies and procedures that comply with relevant regulations.

Ensure data anonymization and pseudonymization: Techniques such as anonymization and pseudonymization can be implemented to reduce the chances of data re-identification (a minimal sketch follows this list).

Strengthen security measures: Implement further safeguards such as encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices, and strive to use data responsibly and ethically. Further, they should regularly train employees on data privacy best practices to effectively manage the challenges posed by Generative AI while leveraging its benefits responsibly and ethically.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
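To make the anonymization and pseudonymization step concrete, here is a minimal Python sketch, assuming an HMAC-based approach with a managed secret key. The field names, key handling, and pipeline are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
import hmac

# Illustrative key only; in practice this would come from a key-management
# service, never be hard-coded, and be rotated on a defined schedule.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, repeatable pseudonym.

    HMAC-SHA256 keeps the mapping consistent (the same input always maps
    to the same token, so records still join) while being irreversible
    without the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize(record: dict, allowed: set) -> dict:
    """Data minimization: drop every field not needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in allowed}

# Hypothetical record headed for a GenAI training pipeline.
record = {"email": "jane@example.com", "age_band": "30-39", "query_text": "..."}
safe = minimize(record, {"age_band", "query_text"})
safe["user_token"] = pseudonymize(record["email"])
print(safe)
```

Note that keyed pseudonymization is reversible by anyone holding the key, which is why regulators still treat pseudonymized data as personal data; full anonymization requires stronger guarantees.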

Copilot+ Recall is ‘Dumbest Cybersecurity Move in a Decade’: Researcher

Copilot Recall privacy settings

A new Microsoft Windows feature dubbed Recall planned for Copilot+ PCs has been called a security and privacy nightmare by cybersecurity researchers and privacy advocates. Copilot Recall will be enabled by default and will capture frequent screenshots, or “snapshots,” of a user’s activity and store them in a local database tied to the user account. The potential for exposure of personal and sensitive data through the new feature has alarmed security and privacy advocates and even sparked a UK inquiry into the issue.

Copilot Recall Privacy and Security Claims Challenged

In a long Mastodon thread on the new feature, Windows security researcher Kevin Beaumont wrote, “I’m not being hyperbolic when I say this is the dumbest cybersecurity move in a decade. Good luck to my parents safely using their PC.”

In a blog post on Recall security and privacy, Microsoft said that processing and storage are done only on the local device and encrypted, but even Microsoft’s own explanations raise concerns: “Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

Security and privacy advocates take issue with assertions that the data is stored securely on the local device. If someone has a user’s password, or if a court orders that data be turned over for legal or law enforcement purposes, the amount of data exposed could be far greater with Recall than without it. Domestic abuse situations could be worsened. And hackers, malware and infostealers will have access to vastly more data than they would without Recall.

Beaumont said the screenshots are stored in a SQLite database, “and you can access it as the user including programmatically. It 100% does not need physical access and can be stolen.” He posted a video he said showed two Microsoft engineers gaining access to the Recall database folder with apparent ease, “with SQLite database right there.”

Beaumont said in a separate blog post that in his own testing with an off-the-shelf infostealer and Microsoft Defender for Endpoint — which detected the infostealer — "by the time the automated remediation kicked in (which took over ten minutes) my Recall data was already long gone."

"Recall enables threat actors to automate scraping everything you’ve ever looked at within seconds," he concluded.

Does Recall Have Cloud Hooks?

Beaumont also questioned Microsoft’s assertion that all this is done locally. “So the code underpinning Copilot+ Recall includes a whole bunch of Azure AI backend code, which has ended up in the Windows OS,” he wrote on Mastodon. “It also has a ton of API hooks for user activity monitoring. It opens a lot of attack surface. ... They really went all in with this and it will have profound negative implications for the safety of people who use Microsoft Windows.”

Data May Not Be Completely Deleted

And sensitive data deleted by users will still be saved in Recall screenshots. “There's no feature to delete screenshots of things you delete while using your PC,” Beaumont said. “You would have to remember to go and purge screenshots that Recall makes every few seconds. If you or a friend use disappearing messages in WhatsApp, Signal etc, it is recorded regardless.”

One commenter said Copilot Recall seems to raise compliance issues too, in part by creating additional unnecessary data that could survive deletion requests. “[T]his comprehensively fails PCI and GDPR immediately and the SOC2 controls list ain't looking so good either,” the commenter said. Leslie Carhart, Director of Incident Response at Dragos, replied that “the outrage and disbelief are warranted.”

A second commenter noted, “GDPR has a very simple concept: Data Minimization. Quite simply, only store data that you actually have a legitimate, legal purpose for; and only for as long as necessary. Right there, this fails in spectacular fashion on both counts. It's going to store vast amounts of data for no specific purpose, potentially for far longer than any reasonable use of that data.”

It remains to be seen if Microsoft will make any modifications to Recall to quell concerns before it officially ships. If not, security and privacy experts may find themselves busier than ever.

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes

ICO Warns, Chat GPT, Chat Bot

The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products.

The warning follows the conclusion of an investigation by the U.K.’s Information Commissioner’s Office (ICO) into Snap, Inc.’s launch of the ‘My AI’ chatbot. The investigation focused on the company’s approach to assessing data protection risks. The ICO’s early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration.

My AI was an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share an average of over 4.75 billion Snaps each day. The My AI bot uses OpenAI's GPT technology to answer questions, provide recommendations and chat with users. It can respond to typed or spoken information and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers since February 27, 2023, “My AI” was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6, over a “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK, including children aged 13 to 17.

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
Following the ICO’s investigation, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’ and demonstrated to the ICO that it had implemented suitable mitigations.

“The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said.

Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.”

The social media platform has integrated safeguards and tools such as blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI.

The ICO’s investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals' data privacy and protection rights.

The final Commissioner’s decision regarding Snap's ‘My AI’ chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

How to protect yourself from online harassment

10 April 2024 at 15:19

It takes a little to receive a lot of online hate today, from simply working as a school administrator to playing a role in a popular movie or video game.

But these moments of personal crisis have few immediate solutions, as the current proposals to curb and stem online harassment zero in on the systemic—such as changes in data privacy laws to limit the personal information that can be weaponized online, or calls for major social media platforms to better moderate hateful content and its spread.

Such structural shifts can take years (if they take place at all), which can leave today’s victims feeling helpless.

There are, however, a few steps that everyday people can take, starting now, to better protect themselves against online hate and harassment campaigns. And thankfully, none of them involve “just getting off the internet,” a suggestion that, according to Leigh Honeywell, is both ineffective and unwanted.

“The [idea that the] answer to being bullied is that you shouldn’t be able to participate in public life—I don’t think that’s okay,” said Honeywell, CEO and co-founder of the digital safety consultancy Tall Poppy.

Speaking to me on the Lock and Code podcast last month, Honeywell explained that Tall Poppy’s defense strategies to online harassment incorporate best practices from Honeywell’s prior industry—cybersecurity.

Here are a few steps that people can proactively take to limit online harassment before it happens.

Get good at Googling yourself

One of the first steps in protecting yourself from online harassment is finding out what information about you is already available online. This is because, as Honeywell said, much of that information can be weaponized for abuse.

Picture an angry diner posting a chef’s address on Yelp alongside a poor review, or a complete stranger sending in a fake bomb threat to a school address, or a real-life bully scraping the internet for embarrassing photos of someone they want to harass.  

All this information could be available online, and the best way to know if it exists is to do the searching yourself.

As for where to start?

“First name, last name, city name, or other characteristics about yourself,” Honeywell said, listing what, specifically, to search online.

It’s important to understand that the online search itself may not bring immediate results, but it will likely reveal active online profiles on platforms like LinkedIn, X (formerly Twitter), Facebook, and Instagram. If those profiles are public, an angry individual could scrape relevant information and use it to their advantage. Even a LinkedIn profile could be weaponized by someone who calls in fake complaints to a person’s employer, trying to have them fired from their position.

In combing through the data that you can find about yourself online, Honeywell said people should focus on what someone else could do with that data.

“If an adversary was trying to find out information about me, what would they find?” Honeywell said. “If they had that information, what would they do with it?”

Take down what you can

You’ve found what an adversary might use against you online. Now it’s time to take it down.

Admittedly, this can be difficult in the United States, as Americans are not protected by a national data privacy law that gives them the right to request their data be deleted from certain websites, platforms, and data brokers.

Where Americans could find some help, however, is from online resources and services that streamline the data removal process that is enshrined in some state laws. These tools, like the iOS app Permission Slip, released by Consumer Reports in 2022, show users what types of information companies are collecting about them, and give users the opportunity to request that such data be deleted.

Separately, Google released an online tool in 2023 through which users can request that certain search results containing their personal information be removed. You can learn more about the tool, called “Results about you,” here.

When all else fails, Honeywell said that people shouldn’t be afraid to escalate the situation to their state’s regulators. That could include filing an official complaint with a State Attorney General, or with the Consumer Financial Protection Bureau, or the Federal Trade Commission.

“It sounds like the big guns,” Honeywell said, “but I think it’s important that, as individuals, we do what we can to hold the companies that are creating this mess accountable.”

Lock down your accounts

If an adversary can’t find your information through an online search, they may try to steal that information by hacking into your accounts, Honeywell said.

“If I’m mad at David, I’m going to hack into David’s email and share personal information,” Honeywell said. “That’s a fairly standard way that we see some of the worst online harassment attacks escalate.”

While hackers may have plenty of novel tools at their disposal, the best defenses you can implement today are the use of unique passwords and multifactor authentication.

Let’s first talk about unique passwords.

Each and every single one of your online accounts—from your email, to your social media profiles, to your online banking—should have a strong, unique password. And because you likely have dozens upon dozens of online accounts to manage, you should keep track of all those passwords with a devoted password manager.

Using unique passwords is one of the best defenses to company data breaches that expose user login credentials. Once those credentials are available on the dark web, hackers will buy those credentials so they can attempt to use them to gain access to other online accounts. You can prevent those efforts going forward by refusing to repeat passwords across any of your online accounts.
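As a rough illustration of what “strong, unique” means in practice, here is a sketch using Python’s standard secrets module; a password manager does the same job for you, plus encrypted storage and autofill:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically random password.

    secrets draws from the OS's CSPRNG, unlike the random module,
    which is predictable and unsuitable for anything secret.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per account, never reused across sites.
for account in ("email", "banking", "social"):
    print(account, generate_password())
```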

Now, start using multifactor authentication, if you’re not already.

Multifactor authentication is offered by most major companies and services today, from your bank, to your email, to your medical provider. By using multifactor authentication, also called MFA or 2FA, you will be required to “authenticate” yourself with more than just your password. This means that when you enter your username and password on a site or app, you will also be prompted to enter a separate code that is, in many cases, sent to your phone via text or an app.

MFA is one of the strongest protections to password abuse, ensuring that, even if a hacker has your username and password, they still can’t access your account because they will not have the additional authentication that is required to complete a login.
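When that separate code comes from an authenticator app rather than a text message, it is typically a time-based one-time password (TOTP, standardized in RFC 6238). Purely as an illustration, here is a minimal Python sketch of how such a code is derived from the shared secret exchanged when you scan the setup QR code; the secret below is a made-up demo value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given
    # by the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Demo secret only; real secrets are provisioned by the service at setup.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not have, a stolen password alone is no longer enough to log in.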

In the world of cybersecurity, these two defense practices are among the gold standard in stopping cyberattacks. In the world of online harassment, they’re much the same—they work to prevent the abuse of your online accounts.

Here to help

Online harassment is an isolating experience, but protecting yourself against it can be quite the opposite. Honeywell suggested that, for those who feel overwhelmed or who do not know where to start, they can find a friend to help.

“Buddy up,” Honeywell said. “If you’ve got a friend who’s good at Googling, work on each other’s profile, identify what information is out there about you.”

Honeywell also recommended going through data takedown requests together, as the processes can be “extremely tedious” and some of the services that promise to remove your information from the internet are really only trying to sell you a service.

If you’re still wondering what information about you is online and you aren’t comfortable with your way around Google, Malwarebytes has a new, free tool that reveals what information of yours is available on the dark web and across the internet at large. The Digital Footprint Portal, released in April, provides free, unlimited scans for everyone, and it can serve as a strong first step in understanding what information of yours needs to be locked down.

To learn what information about you has been exposed online, use our free scanner below.

Introducing the Digital Footprint Portal

10 April 2024 at 09:01

Digital security is about so much more than malware. That wasn’t always the case. 

When I started Malwarebytes more than 16 years ago, malware was the primary security concern—the annoying pop-ups, the fast-spreading viruses, the catastrophic worms—and throughout our company’s history, Malwarebytes routinely excelled against this threat. We caught malware that other vendors missed, and we pioneered malware detection methods beyond the signature-based industry standard.  

I’m proud of our success, but it wasn’t just our technology that got us here. It was our attitude.  

At Malwarebytes, we believe that everyone has the right to a secure digital life, no matter their budget, which is why our malware removal tool was free when it launched and remains free today. Our ad blocking tool, Browser Guard, is also available to all free of charge. This was very much not the norm in cybersecurity, but I believe it was—and will always be—the right thing to do.

Today, I am proud to add to our legacy of empowering individuals regardless of their wallet by releasing a new, free tool that better educates and prepares people for modern threats that abuse exposed data to target online identities. I’d like to welcome everyone to try our new Digital Footprint Portal.  

See your exposed data in our new Digital Footprint Portal.

By simply entering an email address, anyone can discover what information of theirs is available on the dark web to hackers, cybercriminals, and scammers. From our safe portal, everyday people can view past password breaches, active social media profiles, potential leaks of government ID info, and more.  

More than a decade ago, Malwarebytes revolutionized the antivirus industry by prioritizing the security of all individuals. Today, Malwarebytes is also revolutionizing digital life protection by safeguarding the data that serves as the backbone of your identity, your privacy, your reputation, and your well-being online.

Why data matters 

I can’t tell you how many times I’ve read that “data is the new oil” without reading any explanations as to why people should care.  

Here’s my attempt at clarifying the matter: Too much of our lives are put online without our control.  

Creating a social media account requires handing over your full name and birthdate. Completing any online shopping order requires detailing your address and credit card number. Getting approved for a mortgage requires the exchange of several documents that reveal your salary and your employer. Buying a plane ticket could necessitate your passport info. Messaging your doctor could involve sending a few photos that you’d like to keep private.  

As we know, a lot of this data is valuable to advertisers—this is what pundits focus on when they invoke the value of “oil” in discussing modern data collection—but this data is also valuable to an entirely separate group that has learned to abuse private information in novel and frightening ways: Cybercriminals.  

Long ago, cybercriminals would steal your username and password by fooling you with an urgently worded phishing email. Today, while this tactic is still being used, there’s a much easier path to data theft. Cybercriminals can simply buy your information on the dark web.  

That information can include credit card numbers—where the risk of financial fraud is obvious—and even more regulated forms of identity, like Social Security Numbers and passport info. Equipped with enough forms of “proof,” online thieves can fool a bank into routing your money elsewhere or trick a lender into opening a new line of credit in your name.  

Where the risk truly lies, however, is in fraudulent account access.  

If you’ve ever been involved in a company’s data breach (which is extremely likely), there’s a chance that the username and password that were associated with that data breach can be bought on the dark web for just pennies. Even though each data breach involves just one username and password for each account, cybercriminals know that many people frequently reuse passwords across multiple accounts. After illegally purchasing your login credentials that were exposed in one data breach, thieves will use those same credentials to try to log into more popular, sensitive online accounts, like your online banking, your email, and your social media.  

If any of these attempts at digital safe-cracking works, the potential for harm is enormous.  

With just your email login and password, cybercriminals can ransack photos that are stored in an associated cloud drive and use those for extortion. They can search for attachments that reveal credit card numbers, passport info, and ID cards and then use that information to fool a bank into letting them access your funds. They can pose as you in bogus emails and make fraudulent requests for money from your family and friends. They can even change your password and lock you out forever. 

This is the future of personal cybercrime, and as a company committed to stopping cyberthreats everywhere, we understand that we have a role to play in protecting people.  

We will always stop malware. We will always advise to create and use unique passwords and multifactor authentication. But today, we’re expanding our responsibility and helping you truly see the modern threats that could leverage your data.  

With the Digital Footprint Portal, who you are online is finally visible to you—not just cybercriminals. Use it today to understand where your data has been leaked, what passwords have been exposed, and how you can protect yourself online.  
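The Digital Footprint Portal is Malwarebytes’ own service, and the sketch below is not how it works internally. But one well-known pattern for checking password exposure without disclosing the secret itself is the k-anonymity range query used by the public Pwned Passwords API: only the first five characters of the password’s SHA-1 hash ever leave your machine. A minimal sketch, for illustration:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora.

    Only the first 5 hex characters of the SHA-1 hash are sent; all
    matching hash suffixes come back for local comparison, so the
    service never sees the password or even its full hash.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(breach_count("password123"))  # a very large number, unsurprisingly
```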

Digitally safe 

Malwarebytes and the cybersecurity industry at large could not have predicted today’s most pressing threats against online identities and reputations, but that doesn’t mean we get to ignore them. The truth is that Malwarebytes was founded with a belief broader than anti-malware protection. Malwarebytes was founded to keep people safe.  

As cybercriminals change their tactics, as scammers needle their way onto online platforms, and as thieves steal and abuse the sensitive data that everyone places online, Malwarebytes will always stay one step ahead. The future isn’t about worms, viruses, Trojans, scams, pig butchering, or any other single threat. It’s about holistic digital life protection. We’re excited to help you get there.
