Today — 18 June 2024 — Cybersecurity

Duo Charged with Operating $430 Million Dark Web Marketplace

Empire Market

Two suspected administrators of a $430 million dark web marketplace are facing the possibility of life sentences in the United States. The U.S. Department of Justice (DOJ) has charged Thomas Pavey, 38, and Raheim Hamilton, 28, with managing "Empire Market" from 2018 to 2020, and for previously selling counterfeit U.S. currency on AlphaBay, a now-defunct criminal market. The Justice Department alleges that Pavey and Hamilton facilitated nearly four million transactions on Empire Market, which involved drugs such as heroin, methamphetamine and cocaine, as well as counterfeit currency and stolen credit card information. Pavey is from Ormond Beach, Florida, and Hamilton is from Suffolk, Virginia. The indictment claims that they initially collaborated on selling counterfeit U.S. currency on AlphaBay. After AlphaBay was shut down in a global law enforcement operation in July 2017, Hamilton and Pavey launched Empire Market on February 1, 2018.

Operation of Empire Market

Empire Market featured categories such as Fraud, Drugs & Chemicals, Counterfeit Items, and Software & Malware. The indictment mentions at least one instance where counterfeit U.S. currency was sold to an undercover law enforcement agent on the platform. Transactions were conducted in cryptocurrency, and the platform even allowed users to rate sellers. Hamilton and Pavey allegedly managed Empire Market until August 22, 2020. During the investigation, the DOJ seized $75 million worth of cryptocurrency, along with cash and precious metals, though it remains unclear whether these were obtained through raids on the suspects' properties.

New Dark Web Marketplaces Spring Up

This case is part of a broader trend in which former users of one dark web marketplace create new platforms after law enforcement crackdowns. For example, after AlphaBay's closure, some vendors moved on to create new marketplaces or tools such as Skynet Market. Another notable cybercriminal forum, BreachForums, has encountered issues recently while attempting to resume operations after law enforcement actions. ShinyHunters – who had reportedly retired after tiring of the pressure of running a notorious hacker forum – returned on June 14 to announce that the forum is now under the ownership of a threat actor operating under the new handle "Anastasia." It is not yet clear whether the move will quell concerns that the forum has been taken over by law enforcement after the May 15 FBI-led takedown, but for now, BreachForums is up and running under its .st domain. The arrests of Pavey and Hamilton underscore ongoing efforts by law enforcement to dismantle dark web marketplaces that facilitate illegal activities and highlight the significant legal consequences for those involved in such operations. Pavey and Hamilton are currently in custody, awaiting arraignment in a federal court in Chicago. They face numerous charges, including drug trafficking, computer fraud, counterfeiting and money laundering. Each charge carries a potential life sentence in federal prison.

NoName Carries Out Romania Cyberattack, Downs Portals of Government, Stock Exchange

Romania Government Cyberattack

Several pro-Russia hacker groups allegedly carried out a massive Distributed Denial-of-Service (DDoS) attack on Romania on June 18, 2024. The attack affected critical websites, including the Romanian government's official site and the portals of the country's stock exchange and financial institutions. It was allegedly conducted by NoName in collaboration with the Russian Cyber Army, HackNet, CyberDragon, and Terminus. The extent of the damage, however, remains unclear.

Details About Romania Cyberattack

According to NoName, the cyberattack was carried out on Romania for its pro-Ukraine stance in the Russia-Ukraine war. In its post on X, NoName claimed, “Together with colleagues shipped another batch of DDoS missiles to Romanian government websites.” The threat actor claimed to have attacked the following websites:
  • The Government of Romania: This is not the first time the country's official site has been attacked. In 2022, the pro-Russia hacker group Killnet claimed to have carried out cyberattacks on websites of the government and the Defense Ministry. At the time, however, the Romanian government said no data was compromised in the attack and the websites were soon restored.
  • National Bank of Romania: The National Bank of Romania is the central bank of Romania and was established in April 1880. Its headquarters are in the capital city of Bucharest.
  • Aedificium Bank for Housing: A banking firm that provides residential lending, home loans, savings, and financing services. It was founded in 2004 and operates in the European Union (EU) and the wider Europe, Middle East, and Africa (EMEA) region.
  • Bucharest Stock Exchange: The Bucharest Stock Exchange is the stock exchange of Romania located in Bucharest. As of 2023, there were 85 companies listed on the BVB.
Despite the bold claims made by the NoName group, the extent of the Romania cyberattack, details of any compromised data, and the full motive behind the attack remain undisclosed. A visual examination of the affected organizations' websites shows that all the listed sites are experiencing accessibility issues, ranging from "403 Forbidden" errors to prolonged loading times, indicating a probable disruption or compromise. The situation is dynamic and continues to unfold. It is imperative to approach this information cautiously, as unverified claims are not uncommon in the cybersecurity world. The alleged NoName attack highlights the persistent threat of cyberattacks against critical entities such as government organizations and financial institutions. However, official statements from the targeted organizations have yet to be released, leaving room for skepticism regarding the severity and authenticity of the claim. Until official communication is provided by the affected organizations, the true nature and impact of the alleged NoName attack remain uncertain.
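
Claims like these can be partially sanity-checked by probing whether the listed sites respond at all. Below is a minimal, hypothetical Python sketch of such a check; the URLs are placeholders rather than the actual targets, and a failed request only suggests, never proves, a DDoS.

```python
import requests

# Hypothetical list of sites to probe; replace with the URLs you are verifying.
SITES = [
    "https://www.example-gov.ro/",
    "https://www.example-exchange.ro/",
]

for url in SITES:
    try:
        # A short timeout keeps the check quick; sites under DDoS often hang or return 403.
        resp = requests.get(url, timeout=10)
        print(f"{url} -> HTTP {resp.status_code} in {resp.elapsed.total_seconds():.1f}s")
    except requests.RequestException as exc:
        print(f"{url} -> unreachable ({exc.__class__.__name__})")
```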

Romania Cyberattacks Are Not Uncommon

This isn’t the first instance of NoName targeting organizations in Romania. In March this year, NoName attacked the Ministry of Internal Affairs, the Service of Special Communications, and the Central Government. In February, over a hundred Romanian healthcare facilities were affected by a ransomware attack by an unknown hacker, with some doctors forced to resort to pen and paper.

How to Mitigate NoName DDoS Attacks

Mitigating NoName’s DDoS attacks requires cloud-based protection services and specialized filtering tools that can detect and absorb malicious traffic flows before they hit the servers. In some cases, antivirus software can detect the malware that turns compromised machines into the botnet nodes used to launch DDoS attacks. Robust, essential cyber hygiene practices also help: patching vulnerabilities and not opening phishing emails crafted to look like urgent communications from legitimate government organizations and other spoofed entities.
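
One building block of the traffic-filtering approach described above is per-client rate limiting: dropping requests from any source that exceeds a sane rate before they reach the application. The sketch below is a minimal token-bucket illustration in Python, not a production defense; real DDoS mitigation is normally handled at the network edge or by a cloud scrubbing provider.

```python
import time
from collections import defaultdict

RATE = 5.0    # allowed requests per second, per client
BURST = 10.0  # short bursts tolerated above the steady rate

# Per-client token buckets: tokens remaining and the time of the last refill.
buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(client_ip: str) -> bool:
    """Return True if this request should be served, False if it should be dropped."""
    bucket = buckets[client_ip]
    now = time.monotonic()
    # Refill tokens proportionally to the time elapsed since the last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False

# Example: a client hammering the server quickly exhausts its tokens.
for i in range(15):
    print(i, allow("203.0.113.7"))
```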

META Stealer Enhances Stealth with Cryptographic Builds in v5.0 Update

META stealer v5.0

META stealer v5.0 has recently launched, introducing a set of advanced features for the infostealer. This latest version adds TLS encryption between the build and the C2 panel, a significant enhancement similar to recent updates in other leading stealers such as Lumma and Vidar. The update announcement (screenshot below) emphasizes several key improvements aimed at enhancing functionality and security, including TLS integration to secure the communication channel between the build and the control panel. The upgrade highlights the malware developer's commitment to expanding the stealer's capabilities and reach. [Image: META stealer v5.0 update details (source: X)]

Decoding the New META Stealer v5.0: Features and Capabilities

The new META Stealer v5.0 update introduces a build system that lets users generate unique builds tailored to their specific requirements. This system is supported by the introduction of a "Stub token" currency, enabling users to create new Runtime stubs directly from the panel, which enhances flexibility and customization. Another notable addition is the "Crypt build" option, which encrypts builds so that they remain undetected at scan time, reinforcing the stealer's stealth capabilities and further hindering detection of the infostealer. Additionally, the update includes improvements to the panel's security and licensing systems: the redesigned panel incorporates enhanced protection measures, while the revamped licensing system aims to reduce operational disruptions for users.

Previous META Stealer Promises and Upgrades 

The makers of META Stealer released the new update on June 17, 2024, with a special focus on a new system for generating unique stubs per user, an approach that individualizes each build and underscores the stealer's continuous development. Previously, in February 2023, META Stealer underwent significant updates with version 4.3, which introduced features such as enhanced detection cleaning, the ability to create builds in multiple formats (including *.vbs and *.js), and integration with Telegram for build creation. These enhancements demonstrate META Stealer's continued targeting of unsuspecting victims. META Stealer continues to evolve with each update, reinforcing its position as a versatile and robust information stealer designed to meet the diverse needs of its user base while targeting victims globally.

NHS Ransomware Attack: What Makes Healthcare a Prime Target for Ransomware? – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: 1. Fraud Management & Cybercrime, Healthcare, Industry Specific. Rubrik’s Steve Stone on Reducing Data-Related Vulnerabilities in Healthcare. June 18, 2024. Steve Stone, head of Zero Labs, Rubrik. The recent ransomware attack on a key UK National Health Service IT vendor has forced two London hospitals to reschedule […]

The post NHS Ransomware Attack: What Makes Healthcare a Prime Target for Ransomware? – Source: www.databreachtoday.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.

Hackers Plead Guilty After Breaching Law Enforcement Portal – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: 1. Cybercrime, Fraud Management & Cybercrime, Government. Justice Says Sagar Steven Singh and Nicholas Ceraolo Doxed and Threatened Victims. Chris Riotta (@chrisriotta) • June 17, 2024. Image: Shutterstock. Two hackers pleaded guilty Monday in federal court to conspiring to commit computer intrusion and aggravated identity theft. Authorities […]

The post Hackers Plead Guilty After Breaching Law Enforcement Portal – Source: www.databreachtoday.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.

Police Dismantle Asian Crime Ring Behind $25M Android Fraud – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: 1. Fraud Management & Cybercrime, Geo Focus: Asia, Geo-Specific. Hackers Used Dozens of Servers to Distribute Malicious Android Apps. Jayant Chakravarti (@JayJay_Tech) • June 17, 2024. The Singapore Police Force arrested a man they said is a cybercrime ringleader from Malaysia. (Image: Public Affairs Department, Singapore Police […]

The post Police Dismantle Asian Crime Ring Behind $25M Android Fraud – Source: www.databreachtoday.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.

CISA Conducts First-Ever AI Security Incident Response Drill – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: 1. Artificial Intelligence & Machine Learning, Governance & Risk Management, Government. US Cyber Defense Agency Developing AI Security Incident Collaboration Playbook. Chris Riotta (@chrisriotta) • June 17, 2024. The Cybersecurity and Infrastructure Security Agency is crafting a comprehensive framework to unify government, industry and global partners in […]

The post CISA Conducts First-Ever AI Security Incident Response Drill – Source: www.databreachtoday.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.

Signal Foundation Warns Against EU's Plan to Scan Private Messages for CSAM

By: Newsroom
18 June 2024 at 12:22
A controversial proposal put forth by the European Union to scan users' private messages to detect child sexual abuse material (CSAM) poses severe risks to end-to-end encryption (E2EE), warned Meredith Whittaker, president of the Signal Foundation, which maintains the privacy-focused messaging service of the same name. "Mandating mass scanning of private communications fundamentally

Rethinking Democracy for the Age of AI

18 June 2024 at 07:04

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.

And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. I think by writing, so what you’re going to hear is the current draft of my writing—and my thinking. Everything is subject to change without notice.

OK, so let’s go.

We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.

The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.

Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.

James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology—social media—breaks them both.

So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem solving and feedback mechanisms.

That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.

This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.

Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.

But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.

More generally, the cost of our market economy is enormous. For example, $780 billion is spent world-wide annually on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there are other collateral damages, which are spread non-uniformly across people.

We have accepted these costs of capitalism—and democracy—because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.

Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.

Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self interest, the result will approach everyone’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems—complex, wicked, global problems—that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.

Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.

The latter is problem No. 2: What I refer to as “hacking” in my latest book: “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated—they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.

In today’s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.

This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better—and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.

We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: driver’s licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding everything around researching, developing, producing and dispensing. We have all these regulations because this stuff can kill you.

The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.

So what happens when a significant percentage of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.

And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.

So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.

Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.

OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.

Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide—or the people with the money.

Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only in the margins—and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.

I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.

Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.

To be fair, the democratic republic was the best form of government that mid 18th century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid 18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.

But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.

Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.

Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.

Why can’t we have a game where everybody wins?

This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.

Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws—and especially security technologies—are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.

In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.

But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.

Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.

Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.

I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.

This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war”—or climate migration—are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.

We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense. And a metric more suited to the environment we’re in right now.

Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.

In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.

Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.

This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking—and also red teaming. In this context, both journalism and civil society are important checks on government.

I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.

Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.

This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.

We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.

Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?

Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have. Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation. It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.

But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.

A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war—right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.

And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.

So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.

Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, helping people reach consensus. But it is an independent good that we humans remain engaged in—and in charge of—the process of governance.

And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.

We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.

So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed, and scalable.

I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about—incentives, hacking, power, complexity—also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.

I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.

To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition and at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.

This feels like a challenge worthy of our security expertise.

This text is the transcript from a keynote speech delivered during the RSA Conference in San Francisco on April 25, 2023. It was previously published in Cyberscoop. I thought I posted it to my blog and Crypto-Gram last year, but it seems that I didn’t.

Helpful tools to get started in IoT Assessments

18 June 2024 at 09:00

The Internet of Things (IoT) can be a daunting field to get into. With many different tools and products available on the market, it can be confusing to even know where to start. Having performed dozens of IoT assessments, I felt it would be beneficial to compile a basic list of items that are essential for getting started testing embedded devices. The tools covered in this post are primarily used to interact with the debug interfaces of embedded devices; however, many of them have multiple functions, from reading data off a memory chip to removing components from the physical circuit board. I would like to note that neither I, nor Rapid7, benefit in any way from the sale of any of these products. We honestly believe they are useful tools for any beginner.

1) Serial Debugger

One of the most used items when it comes to IoT testing is a device for interfacing with the low-speed debug interfaces available on embedded devices. Gaining access to the debug interface on an embedded device is the easiest way to get a look under the hood of how the device operates. One of the most popular and readily available devices on the market currently is the Tigard.


The Tigard is a great open-source tool that supports all the commonly used interfaces you might encounter on modern-day embedded devices: Universal Asynchronous Receiver-Transmitter (UART), Joint Test Action Group (JTAG), Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), and Serial Wire Debug (SWD) connections. This device allows you to connect to various serial consoles or even extract the contents of commonly found flash memory chips. It is powered over USB-C and can also supply the commonly used voltages needed to power components when required.

Link: https://www.crowdsupply.com/securinghw/tigard
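
Once the Tigard's UART pins are wired to a target's serial header, it enumerates as an ordinary USB serial adapter, so a few lines of Python with pyserial are enough to watch a boot log. The following is a minimal sketch; the device path and baud rate are assumptions that vary by host and target.

```python
import serial  # pip install pyserial

# The Tigard exposes two USB serial interfaces; the UART side often appears as
# the second one (e.g. /dev/ttyUSB1 on Linux). 115200 baud is a common default.
PORT = "/dev/ttyUSB1"
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as console:
    # Print whatever the target emits: bootloader banner, kernel log, shell prompt.
    for _ in range(200):
        line = console.readline()
        if line:
            print(line.decode(errors="replace").rstrip())
```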

2) PCBite Probes

A tool that saves a ton of time when connecting to serial interfaces and on-board components is a set of PCBite probes. Without these probes, you would often have to resort to soldering on header pins or trying to attach to onboard components using probe connectors.


The starter-level probe set includes four hands-free probes, a set of PCB holders, a magnetic base, and accessories. Oftentimes embedded devices contain small components on the circuit board that are not easily accessible due to size constraints. These probes allow quick, solder-free connections to be made to embedded devices. All you need to do is position the spring-loaded probes on areas of the circuit board and connect the included DuPont wires to either a logic analyzer or a serial debugger to interface with the target device. The included circuit board holders are a nice touch to ensure the circuit board is kept firmly in position while working.

Link: https://sensepeek.com/pcbite-20

3) Rework Station

While working with embedded devices, there might be scenarios you run into that involve removing small components from the embedded device for offline analysis. There are many options for rework stations out on the internet, all with various levels of price and functionality. A model that hits the sweet spot of price and functionality is the Aoyue 968A+ Professional SMD Digital Hot Air Rework Station.


This rework station includes a number of tools to make any reworking job easy in one simple package: a soldering iron, hot air rework gun, vacuum pickup tool, and a fume extractor. There are many times when performing embedded testing that it is necessary to either solder wires onto connections or remove components from the board for data extraction. The 70-watt soldering iron and 550-watt hot air gun provide plenty of power for quick soldering jobs and component rework.

Link: https://www.amazon.com/Aoyue-968A-Digital-Rework-Station/dp/B006FA481G?th=1

4) Logic Analyzer

Another important tool to have on hand when testing embedded devices is a logic analyzer. Many times, you will find that the debug port on an embedded device is not labeled on the circuit board. That is when a logic analyzer comes in handy to identify what the various signals on the board are without unnecessary guesswork. Logic analyzers capture and decode signals found on the board, identifying and decoding protocols such as UART, SPI, and I2C. There are many on the market, but the sweet spot for price and functionality is the Saleae Logic 8.


Saleae offers many different models of logic analyzers at different price points. Typically, the base model, which supports 8 channels at a maximum speed of 100 MS/s, is sufficient for the majority of cases; however, they do offer additional models that support a larger number of channels at higher speeds. Saleae includes the Logic 2 software, which allows you to seamlessly interact with the device, identify protocols, and decode signals on the board.

Link: https://usd.saleae.com/products/saleae-logic-8
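
To make the protocol-decoding step concrete, the toy Python sketch below does what a UART decoder fundamentally does with captured samples: find the start bit, slice the waveform at the bit period, and reassemble bytes. It assumes an idealized capture with exactly one sample per bit and 8N1 framing, and is in no way a substitute for the Logic 2 software.

```python
def decode_uart_8n1(samples):
    """Decode an idle-high UART line sampled at exactly one sample per bit (8N1)."""
    out = []
    i = 0
    while i < len(samples):
        if samples[i] == 0:                # start bit (line pulled low)
            bits = samples[i + 1:i + 9]    # eight data bits, LSB first
            byte = sum(bit << n for n, bit in enumerate(bits))
            out.append(byte)
            i += 10                        # start + 8 data + stop bit
        else:
            i += 1                         # idle, keep scanning
    return bytes(out)

# Idle line, then 'H' (0x48) and 'i' (0x69) framed as start/data/stop bits.
line = [1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(decode_uart_8n1(line))  # b'Hi'
```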

As we've explored in this blog post, there are many options on the market for conducting detailed analysis of embedded devices. The tools available come in at different price points and offer various levels of functionality and ease of interfacing with embedded devices. The goal of this guide is not to provide a comprehensive list of all available options, but rather to cover the basic tools used to begin your IoT journey.

Cybercriminals Exploit Free Software Lures to Deploy Hijack Loader and Vidar Stealer

By: Newsroom
18 June 2024 at 09:30
Threat actors are luring unsuspecting users with free or pirated versions of commercial software to deliver a malware loader called Hijack Loader, which then deploys an information stealer known as Vidar Stealer. "Adversaries had managed to trick users into downloading password-protected archive files containing trojanized copies of a Cisco Webex Meetings App (ptService.exe)," Trellix security

CMMC 1.0 & CMMC 2.0 – What’s Changed?

This blog delves into CMMC, the introduction of CMMC 2.0, what's changed, and what it means for your business.

The post CMMC 1.0 & CMMC 2.0 – What’s Changed? appeared first on Scytale.

The post CMMC 1.0 & CMMC 2.0 – What’s Changed? appeared first on Security Boulevard.

Start building your CRA compliance strategy now

18 June 2024 at 03:00

In March 2024, the European Parliament overwhelmingly approved the EU Cyber Resilience Act, or CRA, which will now be formally adopted with the goal of improving the cybersecurity of digital products. It sets out to do this by establishing essential requirements for manufacturers to ensure their products reach the market with fewer vulnerabilities.

The post Start building your CRA compliance strategy now appeared first on Security Boulevard.

Navigating Retail: Overcoming the Top 3 Identity Security Challenges

18 June 2024 at 01:33

As retailers vie for customers in an increasingly competitive marketplace, they invest a great deal of resources in becoming household names. But brand recognition is a double-edged sword when it comes to cybersecurity. The bigger your name, the bigger the cyber target on your back. Data breaches in the retail sector cost an average of $3.28 million...

The post Navigating Retail: Overcoming the Top 3 Identity Security Challenges appeared first on Silverfort.

The post Navigating Retail: Overcoming the Top 3 Identity Security Challenges appeared first on Security Boulevard.

MEDUSA Ransomware Group Demands $220,000 from US Institutions, Threatens Data Exposure

MEDUSA Ransomware

Threat Actors (TAs) associated with the notorious MEDUSA ransomware have escalated their activities and allegedly targeted two institutions in the USA. In a scenario mirroring its previous attacks, the group has not divulged critical information, such as the type of compromised data. It has, however, demanded a ransom of US$120,000 from Fitzgerald, DePietro & Wojnas CPAs, P.C. and US$100,000 from Tri-City College Prep High School to stop leaking the internal data of the concerned organizations.

Understanding the MEDUSA Ransomware Attack

One of the two institutions targeted by MEDUSA is Tri-City College Prep High School, a public charter middle and high school located in Prescott, Arizona, USA. The threat actor claims to have access to 1.2 GB of the school's data and has threatened to publish it within 7–8 days. The other organization the group claims to have targeted is Fitzgerald, DePietro & Wojnas CPAs, P.C., an accounting firm based in Utica, New York, USA. The group claims to have access to 92.5 GB of the firm's data and has threatened to publish it within 8–9 days. Despite the tall claims made by the ransomware group, the official websites of the targeted companies appear to be fully functional, with no signs of any foul activity. The organizations, however, have not yet reacted to the alleged cyberattack, leaving the claims made by the ransomware group unverified. This article will be updated once the respective organizations respond to the claims. The absence of confirmation raises questions about the authenticity of the ransomware claim. It remains to be seen whether the tactic employed by the MEDUSA group is to garner attention or whether there are ulterior motives attached to their actions. Only an official statement by the affected organizations can reveal the true nature of the situation. However, if the claims made by the MEDUSA ransomware group do turn out to be true, the consequences could be sweeping. The potential leak of sensitive data could pose a significant threat to the affected organizations and their staff, students and employees.

Who is the MEDUSA Ransomware Group?

MEDUSA first came into the limelight in June 2021 and has since launched attacks on organizations in many countries, targeting multiple industries, including healthcare, education, manufacturing, and retail. Most of its victims, though, have been based in the United States of America. MEDUSA operates as a Ransomware-as-a-Service (RaaS) platform, providing would-be attackers with the malicious software and infrastructure required to carry out disruptive ransomware attacks. The ransomware group also runs a public Telegram channel on which TAs post stolen data, which could be an attempt to extort organizations and demand ransom.

History of MEDUSA Ransomware Attacks

Last week, the MEDUSA group claimed responsibility for the cyberattack on Australia’s Victoria Racing Club (VRC). To lend authenticity to the claim, MEDUSA shared thirty documents from the club and demanded a ransom of US$700,000 from anyone who wanted either to delete the data or to download it. The leaked data included financial details of gaming machines, prizes won by VRC members, customer invoices, marketing details, names, email addresses, and mobile phone numbers. The VRC confirmed the breach, with its chief executive Steve Rosich releasing a statement: "We are currently communicating with our employees, members, partners, and sponsors to inform them that the VRC recently experienced a cyber incident.” In 2024, MEDUSA has targeted four organizations across different countries, including France, Italy, and Spain. The group’s modus operandi remains constant, with announcements made on its dark web forum accompanied by deadlines and ransom demands. As organizations grapple with the fallout of cyberattacks by groups like MEDUSA, it is critical to remain cautious and implement strategic security measures.

Guidehouse and Nan McKay to Pay $11.3M for Cybersecurity Failures in COVID-19 Rental Assistance

Cybersecurity

Guidehouse Inc., based in McLean, Virginia, and Nan McKay and Associates, headquartered in El Cajon, California, have agreed to pay settlements totaling $11.3 million to resolve allegations under the False Claims Act. The settlements came from their failure to meet cybersecurity requirements in contracts aimed at providing secure online access for low-income New Yorkers applying for federal rental assistance during the COVID-19 pandemic.

What Exactly Happened?

In response to the economic hardships brought on by the pandemic, Congress enacted the Emergency Rental Assistance Program (ERAP) in early 2021. This initiative was designed to offer financial support to eligible low-income households in covering rent, rental arrears, utilities, and other housing-related expenses. Participating state agencies, such as New York's Office of Temporary and Disability Assistance (OTDA), were tasked with distributing federal funding to qualified tenants and landlords. Guidehouse assumed a pivotal role as the prime contractor for New York's ERAP, responsible for overseeing the ERAP technology and services. Nan McKay acted as Guidehouse's subcontractor, entrusted with delivering and maintaining the ERAP technology used by New Yorkers to submit online applications for rental assistance.

Admission of Violations and Settlement

Critical to the allegations were breaches in cybersecurity protocols. Both Guidehouse and Nan McKay admitted to failing their obligation to conduct required pre-production cybersecurity testing on the ERAP Application. Consequently, the ERAP system went live on June 1, 2021, only to be shut down twelve hours later by OTDA due to a cybersecurity breach. This data breach exposed the personally identifiable information (PII) of applicants, which was found accessible on the Internet. Guidehouse and Nan McKay acknowledged that proper cybersecurity testing could have detected and potentially prevented such breaches. Additionally, Guidehouse admitted to using a third-party data cloud software program to store PII without obtaining OTDA’s permission, violating their contractual obligations.

Government Response and Accountability

Principal Deputy Assistant Attorney General Brian M. Boynton of the Justice Department’s Civil Division emphasized the importance of adhering to cybersecurity commitments associated with federal funding. "Federal funding frequently comes with cybersecurity obligations, and contractors and grantees must honor these commitments,” said Boynton. “The Justice Department will continue to pursue knowing violations of material cybersecurity requirements aimed at protecting sensitive personal information.” U.S. Attorney Carla B. Freedman for the Northern District of New York echoed these sentiments, highlighting the necessity for federal contractors to prioritize cybersecurity obligations. “Contractors who receive federal funding must take their cybersecurity obligations seriously,” said Freedman. “We will continue to hold entities and individuals accountable when they knowingly fail to implement and follow cybersecurity requirements essential to protect sensitive information.” Acting Inspector General Richard K. Delmar of the Department of the Treasury emphasized the severe impact of these breaches on a program crucial to the government’s pandemic recovery efforts. He expressed gratitude for the partnership with the DOJ in addressing this breach and ensuring accountability. “These vendors failed to meet their data integrity obligations in a program on which so many eligible citizens depend for rental security, which jeopardized the effectiveness of a vital part of the government’s pandemic recovery effort,” said Delmar. “Treasury OIG is grateful for DOJ’s support of its oversight work to accomplish this recovery.” New York State Comptroller Thomas P. DiNapoli emphasized the critical role of protecting the integrity of programs like ERAP, vital to economic recovery. He thanked federal partners for their collaborative efforts in holding these contractors accountable. “This settlement sends a strong message to New York State contractors that there will be consequences if they fail to safeguard the personal information entrusted to them or meet the terms of their contracts,” said DiNapoli. “Rental assistance has been vital to our economic recovery, and the integrity of the program needs to be protected. I thank the United States Department of Justice, United States Attorney for the Northern District of New York Freedman and the United States Department of Treasury Office of the Inspector General for their partnership in exposing this breach and holding these vendors accountable.”

Initiative to Address Cybersecurity Risks

In response to such breaches, the Deputy Attorney General announced the Civil Cyber-Fraud Initiative on October 6, 2021. This initiative aims to hold accountable entities or individuals who knowingly endanger sensitive information through inadequate cybersecurity practices or misrepresentations. The investigation into these breaches was initiated following a whistleblower lawsuit under the False Claims Act. As part of the settlement, whistleblower Elevation 33 LLC, owned by a former Guidehouse employee, will receive approximately $1.95 million. Trial Attorney J. Jennifer Koh from the Civil Division's Commercial Litigation Branch, Fraud Section, and Assistant U.S. Attorney Adam J. Katz from the Northern District of New York led the case, with support from the Department of the Treasury OIG and the Office of the New York State Comptroller. These settlements highlight the imperative for rigorous cybersecurity measures in federal contracts, particularly in safeguarding sensitive personal information critical to public assistance programs. As the government continues to navigate evolving cybersecurity threats, it remains steadfast in enforcing accountability among contractors entrusted with protecting essential public resources.

Cybersecurity Experts Warn of Rising Malware Threats from Sophisticated Social Engineering Tactics

TA571 and ClearFake Campaign 

Cybersecurity researchers have uncovered a disturbing trend in malware delivery tactics built on sophisticated social engineering. These methods exploit users' trust to get them to run PowerShell scripts that compromise their own systems. Two threat clusters in particular, TA571 and the ClearFake campaign, have been observed leveraging this approach to spread malware. According to researchers, the threat actors associated with TA571 and the ClearFake cluster manipulate users into copying and pasting malicious PowerShell scripts under the guise of resolving legitimate issues.

Understanding the TA571 and ClearFake Campaign 

[caption id="attachment_77553" align="alignnone" width="1402"]TA571 and ClearFake Campaign  Example of a ClearFake attack chain. (Source: Proofpoint)[/caption] The TA571 campaign, first observed in March 2024, distributed emails containing HTML attachments that mimic legitimate Microsoft Word error messages. These messages coerce users to execute PowerShell scripts supposedly aimed at fixing document viewing issues.  Similarly, the ClearFake campaign, identified in April 2024, employs fake browser update prompts on compromised websites. These prompts instruct users to run PowerShell scripts to install what appears to be necessary security certificates, says Proofpoint. Upon interaction with the malicious prompts, users unwittingly copy PowerShell commands to their clipboard. Subsequent instructions guide them to paste and execute these commands in PowerShell terminals or via Windows Run dialog boxes. Once executed, these scripts initiate a chain of events leading to the download and execution of malware payloads such as DarkGate, Matanbuchus, and NetSupport RAT. The complexity of these attacks is compounded by their ability to evade traditional detection methods. Malicious scripts are often concealed within double-Base64 encoded HTML elements or obscured in JavaScript, making them challenging to identify and block preemptively.

Attack Variants, Evolution, and Recommendations

Since its initial observations, Proofpoint has noted the evolution of these techniques. TA571, for instance, has diversified its lures, sometimes directing victims to use the Windows Run dialog for script execution instead of PowerShell terminals. Meanwhile, ClearFake has incorporated blockchain-based techniques such as "EtherHiding" to host malicious scripts, adding a further layer of obfuscation. These developments highlight the critical importance of user education and stronger cybersecurity measures within organizations. Employees must be trained to recognize suspicious messages and prompts that ask them to execute PowerShell scripts from unknown sources, and organizations should deploy threat detection and blocking mechanisms capable of identifying malicious activity embedded within seemingly legitimate web pages or email attachments. While TA571 and ClearFake represent distinct threat actors with different objectives, their shared use of advanced social engineering and PowerShell abuse demands heightened vigilance from organizations worldwide.
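
As a rough illustration of the detection logic recommended above, the Python sketch below flags web pages that both write to the clipboard and embed unusually large Base64-looking blobs, a combination seen in these lures. The regular expressions, threshold, and file name are assumptions chosen for readability; real detection pipelines rely on far richer signals and context.

    import re

    # Very rough heuristics; tune patterns and thresholds for your environment.
    CLIPBOARD_APIS = re.compile(r"navigator\.clipboard\.writeText|execCommand\(['\"]copy['\"]\)")
    LARGE_BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

    def looks_suspicious(page_source: str) -> bool:
        """Flag pages that both write to the clipboard and carry a large Base64-looking blob."""
        return bool(CLIPBOARD_APIS.search(page_source)) and bool(LARGE_BASE64_BLOB.search(page_source))

    if __name__ == "__main__":
        with open("page.html", encoding="utf-8", errors="ignore") as f:
            if looks_suspicious(f.read()):
                print("Review manually: clipboard write plus large encoded blob detected")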

CISA & EAC Release Guide to Enhance Election Security Through Public Communication

Election Security

In a joint effort to enhance election security and public confidence, the Cybersecurity and Infrastructure Security Agency (CISA) and the U.S. Election Assistance Commission (EAC) have released a comprehensive guide titled “Enhancing Election Security Through Public Communications.” This guide on election security is designed for state, local, tribal, and territorial election officials who play a critical role as the primary sources of official election information.

Why Communication is Important in Election Security

Open and transparent communication with the American public is essential to maintaining trust in the electoral process. State and local election officials are on the front lines, engaging with the public and the media on numerous election-related topics. These range from election dates and deadlines to voter registration, candidate filings, voting locations, election worker recruitment, security measures, and the publication of results. The new guide aims to provide these officials with a strong framework and practical tools to develop and implement an effective, year-round communications plan. “The ability for election officials to be transparent about the elections process and communicate quickly and effectively with the American people is crucial for building and maintaining their trust in the security and integrity of our elections process,” stated CISA Senior Advisor Cait Conley. The election security guide offers practical advice on how to tailor communication plans to the specific needs and resources of different jurisdictions. It includes worksheets to help officials develop core components of their communication strategies. This approach recognizes the diverse nature of election administration across the United States, where varying local contexts require customized solutions. EAC Chairman Ben Hovland, Vice Chair Donald Palmer, Commissioner Thomas Hicks, and Commissioner Christy McCormick collectively emphasized the critical role of election officials as trusted sources of information. “This resource supports election officials to successfully deliver accurate communication to voters with the critical information they need before and after Election Day,” they said. Effective and transparent communication not only aids voters in casting their ballots but also helps instill confidence in the security and accuracy of the election results.

How Tailored Communication Enhances Election Security

The release of this guide on election security comes at a crucial time when trust in the electoral process is increasingly under scrutiny. In recent years, the rise of misinformation and cyber threats has posed significant challenges to the integrity of elections worldwide. By equipping election officials with the tools to communicate effectively and transparently, CISA and the EAC are taking proactive steps to safeguard the democratic process. One of the strengths of this guide is its emphasis on tailoring communication strategies to the unique needs of different jurisdictions. This is a pragmatic approach that acknowledges the diverse landscape of election administration in the U.S. It recognizes that a one-size-fits-all solution is not feasible and that local context matters significantly in how information is disseminated and received. Furthermore, the guide’s focus on year-round communication is a noteworthy aspect. Election security is not just a concern during election cycles but is a continuous process that requires ongoing vigilance and engagement with the public. By encouraging a year-round communication plan, the guide promotes sustained efforts to build and maintain public trust. However, while the guide is a step in the right direction, its effectiveness will largely depend on the implementation by election officials at all levels. Adequate training and resources must be provided to ensure that officials can effectively utilize the tools and strategies outlined in the guide. Additionally, there needs to be a concerted effort to address potential barriers to effective communication, such as limited funding or technological challenges in certain jurisdictions.

To Wrap Up

The “Enhancing Election Security Through Public Communications” guide by CISA and the EAC is a timely and necessary resource for election officials across the United States. As election officials begin to implement the strategies outlined in the guide, it is imperative that they receive the support and resources needed to overcome any challenges. Ultimately, the success of this initiative will hinge on the ability of election officials to engage with the public in a clear, accurate, and transparent manner, thereby reinforcing the security and integrity of the election process.

The Annual SaaS Security Report: 2025 CISO Plans and Priorities

18 June 2024 at 07:23
Seventy percent of enterprises are prioritizing investment in SaaS security by establishing dedicated teams to secure SaaS applications, as part of a growing trend of maturity in this field of cybersecurity, according to a new survey released this month by the Cloud Security Alliance (CSA). Despite economic instability and major job cuts in 2023, organizations drastically increased investment in

New Malware Targets Exposed Docker APIs for Cryptocurrency Mining

By: Newsroom
18 June 2024 at 05:41
Cybersecurity researchers have uncovered a new malware campaign that targets publicly exposed Docker API endpoints with the aim of delivering cryptocurrency miners and other payloads. Included among the tools deployed is a remote access tool that's capable of downloading and executing more malicious programs as well as a utility to propagate the malware via SSH, cloud analytics platform Datadog

Podcast Episode: AI in Kitopia

18 June 2024 at 03:05

Artificial intelligence will neither solve all our problems nor likely destroy the world, but it could help make our lives better if it’s both transparent enough for everyone to understand and available for everyone to use in ways that augment us and advance our goals — not for corporations or government to extract something from us and exert power over us. Imagine a future, for example, in which AI is a readily available tool for helping people communicate across language barriers, or for helping vision- or hearing-impaired people connect better with the world.


(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that Kit Walsh, EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects, and EFF Senior Staff Technologist Jacob Hoffman-Andrews, are working to bring about. They join EFF’s Cindy Cohn and Jason Kelley to discuss how AI shouldn’t be a tool to cash in, or to classify people for favor or disfavor, but instead to engage with technology and information in ways that advance us all. 

In this episode you’ll learn about: 

  • The dangers in using AI to determine who law enforcement investigates, who gets housing or mortgages, who gets jobs, and other decisions that affect people’s lives and freedoms. 
  • How “moral crumple zones” in technological systems can divert responsibility and accountability from those deploying the tech. 
  • Why transparency and openness of AI systems — including training AI on consensually obtained, publicly visible data — is so important to ensure systems are developed without bias and to everyone’s benefit. 
  • Why “watermarking” probably isn’t a solution to AI-generated disinformation. 

Kit Walsh is a senior staff attorney at EFF, serving as Director of Artificial Intelligence & Access to Knowledge Legal Projects. She has worked for years on issues of free speech, net neutrality, copyright, coders' rights, and other issues that relate to freedom of expression and access to knowledge, supporting the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Before joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard University's Berkman Klein Center for Internet and Society; earlier, she worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria. 

Jacob Hoffman-Andrews is a senior staff technologist at EFF, where he is lead developer on Let's Encrypt, the free and automated Certificate Authority; he also works on EFF's Encrypt the Web initiative and helps maintain the HTTPS Everywhere browser extension. Before working at EFF, Jacob was on Twitter's anti-spam and security teams. On the security team, he implemented HTTPS-by-default with forward secrecy, key pinning, HSTS, and CSP; on the anti-spam team, he deployed new machine-learned models to detect and block spam in real-time. Earlier, he worked on Google’s maps, transit, and shopping teams.


What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

KIT WALSH
Contrary to some marketing claims, AI is not the solution to all of our problems. So I'm just going to talk about how AI exists in Kitopia. And in particular, the technology is available for everyone to understand. It is available for everyone to use in ways that advance their own values rather than hard coded to advance the values of the people who are providing it to you and trying to extract something from you and as opposed to embodying the values of a powerful organization, public or private, that wants to exert more power over you by virtue of automating its decisions.
So it can make more decisions classifying people, figuring out whom to favor, whom to disfavor. I'm defining Kitopia a little bit in terms of what it's not, but to get back to the positive vision, you have this intellectual commons of research development of data that we haven't really touched on privacy yet, but data that is sourced in a consensual way and when it's, essentially, one of the things that I would love to have is a little AI muse that actually does embody my values and amplifies my ability to engage with technology and information on the Internet in a way that doesn't feel icky or oppressive and I don't have that in the world yet.

CINDY COHN
That’s Kit Walsh, describing an ideal world she calls “Kitopia”. Kit is a senior staff attorney at the Electronic Frontier Foundation. She works on free speech, net neutrality and copyright and many other issues related to freedom of expression and access to knowledge. In fact, her full title is EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects. So, where is Kitopia, you might ask? Well we can’t get there from here - yet. Because it doesn’t exist. Yet. But here at EFF we like to imagine what a better online world would look like, and how we will get there and today we’re joined by Kit and by EFF’s Senior Staff Technologist Jacob Hoffman-Andrews. In addition to working on AI with us, Jacob is a lead developer on Let's Encrypt, and his work on that project has been instrumental in helping us encrypt the entire web. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

JACOB HOFFMAN-ANDREWS
I think in my ideal world people are more able to communicate with each other across language barriers, you know, automatic translation, transcription of the world for people who are blind or for deaf people to be able to communicate more clearly with hearing people. I think there's a lot of ways in which AI can augment our weak human bodies in ways that are beneficial for people and not simply increasing the control that their governments and their employers have over their lives and their bodies.

JASON KELLEY
We’re talking to Kit and Jacob both, because this is such a big topic that we really need to come at it from multiple angles to make sense of it and to figure out the answer to the really important question, which is: how can AI actually make the world we live in a better place?

CINDY COHN
So while many other people have been trying to figure out how to cash in on AI, Kit and Jacob have been looking at AI from a public interest and civil liberties perspective on behalf of EFF. And they’ve also been giving a lot of thought to what an ideal AI world looks like.

JASON KELLEY
AI can be more than just another tool that’s controlled by big tech. It really does have the potential to improve lives in a tangible way. And that’s what this discussion is all about. So we’ll start by trying to wade through the hype, and really nail down what AI actually is and how it can and is affecting our daily lives.

KIT WALSH
The confusion is understandable because AI is being used as a marketing term quite a bit, rather than as an abstract concept, rather than as a scientific concept.
And the ways that I think about AI, particularly in the decision-making context, which is one of our top priorities in terms of where we think that AI is impacting people's rights, is first I think about what kind of technology are we really talking about because sometimes you have a tool that actually no one is calling AI, but it is nonetheless an example of algorithmic decision-making.
That also sounds very fancy. This can be a fancy computer program to make decisions, or it can be a buggy Excel spreadsheet that litigators discover is actually just omitting important factors when it's used to decide whether people get health care or not in a state health care system.

CINDY COHN
You're not making those up, Kit. These are real examples.

KIT WALSH
That’s not a hypothetical. Unfortunately, it’s not a hypothetical, and the people who litigated that case lost some clients because when you're talking about not getting health care that can be life or death. And machine learning can either be a system where you – you, humans, code a reinforcement mechanism. So you have sort of random changes happening to an algorithm, and it gets rewarded when it succeeds according to your measure of success, and rejected otherwise.
It can be training on vast amounts of data, and that's really what we've seen a huge surge in over the past few years, and that training can either be what's called unsupervised, where you just ask your system that you've created to identify what the patterns are in a bunch of raw data, maybe raw images, or it can be supervised in the sense that humans, usually low paid humans, are coding their views on what's reflected in the data.
So I think that this is a picture of a cow, or I think that this picture is adult and racy. So some of these are more objective than others, and then you train your computer system to reproduce those kinds of classifications when it makes new things that people ask for with those keywords, or when it's asked to classify a new thing that it hasn't seen before in its training data.
So that's really a very high level oversimplification of the technological distinctions. And then because we're talking about decision-making, it's really important who is using this tool.
Is this the government which has all of the power of the state behind it and which administers a whole lot of necessary public benefits - that is using decisions to decide who is worthy and who is not to obtain those benefits? Or, who should be investigated? What neighborhoods should be investigated?
We'll talk a little bit more about the use in law enforcement later on, but it's also being used quite a bit in the private sector to determine who's allowed to get housing, whether to employ someone, whether to give people mortgages, and that's something that impacts people's freedoms as well.

CINDY COHN
So Jacob, two questions I used to distill down on AI decision-making are, who is the decision-making supposed to be serving and who bears the consequences if it gets it wrong? And if we think of those two framing questions, I think we get at a lot of the issues from a civil liberties perspective. That sound right to you?

JACOB HOFFMAN-ANDREWS
Yeah, and, you know, talking about who bears the consequences when an AI or technological system gets it wrong, sometimes it's the person that system is acting upon, the person who's being decided whether they get healthcare or not and sometimes it can be the operator.
You know, it's, uh, popular to have kind of human in the loop, like, oh, we have this AI decision-making system that's maybe not fully baked. So there's a human who makes the final call. The AI just advises the human and, uh, there's a great paper by Madeleine Clare Elish describing this as a form of moral crumple zones. Uh, so, you may be familiar in a car, modern cars are designed so that in a collision, certain parts of the car will collapse to absorb the force of the impact.
So the car is destroyed but the human is preserved. And, in some human in the loop decision making systems often involving AI, it's kind of the reverse. The human becomes the crumple zone for when the machine screws up. You know, you were supposed to catch the machine screwup. It didn't screw up in over a thousand iterations and then the one time it did, well, that was your job to catch it.
And, you know, these are obviously, you know, a crumple zone in a car is great. A moral crumple zone in a technological system is a really bad idea. And it takes away responsibility from the deployers of that system who ultimately need to bear the responsibility when their system harms people.

CINDY COHN
So I wanna ask you, what would it look like if we got it right? I mean, I think we do want to have some of these technologies available to help people make decisions.
They can find patterns in giant data probably better than humans can most of the time. And we'd like to be able to do that. So since we're fixing the internet now, I want to stop you for a second and ask you how would we fix the moral crumple zone problem or what were the things we think about to do that?

JACOB HOFFMAN-ANDREWS
You know, I think for the specific problem of, you know, holding say a safety driver or like a human decision-maker responsible for when the AI system they're supervising screws up, I think ultimately what we want is that the responsibility can be applied all the way up the chain to the folks who decided that that system should be in use. They need to be responsible for making sure it's actually a safe, fair system that is reliable and suited for purpose.
And you know, when a system is shown to bring harm, for instance, you know, a self-driving car that crashes into pedestrians and kills them, you know, that needs to be pulled out of operation and either fixed or discontinued.

CINDY COHN
Yeah, it made me think a little bit about, you know, kind of a change that was made, I think, by Toyota years ago, where they let the people on the front line stop the line, right? Um, I think one thing that comes out of that is you need to let the people who are in the loop have the power to stop the system, and I think all too often we don't.
We devolve the responsibility down to that person who's kind of the last fair chance for something but we don't give them any responsibility to raise concerns when they see problems, much less the people impacted by the decisions.

KIT WALSH
And that’s also not an accident of the appeal of these AI systems. It's true that you can't hold a machine accountable really, but that doesn't deter all of the potential markets for the AI. In fact, it's appealing for some regulators, some private entities, to be able to point to the supposed wisdom and impartiality of an algorithm, which if you understand where it comes from, the fact that it's just repeating the patterns or biases that are reflected in how you trained it, you see it's actually, it's just sort of automated discrimination in many cases and that can work in several ways.
In one instance, it's intentionally adopted in order to avoid the possibility of being held liable. We've heard from a lot of labor rights lawyers that when discriminatory decisions are made, they're having a lot more trouble proving it now because people can point to an algorithm as the source of the decision.
And if you were able to get insight in how that algorithm were developed, then maybe you could make your case. But it's a black box. A lot of these things that are being used are not publicly vetted or understood.
And it's especially pernicious in the context of the government making decisions about you, because we have centuries of law protecting your due process rights to understand and challenge the ways that the government makes determinations about policy and about your specific instance.
And when those decisions and when those decision-making processes are hidden inside an algorithm then the old tools aren't always effective at protecting your due process and protecting the public participation in how rules are made.

JASON KELLEY
It sounds like in your better future, Kit, there's a lot more transparency into these algorithms, into this black box that's sort of hiding them from us. Is that part of what you see as something we need to improve to get things right?

KIT WALSH
Absolutely. Transparency and openness of AI systems is really important to make sure that as it develops, it develops to the benefit of everyone. It's developed in plain sight. It's developed in collaboration with communities and a wider range of people who are interested and affected by the outcomes, particularly in the government context though I'll speak to the private context as well. When the government passes a new law, that's not done in secret. When a regulator adopts a new rule, that's also not done in secret. There's either, sure, that's, there are exceptions.

CINDY COHN
Right, but that’s illegal.

JASON KELLEY
Yeah, that's the idea. Right. You want to get away from that also.

KIT WALSH
Yeah, if we can live in Kitopia for a moment where, where these things are, are done more justly, within the framework of government rulemaking, if that's occurring in a way that affects people, then there is participation. There's meaningful participation. There's meaningful accountability. And in order to meaningfully have public participation, you have to have transparency.
People have to understand what the new rule is that's going to come into force. And because of a lot of the hype and mystification around these technologies, they're being adopted under what's called a procurement process, which is the process you use to buy a printer.
It's the process you use to buy an appliance, not the process you use to make policy. But these things embody policy. They are the rule. Sometimes when the legislature changes the law, the tool doesn't get updated and it just keeps implementing the old version. And that means that the legislature's will is being overridden by the designers of the tool.

JASON KELLEY
You mentioned predictive policing, I think, earlier, and I wonder if we could talk about that for just a second because it's one way where I think we at EFF have been thinking a lot about how this kind of algorithmic decision-making can just obviously go wrong, and maybe even should never be used in the first place.
What we've seen is that it's sort of, you know, very clearly reproduces the problems with policing, right? But how does AI or this sort of predictive nature of the algorithmic decision-making for policing exacerbate these problems? Why is it so dangerous I guess is the real question.

KIT WALSH
So one of the fundamental features of AI is that it looks at what you tell it to look at. It looks at what data you offer it, and then it tries to reproduce the patterns that are in it. Um, in the case of policing, as well as related issues around decisions for pretrial release and parole determinations, you are feeding it data about how the police have treated people, because that's what you have data about.
And the police treat people in harmful, racist, biased, discriminatory, and deadly ways that it's really important for us to change, not to reify into a machine that is going to seem impartial and seem like it creates a veneer of justification for those same practices to continue. And sometimes this happens because the machine is making an ultimate decision, but that's not usually what's happening.
Usually the machine is making a recommendation. And one of the reasons we don't think that having a human in the loop is really a cure for the discriminatory harms is that humans are more likely to follow the AI if it gives them cover for a biased decision that they're going to make. And relatedly, some humans, a lot of people, develop trust in the machine and wind up following it quite a bit.
So in these contexts, if you really wanted to make predictions about where a crime was going to occur, well it would send you to Wall Street. And that's not, that's not the result that law enforcement wants.
But, first of all, you would actually need data about where crimes occur, and generally people who don't get caught by the police are not filling out surveys to say, here are the crimes I got away with so that you can program a tool that's going to do better at sort of reflecting some kind of reality that you're trying to capture. You only know how the system has treated people so far and all that you can do with AI technology is reinforce that. So it's really not an appropriate problem to try to solve with this technology.

CINDY COHN
Yeah, our friends at Human Rights Data Analysis Group who did some of this work said, you know, we call it predictive policing, but it's really predicting the police because we're using what the police already do to train up a model, and of course it's not going to fix the problems with how police have been acting in the past. Sorry to interrupt. Go on.

KIT WALSH
No, to build on that, by definition, it thinks that the past behavior is ideal, and that's what it should aim for. So, it's not a solution to any kind of problem where you're trying to change a broken system.

CINDY COHN
And in fact, what they found in the research was that the AI system will not only replicate what the police do, it will double down on the bias because it's seeing a small trend and it will increase the trend. And I don't remember the numbers, but it's pretty significant. So it's not just that the AI system will replicate what the police do. What they found in looking at these systems is that the AI systems increase the bias in the underlying data.
It's really important that we continue to emphasize the ways in which AI and machine learning are already being used and already being used in ways that people may not see, but dramatically impact them. But right now, what's front of mind for a lot of people is generative AI. And I think many, many more people have started playing around with that. And so I want to start with how we think about generative AI and the issues it brings. And Jacob, I know you have some thoughts about that.

JACOB HOFFMAN-ANDREWS
Yeah. To call back to, at the beginning you asked about, how do we define AI? I think one of the really interesting things in the field is that it's changed so much over time. And, you know, when computers first became broadly available, you know, people have been thinking for a very long time, what would it mean for a computer to be intelligent? And for a while we thought, wow, you know, if a computer could play chess and beat a human, we would say that's an intelligent computer.
Um, if a computer could recognize, uh, what's in an image, is this an image of a cat or a cow - that would be intelligence. And of course now they can, and we don't consider it intelligence anymore. And you know, now we might say if a computer could write a term paper, that's intelligence and I don't think we're there yet, but the development of chatbots does make a lot of people feel like we're closer to intelligence because you can have a back and forth and you can ask questions and receive answers.
And some of those answers will be confabulations and, but some percentage of the time they'll be right. And it starts to feel like something you're interacting with. And I think, rightly so, people are worried that this will destroy jobs for writers and for artists. And to an earlier question about, you know, what does it look like if we get it right, I think, you know, the future we want is one where people can write beautiful things and create beautiful things and, you know, still make a great living at it and be fulfilled and safe in their daily needs and be recognized for that. And I think that's one of the big challenges we're facing with generative AI.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. How to Fix the Internet is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our discussion with Kit and Jacob about AI: the good, the bad, and what could be better.

CINDY COHN
There’s been a lot of focus on the dark side of generative AI and the idea of using copyright to address those problems has emerged. We have worries about that as a way to sort out between good and bad uses of AI, right Kit?

KIT WALSH
Absolutely. We have had a lot of experience with copyright being used as a tool of censorship, not only against individual journalists and artists and researchers, but also against entire mediums for expression, against libraries, against the existence of online platforms where people are able to connect and copyright not only lasts essentially forever, it comes with draconian penalties that are essentially a financial death sentence for the typical person in the United States. So in the context of generative AI, there is a real issue with the potential to displace creative labor. And it's a lot like the issues of other forms of automation that displace other forms of labor.
And it's not always the case that an equal number of new jobs are created, or that those new jobs are available to the people who have been displaced. And that's a pretty big social problem that we have. In Kitopia, we have AI and it's used so that there's less necessary labor to achieve a higher standard of living for people, and we should be able to be excited about automation of labor tasks that aren't intrinsically rewarding.
One of the reasons that we're not is because the fruits of that increased production flow to the people who own the AI, not to the people who were doing that labor, who now have to find another way to trade their labor for money or else become homeless and starve and die, and that's cruel.
It is the world that we're living in so it's really understandable to me that an artist is going to want to reach for copyright, which has the potential of big financial damages against someone who infringes, and is the way that we've thought about monetization of artistic works. I think that way of thinking about it is detrimental, but I also think it's really understandable.
One of the reasons why the particular legal theories in the lawsuits against generative AI technologies are concerning is because they wind up stretching existing doctrines of copyright law. So in particular, the very first case against Stable Diffusion argued that you were creating an infringing derivative work when you trained your model to recognize the patterns in five billion images.
It's a derivative work of each and every one of them. And that can only succeed as a legal theory if you throw out the existing understanding of what a derivative work is, that it has to be substantially similar to a thing that it's infringing and that limitation is incredibly important for human creativity.
The elements of my work that you might recognize from my artistic influences in the ordinary course of artistic borrowing and inspiration are protected. I'm able to make my art without people coming after me because I like to draw eyes the same way as my inspiration or so on, because ultimately the work is not substantially similar.
And if we got rid of that protection, it would be really bad for everybody.
But at the same time, you can see how someone might say, why should I pay a commission to an artist if I can get something in the same style? To which I would say, try it. It's not going to be what you want because art is not about replicating patterns that are found in a bunch of training data.
It can be a substitute for stock photography or other forms of art that are on the lower end of how much creativity is going into the expression, but for the higher end, I think that part of the market is safe. So I think all artists are potentially impacted by this. I'm not saying only bad artists have to care, but there is this real impact.
Their financial situation is precarious already, and they deserve to make a living, and this is a bandaid because we don't have a better solution in place to support people and let them create in a way that is in accord with their values and their goals. We really don't have that either in the situation where people are primarily making their income doing art that a corporation wants them to make to maximize its products.
No artist wants to create assets for content. Artists want to express and create new beauty and new meaning and the system that we have doesn't achieve that. We can certainly envision better ones but in the meantime, the best tool that artists have is banding together to negotiate with collective power, and it's really not a good enough tool at this point.
But I also think there's a lot of room to ethically use generative AI if you're working with an artist and you're trying to communicate your vision for something visual, maybe you're going to use an AI tool in order to make something that has some of the elements you're looking for and then say this, this is what I want to pay you to, to draw. I want this kind of pose, right? But, but, more unicorns.

JASON KELLEY
And I think while we're talking about these sort of seemingly good, but ultimately dangerous solutions for the different sort of problems that we're thinking about now more than ever because of generative AI, I wanted to talk with Jacob a little bit about watermarking. And this is meant to solve a sort of problem of knowing what is and is not generated by AI.
And people are very excited about this idea that through some sort of, well, actually you just explain Jacob, cause you are the technologist. What is watermarking? Is this a good idea? Will this work to help us understand and distinguish between AI-generated things and things that are just made by people?

JACOB HOFFMAN-ANDREWS
Sure. So a very real and closely related risk of generative AI is that it is - it will, and already is - flooding the internet with bullshit. Uh, you know, many of the articles you might read on any given topic, these days the ones that are most findable are often generated by AI.
And so an obvious next step is, well, what if we could recognize the stuff that's written by AI or the images that are generated by AI, because then we could just skip that. You know, I wouldn't read this article cause I know it's written by AI or you can go even a step further, you could say, well, maybe search engines should downrank things that were written by AI or social networks should label it or allow you to opt out of it.
You know, there's a lot of question about, if we could immediately recognize all the AI stuff, what would we do about it? There's a lot of options, but the first question is, can we even recognize it? So right off the bat, you know, when ChatGPT became available to the public, there were people offering ChatGPT detectors. You know, you could look at this content and, you know, you can kind of say, oh, it tends to look like this.
And you can try to write something that detects its output, and the short answer is it doesn't work and it's actually pretty harmful. A number of students have been harmed because their instructors have run their work through a ChatGPT detector, an AI detector that has incorrectly labeled it.
There's not a reliable pattern in the output that you can always see. Well, what if the makers of the AI put that pattern there? And, you know, for a minute, let's switch from text based to image based stuff. Jason, have you ever gone to a stock photo site to download a picture of something?

JASON KELLEY
I sadly have.

JACOB HOFFMAN-ANDREWS
Yeah. So you might recognize the images they have there, they want to make sure you pay for the image before they use it. So there's some text written across it in a kind of ghostly white diagonal. It says, this is from say shutterstock.com. So that's a form of watermark. If you just went and downloaded that image rather than paying for the cleaned up version, there's a watermark on it.
So the concept of watermarking for AI provenance is that It would be invisible. It would be kind of mixed into the pixels at such a subtle level that you as a human can't detect it, but you know, a computer program designed to detect that watermark could so you could imagine the AI might generate a picture and then in the top left pixel, increase its shade by the smallest amount, and then the next one, decrease it by the smallest amount and so on throughout the whole image.
And you can encode a decent amount of data that way, like what system produced it, when, all that information. And actually the EFF has published some interesting research in the past on a similar system in laser printers where little yellow dots are embedded by certain laser printers, by most laser printers that you can get as an anti counterfeiting measure.

JASON KELLEY
This is one of our most popular discoveries that comes back every few years, if I remember right, because people are just gobsmacked that they can't see them, but they're there, and that they have this information. It's a really good example of how this works.

CINDY COHN
Yeah, and it's used to make sure that they can trace back to the printer that printed anything on the off chance that what you're printing is fake money.

JACOB HOFFMAN-ANDREWS
Indeed, yeah.
The other thing people really worry about is that AI will make it a lot easier to generate disinformation and then spread it and of course if you're generating disinformation it's useful to strip out the watermark. You would maybe prefer that people don't know it's AI. And so you're not limited to resizing or cropping an image. You can actually, you know, run it through a program. You can see what the shades of all the different pixels are. And you, in theory probably know what the watermarking system in use is. And given that degree of flexibility, it seems very, very likely - and I think past technology has proven this out - that it's not going to be hard to strip out the watermark. And in fact, it's not even going to be hard to develop a program to automatically strip out the watermark.

CINDY COHN
Yep. And you, you end up in a cat and mouse game where the people who you most want to catch, who are doing sophisticated disinformation, say to try to upset elections, are going to be able to either strip out the watermark or fake it and so you end up where the things that you most want to identify are probably going to trick people. Is that, is that the way you're thinking about it?

JACOB HOFFMAN-ANDREWS
Yeah, that's pretty much what I'm getting at. I wanted to say one more thing on, um, watermarking. I'd like to talk about chainsaw dogs. There's this popular genre of image on Facebook right now of a man and his chainsaw carved wooden dog and, often accompanied by a caption like, look how great my dad is, he carved this beautiful thing.
And these are mostly AI generated and they receive, you know, thousands of likes and clicks and go wildly viral. And you can imagine a weaker form of the disinformation claim of say, ‘Well, okay, maybe state actors will strip out watermarks so they can conduct their disinformation campaigns, but at least adding watermarks to AI images will prevent this proliferation of garbage on the internet.’
People will be able to see, oh, that's a fake. I'm not going to click on it. And I think the problem with that is even people who are just surfing for likes on social media actually love to strip out credits from artists already. You know, cartoonists get their signatures stripped out and in the examples of these chainsaw dogs, you know, there is actually an original.
There's somebody who made a real carving of a dog. It was very skillfully executed. And these are generated using kind of image to image AI, where you take an image and you generate an image that has a lot of the same concepts. A guy, a dog, made of wood and so they're already trying to strip attribution in one way.
And I think likely they would also find a way to strip any watermarking on the images they're generating.

CINDY COHN
So Jacob, we heard earlier about Kit's ideal world. I'd love to hear about the future world that Jacob wants us to live in.

JACOB HOFFMAN-ANDREWS
Yeah. I think the key thing is, you know, that people are safer in their daily lives than they are today. They're not worried about their livelihoods going away. I think this is a recurring theme when most new technology is invented that, you know, if it replaces somebody's job, and that person's job doesn't get easier, they don't get to keep collecting a paycheck. They just lose their job.
So I think in the ideal future, people have a means to live and to be fulfilled in their lives to do meaningful work still. And also in general, human agency is expanded rather than restricted. The promise of a lot of technologies that, you know, you can do more in the world, you can achieve the conditions you want in your life.

CINDY COHN
Oh that sounds great. I want to come back to you Kit. We've talked a little about Kitopia, including at the top of the show. Let's talk a little bit more. What else are we missing?

KIT WALSH
So in Kitopia, people are able to use AI if it's a useful part of their artistic expression, they're able to use AI if they need to communicate something visual when I'm hiring a concept artist, when I am getting a corrective surgery, and I want to communicate to the surgeon what I want things to look like.
There are a lot of ways in which words don't communicate as well as images. And not everyone has the skill or the time or interest to go and learn a bunch of photoshop to communicate with their surgeon. I think it would be great if more people were interested and had the leisure and freedom to do visual art.
But in Kitopia, that's something that you have because your basic needs are met. And in part, automation is something that should help us do that more. The ability to automate aspects of, of labor should wind up benefiting everybody. That's the vision of AI in Kitopia.

CINDY COHN
Nice. Well that's a wonderful place to end. We're all gonna pack our bags and move to Kitopia. And hopefully by the time we get there, it’ll be waiting for us.
You know, Jason, that was such a rich conversation. I'm not sure we need to do a little recap like we usually do. Let's just close it out.

JASON KELLEY
Yeah, you know, that sounds good. I'll take it from here. Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we would love to hear from you. You can visit EFF.org slash podcasts to click on listener feedback and let us know what you think of this or any other episode.
You can also get a transcript or information about this episode and the guests. And while you're there of course, you can become an EFF member, pick up some merch, or just see what's happening in digital rights this or any other week. This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.
In this episode, you heard Kalte Ohren by Alex featuring starfrosch & Jerry Spoon; lost Track by Airtone; Come Inside by Zep Hume; Xena's Kiss/Medea's Kiss by MWIC; Homesick By Siobhan D and Drops of H2O ( The Filtered Water Treatment ) by J.Lang. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

 

VMware Issues Patches for Cloud Foundation, vCenter Server, and vSphere ESXi

By: Newsroom
18 June 2024 at 04:24
VMware has released updates to address critical flaws impacting Cloud Foundation, vCenter Server, and vSphere ESXi that could be exploited to achieve privilege escalation and remote code execution. The list of vulnerabilities is as follows - CVE-2024-37079 & CVE-2024-37080 (CVSS scores: 9.8) - Multiple heap-overflow vulnerabilities in the implementation of the DCE/RPC protocol that could

Singapore Police Extradites Malaysians Linked to Android Malware Fraud

By: Newsroom
18 June 2024 at 03:38
The Singapore Police Force (SPF) has announced the extradition of two men from Malaysia for their alleged involvement in a mobile malware campaign targeting citizens in the country since June 2023. The unnamed individuals, aged 26 and 47, engaged in scams that tricked unsuspecting users into downloading malicious apps onto their Android devices via phishing campaigns with the aim of stealing

Key Takeaways From Horizon3.ai’s Analysis of an Entra ID Compromise

As enterprises shift from on-premises to cloud systems, hybrid cloud solutions have become essential for optimizing performance, scalability, and ease of use. However, risks arise when poorly configured environments connect to the cloud. A compromised Microsoft Active Directory can fully compromise a synchronized Microsoft Entra ID tenant, undermining the integrity and trust of connected services.

The post Key Takeaways From Horizon3.ai’s Analysis of an Entra ID Compromise appeared first on Security Boulevard.

Linux Malware Campaign Uses Discord Emojis in Attack on Indian Government Targets

Discord emojis used in cyber attack

Cybersecurity researchers are tracking a novel Linux malware campaign that makes use of Discord emojis for command and control (C2) communication with attackers. The campaign’s unusual combination of Linux malware and phishing lures suggests an attack aimed at Linux desktop users, the researchers from Volexity said. “Volexity assesses it is highly likely this campaign, and the malware used, is targeted specifically towards government entities in India, who use a custom Linux distribution named BOSS as their daily desktop,” they wrote.

Threat Actor ‘UTA0137’ Linked to Campaign

Volexity researchers connected the campaign to a Pakistan-based threat actor they call UTA0137. The researchers said they have “high confidence that UTA0137 has espionage-related objectives and a remit to target government entities in India. Based on Volexity’s analysis, UTA0137’s campaigns appear to have been successful.” The researchers say they have “moderate confidence” that UTA0137 is a Pakistan-based threat actor because of the group’s targets and a few other reasons:
  • The Pakistani time zone was hardcoded in one malware sample.
  • There are weak infrastructure links to SideCopy, a known Pakistan-based threat actor.
  • The Punjabi language was used in the malware.
The group's malware is built on a modified version of the discord-c2 GitHub project for its Discord command-and-control (C2) communication. The malware, dubbed DISGOMOJI by the researchers, is written in Golang and compiled for Linux systems. The threat actors also use the DirtyPipe (CVE-2022-0847) privilege escalation exploit against “BOSS 9” systems, which remain vulnerable to the exploit.
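
As a quick triage aid for the DirtyPipe angle, the following Python sketch compares the running kernel version against the commonly cited affected range for CVE-2022-0847 (introduced in 5.8, fixed in 5.16.11, 5.15.25 and 5.10.102). Those cut-off versions and the check itself are assumptions drawn from public advisories; distributions backport fixes, so your vendor's own patch status remains authoritative.

    import platform
    import re

    # Commonly cited DirtyPipe (CVE-2022-0847) range; treat a hit here as
    # "check your distribution's advisory", not as proof of vulnerability.
    FIXED_PATCH_LEVEL = {(5, 16): 11, (5, 15): 25, (5, 10): 102}

    def kernel_version() -> tuple:
        match = re.match(r"(\d+)\.(\d+)\.(\d+)", platform.release())
        if not match:
            raise ValueError(f"unrecognised kernel release: {platform.release()!r}")
        return tuple(int(part) for part in match.groups())

    def possibly_vulnerable(version: tuple) -> bool:
        major, minor, patch = version
        if (major, minor) < (5, 8):
            return False                     # bug introduced in 5.8
        fixed = FIXED_PATCH_LEVEL.get((major, minor))
        if fixed is not None:
            return patch < fixed             # stable series with a known fix level
        return (major, minor) < (5, 17)      # other affected series without a listed fix

    if __name__ == "__main__":
        version = kernel_version()
        label = "possibly vulnerable to DirtyPipe" if possibly_vulnerable(version) else "outside the affected range"
        print(".".join(map(str, version)), "-", label)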

Attack Starts With DSOP PDF

The malware is delivered via a DSOP.pdf lure, which claims to be a beneficiary document of India’s Defence Service Officer Provident Fund (screenshot below). [caption id="attachment_77503" align="alignnone" width="750"]DSOP phishing lure The DSOP lure that downloads the malware[/caption] The malware then downloads the next-stage payload, named vmcoreinfo, from a remote server, clawsindia[.]in. The payload is an instance of the DISGOMOJI malware and is dropped in a hidden folder named .x86_64-linux-gnu in the user’s home directory. DISGOMOJI, a UPX-packed ELF written in Golang, uses Discord for C2. “An authentication token and server ID are hardcoded inside the ELF, which are used to access the Discord server,”  they wrote. “The malware creates a dedicated channel for itself in the Discord server, meaning each channel in the server represents an individual victim. The attacker can then interact with every victim individually using these channels.” On startup, DISGOMOJI sends a check-in message in the channel that contains information like the internal IP, the user name, host name, OS and current working directory. The malware can survive reboots through the addition of a @reboot entry to the crontab, and it also downloads a script named uevent_seqnum.sh to copy files from any attached USB devices.
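
Defenders looking for the host-level artifacts described above could start with a simple sweep like the Python sketch below, which checks home directories for the hidden .x86_64-linux-gnu folder and the uevent_seqnum.sh helper and lists @reboot crontab entries. It is a minimal triage sketch rather than a detection tool: the paths come from the indicators reported here, and any hit still needs manual verification.

    import subprocess
    from pathlib import Path

    SUSPECT_DIR = ".x86_64-linux-gnu"      # hidden staging folder reported for DISGOMOJI
    SUSPECT_SCRIPT = "uevent_seqnum.sh"    # USB-copy helper script reported for DISGOMOJI

    def sweep_homes() -> list:
        findings = []
        for home in Path("/home").iterdir():
            if not home.is_dir():
                continue
            if (home / SUSPECT_DIR).exists():
                findings.append(f"hidden folder {SUSPECT_DIR} under {home}")
            for hit in home.rglob(SUSPECT_SCRIPT):
                findings.append(f"suspicious script: {hit}")
        return findings

    def reboot_cron_entries() -> list:
        # Only checks the invoking user's crontab; repeat per user for full coverage.
        try:
            result = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
        except FileNotFoundError:
            return []
        return [f"crontab entry: {line}" for line in result.stdout.splitlines()
                if line.strip().startswith("@reboot")]

    if __name__ == "__main__":
        for finding in sweep_homes() + reboot_cron_entries():
            print(finding)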

Discord Emojis Used for C2 Communication

C2 communication uses an emoji-based protocol, “where the attacker sends commands to the malware by sending emojis to the command channel, with additional parameters following the emoji where applicable.” A Clock emoji in the command message lets the attacker know a command is being processed, while a Check Mark emoji confirms that the command was executed. The researchers summarized the emoji commands in a table: [caption id="attachment_77505" align="alignnone" width="750"]Discord emoji malware The Discord emojis used to communicate with attackers (source: Volexity)[/caption] Post-exploitation activities include use of the Zenity utility to display malicious dialog boxes to socially engineer users into giving up their passwords. Open source tools such as Nmap, Chisel and Ligolo are also used, and the DirtyPipe exploit suggests increasing sophistication of the attacker's methods, the researchers said. Indicators of compromise (IoCs) can be downloaded from the Volexity GitHub page.

Akira Ransomware Claims Attack on TETRA Technologies, 40GB of Sensitive Data at Risk

TETRA Technologies cyberattack

TETRA Technologies, Inc., a diversified oil and gas services company operating through divisions including Fluids, Production Testing, Compression, and Offshore, has reportedly fallen victim to the Akira ransomware group. This TETRA Technologies cyberattack has put crucial data at risk, including personal documents like passports, birth certificates, and driver’s licenses, as well as confidential agreements and NDAs. The threat actor responsible for the attack has indicated their intention to release approximately 40GB of sensitive data. Despite these claims, TETRA Technologies has not yet issued an official statement confirming or denying the breach.

Decoding the TETRA Technologies Cyberattack Claim by Akira Ransomware

[caption id="attachment_77529" align="alignnone" width="716"]TETRA Technologies Cyberattack Source: Dark Web[/caption] The Cyber Express has reached out to the organization to learn more about this TETRA Technologies cyberattack. However, at the time of writing this, no official statement or response has been received, leaving the claims for the TETRA Technologies cyberattack unconfirmed. While the company’s public-facing website appears to be operational, it is speculated that the attack may have targeted internal systems or backend infrastructure rather than causing a visible disruption like a DDoS attack or website defacement. The threat actor behind this attack, Akira ransomware, has emerged as a significant threat in cybersecurity, highlighted by the Cybersecurity and Infrastructure Security Agency (CISA) warning and its widespread impact across various industries worldwide. Known for a dual extortion tactic involving data exfiltration and encryption, Akira ransomware demands ransom payments to prevent data publication on their dark website and to receive decryption keys. The group's name references a 1988 anime film, and they use specific strings like "*.akira" and "akira_readme.txt" for detection. 

TETRA Technologies Details Processes for Managing Cybersecurity Risks and Governance

In its recent regulatory filings, specifically the Form 10-K filed on February 27, 2024, TETRA Technologies detailed its cybersecurity risk management and governance processes. These include ongoing risk assessments, incident response planning, and cybersecurity training programs for employees. The company acknowledges the persistent evolution of cyber threats and emphasizes the importance of maintaining robust defenses against potential attacks. The Vice President of Information Technology leads TETRA Technologies’ cybersecurity initiatives, supported by a comprehensive framework to assess, identify, and manage cybersecurity risks across the company’s operations. Regular updates and enhancements to its security protocols are integral to adapting to emerging threats and complying with regulatory standards. The Board of Directors and Audit Committee provide oversight on cybersecurity matters, receiving periodic updates on the company’s cybersecurity risk profile and incident response capabilities. Management highlighted its commitment to safeguarding sensitive information and maintaining operational continuity despite the challenges posed by cyber threats.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Phishing Attack at Los Angeles County Department of Public Health Leads to Major Data Breach


The Los Angeles County Department of Public Health (DPH) has disclosed a significant data breach impacting more than 200,000 individuals. The data breach at Los Angeles County DPH, occurring between February 19 and 20, 2024, involved the theft of sensitive personal, medical, and financial information. The data breach was initiated through a phishing attack, where an external threat actor obtained the login credentials of 53 DPH employees. “Between February 19, 2024, and February 20, 2024, DPH experienced a phishing attack,” reads the official notice.

Data Breach at Los Angeles County DPH: What Happened

The phishing email, designed to appear legitimate, tricked employees into divulging their credentials by clicking on a malicious link. This unauthorized access led to a wide-ranging compromise of data, affecting various individuals associated with DPH, including clients, employees, and others. The compromised email accounts contained a wealth of sensitive data. The potentially exposed information includes:
  • First and last names
  • Dates of birth
  • Diagnosis and prescription details
  • Medical record numbers/patient IDs
  • Medicare/Med-Cal numbers
  • Health insurance information
  • Social Security numbers
  • Other financial information
Not all of the data elements listed above were present for every affected individual; each person may have been impacted differently based on the specific information contained in the compromised accounts. “Affected individuals may have been impacted differently and not all of the elements listed were present for each individual,” Los Angeles County DPH said in its notice.

Data Breach at Los Angeles County DPH: Notification Efforts

DPH is taking extensive steps to notify all potentially affected individuals. Notifications are being sent by mail to those whose mailing addresses are available; for individuals without a mailing address on file, DPH has posted a notice on its website with the necessary information and resources. The department has advised impacted individuals to review the content and accuracy of their medical records with their healthcare providers. Explaining the delay in notification, Los Angeles County DPH said, “Due to an investigation by law enforcement, we were advised to delay notification of this incident, as public notice may have hindered their investigation.” To help protect against potential misuse of their information, DPH is offering one year of free identity monitoring services through Kroll. “To help relieve concerns and restore confidence following this incident, we have secured the services of Kroll, a global leader in risk mitigation and response, to provide identity monitoring for one year at no cost to affected clients,” reads the notice.

Response and Preventive Measures

Upon discovering the Los Angeles County DPH data breach, DPH took immediate action to mitigate further risks. The department disabled the affected email accounts, reset and re-imaged the users’ devices, blocked the websites involved in the phishing campaign, and quarantined all suspicious incoming emails. Additionally, DPH has implemented numerous security enhancements to prevent similar incidents in the future. Awareness notifications have been distributed to all workforce members, reminding them to be vigilant when reviewing emails, especially those containing links or attachments. These measures aim to bolster the department’s defenses against phishing attacks and other cyber threats. The incident was promptly reported to law enforcement authorities, who investigated the breach. The US Department of Health and Human Services’ Office for Civil Rights and other relevant agencies were also notified, as required by law and contractual obligations.

Steps for Individuals to Protect Themselves

While DPH cannot confirm whether any information has been accessed or misused, affected individuals are encouraged to take proactive steps to protect their personal information. These steps include:
  • Reviewing Medical Records: Individuals should review their medical records and Explanation of Benefits statements for any discrepancies or unauthorized services. Any irregularities should be reported to their healthcare provider or health plan.
  • Requesting Credit Reports: Individuals should remain vigilant against identity theft and fraud by regularly reviewing their financial statements and credit reports. Under US law, individuals are entitled to one free credit report annually from each of the three major credit reporting bureaus: Equifax, Experian, and TransUnion. Free credit reports can be requested at www.annualcreditreport.com or by calling 1-877-322-8228.
  • Placing Fraud Alerts: Individuals can place a fraud alert on their credit files, which notifies creditors to take additional steps to verify identity before granting credit. Fraud alerts can be set up by contacting any of the major credit bureaus.
  • Security Freezes: A security freeze can also be placed on credit reports, which prevents credit bureaus from releasing any information without written authorization. This measure can help prevent unauthorized credit activity but may delay the approval of new credit requests.
The Los Angeles County Department of Public Health continues to cooperate with law enforcement and other agencies to protect the privacy and security of its clients, employees, and other stakeholders.