
Got WiFi? Will Spy

By: CMcG
16 April 2024 at 20:40
"anyone from a landlord to a laundromat – could be required to help the government spy." (Guardian) The Guardian covers the Houses expansion of Section 702 of the Foreign Intelligence Surveillance Act (FISA). Although presented as a re-authorization, "The Turner-Himes amendment – so named for its champions Representatives Mike Turner and Jim Himes – would permit federal law enforcement to also force "any other service provider" with access to communications equipment to hand over data." [disclaimer: I am related to the author of this article.]

Intelligence agencies argue that FISA is crucial to preventing terrorist attacks. Meanwhile, Rolling Stone reports that Seth Stern, director of advocacy at the Freedom of the Press Foundation, warns the House bill "would let intelligence agencies commandeer countless American businesses and individuals to spy on journalists and their sources on the government's behalf."

NSA Issues Cybersecurity Guidance for Secure AI Deployment


The National Security Agency (NSA) is taking a proactive stance in cybersecurity with the release of a Cybersecurity Information Sheet (CSI) titled “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” This initiative underlines the growing importance of securing artificial intelligence (AI) systems in the face of evolving cyber threats.

Dave Luber, National Security Agency Cybersecurity Director, emphasized the significance of AI in today’s landscape, acknowledging both its potential benefits and the security challenges it poses. He stated, “AI brings unprecedented opportunity, but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis.”

NSA Collaborative Effort

The CSI, a collaborative effort involving the National Security Agency's Artificial Intelligence Security Center (AISC) and several international partners, including the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI), aims to provide guidance to National Security System owners and Defense Industrial Base companies deploying AI systems developed by external entities. While initially targeted at national security applications, the guidance holds relevance for any organization integrating AI capabilities into managed environments, particularly those operating in high-threat, high-value sectors. It builds upon previously released guidelines, signaling a concerted effort to address emerging security challenges in AI development and deployment.

This release marks a significant milestone for the AISC, established by the National Security Agency in September 2023 as part of the Cybersecurity Collaboration Center (CCC). The center's mission encompasses detecting and countering AI vulnerabilities, fostering partnerships with industry stakeholders, academia, and international allies, and promoting best practices to enhance the security of AI systems.

Future Directions

Looking ahead, the AISC plans to collaborate with global partners to develop a comprehensive series of guidance on various aspects of AI security. These topics include data security, content authenticity, model security, identity management, model testing and red teaming, incident response, and recovery. By addressing these critical areas, the NSA aims to enhance the confidentiality, integrity, and availability of AI systems, staying ahead of adversaries' tactics and techniques.

The release of the CSI reflects a broader commitment to cybersecurity and highlights the importance of collaboration in defending against cyber threats. As AI continues to reshape industries and society at large, ensuring the security of these systems is paramount to safeguarding sensitive data, critical infrastructure, and national security interests. With the rapid evolution of AI technology, ongoing collaboration and proactive security measures will be essential to mitigate emerging risks and maintain trust in AI-driven solutions.

The National Security Agency's guidance serves as a foundation for organizations to enhance the resilience of their AI systems and adapt to the evolving threat landscape. In an era defined by digital transformation and unprecedented connectivity, securing AI systems is not merely a technical challenge but a strategic imperative. By leveraging collective expertise and resources, stakeholders can navigate the complexities of AI security and foster a safer, more resilient digital ecosystem for all.

Declassified NSA Newsletters

2 April 2024 at 13:05

Through a 2010 FOIA request (yes, it took that long), we have copies of the NSA’s KRYPTOS Society Newsletter, “Tales of the Krypt,” from 1994 to 2003.

There are many interesting things in the 800 pages of newsletters. There are many redactions. And a 1994 review of Applied Cryptography by a redacted author:

Applied Cryptography, for those who don’t read the internet news, is a book written by Bruce Schneier last year. According to the jacket, Schneier is a data security expert with a master’s degree in computer science. According to his followers, he is a hero who has finally brought together the loose threads of cryptography for the general public to understand. Schneier has gathered academic research, internet gossip, and everything he could find on cryptography into one 600-page jumble.

The book is destined for commercial success because it is the only volume in which everything linked to cryptography is mentioned. It has sections on such diverse topics as number theory, zero knowledge proofs, complexity, protocols, DES, patent law, and the Computer Professionals for Social Responsibility. Cryptography is a hot topic just now, and Schneier stands alone in having written a book on it which can be browsed: it is not too dry.

Schneier gives prominence to applications with large sections on protocols and source code. Code is given for IDEA, FEAL, triple-DES, and other algorithms. At first glance, the book has the look of an encyclopedia of cryptography. Unlike an encyclopedia, however, it can’t be trusted for accuracy.

Playing loose with the facts is a serious problem with Schneier. For example, in discussing a small-exponent attack on RSA, he says “an attack by Michael Wiener will recover e when e is up to one quarter the size of n.” Actually, Wiener’s attack recovers the secret exponent d when e has less than one quarter as many bits as n, which is a quite different statement. Or: “The quadratic sieve is the fastest known algorithm for factoring numbers less than 150 digits…. The number field sieve is the fastest known factoring algorithm, although the quadratic sieve is still faster for smaller numbers (the break even point is between 110 and 135 digits).” Throughout the book, Schneier leaves the impression of sloppiness, of a quick and dirty exposition. The reader is subjected to the grunge of equations, only to be confused or misled. The large number of errors compounds the problem. A recent version of the errata (Schneier publishes updates on the internet) is fifteen pages and growing, including errors in diagrams, errors in the code, and errors in the bibliography.

Many readers won’t notice that the details are askew. The importance of the book is that it is the first stab at putting the whole subject in one spot. Schneier aimed to provide a “comprehensive reference work for modern cryptography.” Comprehensive it is. A trusted reference it is not.

Ouch. But I will not argue that some of my math wasn’t sloppy, especially in the first edition (with the blue cover, not the red cover).
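The correction the reviewer makes is worth seeing concretely: Wiener’s attack recovers a small private exponent d (roughly, d smaller than about n^(1/4)) from the public key (e, n) alone, using the continued-fraction convergents of e/n. Below is a minimal Python sketch of that idea; the toy parameters (n = 90581, e = 17993, with hidden d = 5) are a standard textbook example, not taken from the newsletter:

```python
from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents (numerator, denominator) of num/den."""
    cf = []
    while den:
        a, r = divmod(num, den)
        cf.append(a)
        num, den = den, r
    h0, h1 = 1, cf[0]
    k0, k1 = 0, 1
    yield h1, k1
    for a in cf[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield h1, k1

def wiener_attack(e, n):
    """Try to recover a small RSA private exponent d from (e, n).

    Each convergent k/d of e/n is a candidate for the relation
    e*d - 1 = k*phi(n). A candidate is confirmed when the implied
    phi makes x^2 - (n - phi + 1)x + n factor over the integers,
    i.e. when its discriminant is a perfect square.
    """
    for k, d in convergents(e, n):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = n - phi + 1          # candidate for p + q
        disc = s * s - 4 * n
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

# Toy key with a deliberately small d: n = 239 * 379, d = 5
print(wiener_attack(17993, 90581))  # → 5
```

The check on the discriminant is what makes a convergent a proof rather than a guess: a correct phi forces p and q to be the integer roots of the quadratic. For a properly generated key, d is far too large for any convergent to satisfy the test, which is exactly why the bound on d (not on e) is the operative condition.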

A few other highlights:

  • 1995 Kryptos Kristmas Kwiz, pages 299–306
  • 1996 Kryptos Kristmas Kwiz, pages 414–420
  • 1998 Kryptos Kristmas Kwiz, pages 659–665
  • 1999 Kryptos Kristmas Kwiz, pages 734–738
  • Dundee Society Introductory Placement Test (from questions posed by Lambros Callimahos in his famous class), pages 771–773
  • R. Dale Shipp’s Principles of Cryptanalytic Diagnosis, pages 776–779
  • Obit of Jacqueline Jenkins-Nye (Bill Nye the Science Guy’s mother), pages 755–756
  • A praise of Pi, pages 694–696
  • A rant about Acronyms, pages 614–615
  • A speech on women in cryptology, pages 593–599

NSA Buying Bulk Surveillance Data on Americans without a Warrant

30 January 2024 at 07:12

The NSA finally admitted to buying bulk data on Americans from data brokers, in response to a query by Senator Wyden.

This is almost certainly illegal, although the NSA maintains that it is legal until it’s told otherwise.

Some news articles.

In conversation: Bruce Schneier on AI-powered mass spying

29 January 2024 at 11:25

For decades, governments and companies have surveilled the conversations, movements, and behavior of the public.

And then the internet came along and made that a whole lot easier.  

Today, search engines collect our queries, browsers collect our device information, smartphones collect our locations, social media platforms collect our conversations with one another, and, depending on the country, governments either collect that same information from the companies that maintain it, or they gather it directly themselves by monitoring their people.

That’s a lot of data that, until now, has been difficult to parse at scale.

But cryptographer and computer security professional Bruce Schneier believes that’s going to change, all because of the advent of artificial intelligence.

Already equipped with reams of data, governments and companies will now be able to ask AI models to make conclusions from the data, Schneier said, supercharging the human work of inquiry with the limitless scale of AI.

“Spying is limited by the need for human labor,” Schneier wrote. “AI is about to change that.”

In January, Senior Privacy Advocate David Ruiz (and host of the Lock and Code podcast) spoke with Schneier about the future of AI-enabled mass spying—the implications, the responses from the public, and whether there’s any hope in dismantling a system so deeply embedded in the modern world.

Below is an edited transcript of their conversation.

DAVID RUIZ: We know that mass surveillance has this “Collect it all” mentality—of the NSA, obviously, but also from companies that gather clicks and shares and locations and app downloads and all of that.

What is the mass spying equivalent of that?

BRUCE SCHNEIER: So, it’s “Collect it all,” but it’s “Collect different things.”

Let’s step back here.

We kind of know what mass surveillance, mass spying looks like. It’s the kind of thing we saw in former East Germany, where like 10 percent of the population spied on 90 percent of the population. And it was incredibly manpower intensive.

Now, we moved into a world of automatic mass surveillance with the rise of the internet and the rise of cheap data storage and processing. This is maybe 20 years ago, when companies and the NSA start collecting mass surveillance data, and that is metadata—the kind of data that Snowden talked about—data about data. Data from your phone, from your browser, who you spoke to, where you went, what you spent money on, what websites you visited, that’s kind of mass surveillance data.

Spying, the difference I’m making, is about conversations.

And while computers were easily able to interpret location data and data about who you spoke to—your email “from” and “to” lines—they weren’t able to interpret what you said. Voice conversations, text conversations, email conversations, they could do keyword searches, and if you think about it, Chinese censorship is based pretty naively on keyword searches. [For instance,] you say the bad word, you get censored. That’s still very primitive.

Spying—listening in on a conversation, knowing what people are saying—required human beings, and there weren’t enough humans to listen to everything. So, you had this automatic surveillance: We can know where everybody is, that’s easy.  But knowing what everybody’s saying, we still couldn’t do—we didn’t have the people.

As AI becomes increasingly capable of understanding and even having conversations, it can start doing the role that people used to do and engage in this kind of spying at a mass level.

Now, the question you asked that started me on this monologue is: what does that look like? And the answer is: We have no idea.

We kind of know what East Germany looked like, former Soviet Union. We do know what some of these countries look like. But I don’t think it’s the same. It is not this kind of mass spying by corporations. Or by democratic governments. So, we don’t know. But it’s worth at least thinking about. 

DAVID RUIZ: When I was trying to answer my own question, I feel like one of the potential futures here is, instead of “Collect it all,” it’s this attempt at “Know it all.”

BRUCE SCHNEIER: It always was “Know it all,” but now it becomes more possible.

DAVID RUIZ: Yeah, and I was hypothesizing that there might be companies that say “We have created a package of questions that we use on AIs that we know and trust to discover these new insights,” and [they] sell these packages to companies that are trying to find out XYZ and ABC about their customers, or governments about their persons.

And that already sounds awful. It sounds both boring and dreadful, which is what I think surveillance is today—it is boring operationally in terms of [how] we don’t see data brokers. We don’t see what gets collected every single day behind the interface of the web. But it is used so much to power the direction of the web.

BRUCE SCHNEIER: And my guess is that’s right. And it’s not actually boring. Certainly, the companies wanted you to think that. It is very hidden. Data brokers spent a lot of effort making sure that there’s no visibility into their industry, so we don’t know the data being collected on us.

But remember, this is still all surveillance data. It’s not the contents of your phone calls, it’s the billing information. It’s who you called, what time, how long you spoke, that kind of information. Maybe the location of your two phones when you were talking. This brings it up to another level. 

DAVID RUIZ: You mentioned that this is the conversations—the content of communications. And I wanted to address that immediately because I think whenever spying is discussed, a lot of folks picture someone rifling through text messages or emails. And I think some people might balk at that. They might reject the premise, saying things like “Oh, Apple can’t look at my iMessages, and Google said it wouldn’t advertise based on email content some years ago, and my phone provider, I don’t think, is recording my phone calls.”

So, my question after those kinds of claims, is: Is that too narrow a lens to understand this future of mass spying? What am I missing when I say those things to calm myself down?

BRUCE SCHNEIER: I think it’s all economics. Google did realize that eavesdropping on the contents of your Gmail wasn’t terribly useful. So, they didn’t do it. Of course, they made a virtue out of it. That stuff will change.

We’re already seeing [the questions of] “Is this company using your data to train their AI? Will it be used to influence how an AI interacts with you?” [And] already we can see that the tones of voice we use with these AI chatbots are mirrored. They say they’re not storing it and using it for training—we don’t know the answer.

As this stuff becomes more valuable, as companies see a business interest in using it, of course it’s going to be used. It’s not going to be a world where we [can] rely on the goodness of the hearts of for-profit corporations not to abuse our privacy.

So, it’s a matter of how cheap it is. Is it going to become cheap for Google to eavesdrop on the contents of everyone’s email conversations, or for Zoom to eavesdrop on all meetings? They currently say they’re not [monitoring meetings in] their terms of service, they’ll probably [receive] push back if they change it—they were already pushed back because there was a thought that Zoom would be using your information to train an AI. [It] turned out to not be true, but there was a little panic there. But how long does that last?

We saw this with Facebook and their surveillance over the years and decades.

They did a new thing, there was a pushback, there’s an outcry, and after a couple months, it was the new normal. So, I don’t know where this is going to go, but the tech is moving to the point where this stuff is possible and, in fact, easy. 

DAVID RUIZ: We talked about the business case for [AI-enabled mass spying] and I wanted to ask more broadly: Who will engage in AI mass spying?

BRUCE SCHNEIER: The answer is going to be everybody who can.

Who are the major players here? Certainly there are corporations who are doing it for manipulation purposes. Surveillance is the business model of the internet because advertising is the business model of the internet, and advertising is convincing you to do something that you might not have done otherwise.

So, this surveillance-based manipulation is the business model, and anything that gives a company an advantage, they’re going to do. We [already] saw that with personalized advertising [which is] based on characteristics that you might not be happy sharing.

So, companies will do that.

Now, in the West, we tend not to have governments do this on their own citizens—they rely on corporations. The US government doesn’t spy on us directly, they get the spying data from corporations who do that to everybody.

You go to other countries, more totalitarian countries, and governments do engage in surveillance and will engage in spying. Here, I’m thinking about China, but other countries as well. So, very much think of it in terms of power:

Those that have the power will engage in the behaviors because it magnifies their power. There is no surprise. 

DAVID RUIZ: There’s a lot of times where we could think “Why would a company do this? Why would a government do this?” And there’s a lot of alleged reasons. The NSA will say that it collects as much data as possible for national security reasons, but they’ve been saying that for decades, and the national security reason seems to change every few years, right? There’s a, there’s a “national security risk du jour.”

And it’s much easier and simpler to think of it as those that have power hope to keep that power and amass more of it.

I wanted to separately ask: We talked about corporations. We talked about governments. What about people? I have access to AI tools in a way that I don’t have access to data collection regimes.

So, are people going to spy on people?

BRUCE SCHNEIER:  People are already spying on people. It’s more one-on-one. There’s an entire industry of super creepy spyware that is sold to people who want to spy on their wives and girlfriends. This is kind of a gross industry, but it exists and it is used in many countries.

So, yes. But here again, the power matters, and it’s generally the more powerful spying on the less powerful in a relationship. It’s generally the man spying on the woman because that’s the power imbalance.

When you think about bulk surveillance, it’s access to the data.

I really can’t spy on my neighborhood. Sure, I can set up a camera in front of my house and watch what’s going on in my street, but that’s all I could do. I don’t have the power to put cameras everywhere. I don’t have the power to get your email or your Zoom stream.

So, I think you are going to see some use of these technologies by people who are not traditionally powerful. But in general, like all these technologies, they benefit the powerful more than the less powerful.

DAVID RUIZ: We know that people act differently when they know they’re being surveilled—will that also apply to mass spying?

BRUCE SCHNEIER: Certainly, people in the United States have lived under this kind of regime post-9/11. Lots of Muslims lived in a world where they were being spied on a lot.

Lots of people—Jon Penney comes to mind, there are others—have researched how the feeling that you’re under constant surveillance—or spying, they’re both the same here—leads to self-censorship.

If you think you are being watched all the time, you behave differently. We do know that there’s an enormous chilling effect on how you behave, what you do, on your conformity. And if you think you’re being watched all the time, you tend to conform. You’re not going to do something different. You’re not going to stand out.

This seems really bad for society because that’s how society innovates. A world where people don’t try new things is a world that stagnates.

To take an easy example, about 10 years ago, I forget the year, gay marriage became legal in the United States, in all 50 states. It became the law of the land and that change was the result of a multi-decade process. For a while it was illegal and immoral, then it was illegal and moral, and then it became legal and moral.  And in order for that to happen—in order for that whole progression—somebody way back in the beginning had to try gay sex once and say: “You know, that wasn’t so bad. That was kind of fun.”

And the reason they were able to do that is that they weren’t being watched. They could do it in the privacy of their own bedroom and no one could stop them.

If you live in a world where, whether it’s gay sex or marijuana or whatever it is that becomes the mainstream moral norm over several generations, if you can’t try it, if there can’t be a counterculture of doing it, it never will become the social norm. If everybody who tried pot in the ‘50s was immediately discovered and arrested, you’d never get to legalization. You never would get a generation of people that would say, “You know, that’s not that bad. Why are we criminalizing this kind of not harmful drug?”

DAVID RUIZ: On the current environment that we live in, where so much surveillance happens every day—again, both from corporations and from governments—it doesn’t seem like there’s any way to dismantle it. And it feels like the potential of mass spying to produce even deeper insights means that we’ve lost the battle on surveillance. Have we lost that battle?

BRUCE SCHNEIER: I never think it’s too late, and that kind of fatalism doesn’t make sense when you look at history. It’s like saying in the 1200s, “Well, we tried to fight against monarchy. I guess that didn’t work. It’s too late.” Or centuries later, “You know, we tried to fight against slavery, that didn’t work.” Or, “You know, we’ll never give women the vote. That ship has sailed.”

We, as a species, regularly make our society more moral, more ethical, more egalitarian. It’s slow, it’s bursty, but decade over decade, century over century, we are improving.

So, no, I don’t think from now until the end of our species, the level of surveillance we see today cannot be rolled back. I think that is ridiculous.

We no longer send five-year-olds up chimneys to clean them. We don’t do that. We changed. We no longer allow companies to sell pajamas that catch on fire. We changed. We can do that here.

Like the other big things, like monarchy, like slavery, like the patriarchy, these things are going to be hard to dismantle, but they are dismantlable.

Near term, I think you’re right. Near term, both companies and governments are just punch-drunk on our data, and they’re not going to give it up. But long term, lots of things are possible and will happen. 

DAVID RUIZ: I mean this as a high compliment: You are the most optimistic guest we’ve had on the podcast.

BRUCE SCHNEIER: I’m near term pessimistic and long term optimistic. Near term, I think we’re screwed. The tech monopolies are so powerful, and we saw that with social media. Both Republicans and Democrats agreed—this never happened before—that Facebook was harming our society in different ways, but it’s harming society. [They] called Zuckerberg in, called the other companies in, yelled at them, [said] something must be done, and nothing was done.

That is sheer lobbying power right there in operation.

So, near term, I don’t see any solution, but our species has handled harder problems than this. This won’t be the one that stumps us.

DAVID RUIZ: What do we do at this point? 

BRUCE SCHNEIER: I want this to be a political issue. This stuff changes when it becomes an issue that voters care about. If there is a debate question on this, if this becomes something that politicians are asked about, then change will happen, right? If it isn’t, then it is really just the lobbyists that get to decide what happens.

“What should we do?” is agitate for change. Make this political, make this something that politicians can’t ignore.

Where change is happening is the EU. You have listeners in the EU, and they will know that things are happening there. Right now, Europe is the regulatory superpower on the planet. They are the jurisdiction where we got a comprehensive data privacy law, where they are passing an AI security law, stuff that you would never see in the United States.

So, look outside the US right now, but make this political. That’s how we’re going to make it better.

But we’re fighting uphill. It’s very hard in the United States to enact policies that the money doesn’t want. Money gets its way in US policy. And the money wants this.

DAVID RUIZ: And I think disentangling money from politics in the United States is a different [conversation], and unfortunately we don’t have the time for it.

BRUCE SCHNEIER: No, surely you can solve that in half an hour.

DAVID RUIZ: Actually, are you booked for the next half hour?

BRUCE SCHNEIER: If I was able to solve that, I would be not doing what I’m doing now, because, you’re right, that might not be the most important problem, but as Professor Larry Lessig said, it’s the first problem. It’s a problem we need to solve to solve every other problem. 
