Reading view

There are new articles available, click to refresh the page.

How AI Will Change Democracy

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale.

It gets interesting when changes in degree can become changes in kind. High-speed trading is fundamentally different than regular human trading. AIs have invented fundamentally new strategies in the game of Go. Millions of AI-controlled social media accounts could fundamentally change the nature of propaganda.

It’s these sorts of changes and how AI will affect democracy that I want to talk about.

To start, I want to list some of AI’s core competences. First, it is really good as a summarizer. Second, AI is good at explaining things, teaching with infinite patience. Third, and related, AI can persuade. Propaganda is an offshoot of this. Fourth, AI is fundamentally a prediction technology. Predictions about whether turning left or right will get you to your destination faster. Predictions about whether a tumor is cancerous might improve medical diagnoses. Predictions about which word is likely to come next can help compose an email. Fifth, AI can assess. Assessing requires outside context and criteria. AI is less good at assessing, but it’s getting better. Sixth, AI can decide. A decision is a prediction plus an assessment. We are already using AI to make all sorts of decisions.

How these competences translate to actual useful AI systems depends a lot on the details. We don’t know how far AI will go in replicating or replacing human cognitive functions. Or how soon that will happen. In constrained environments it can be easy. AIs already play chess and Go better than humans. Unconstrained environments are harder. There are still significant challenges to fully AI-piloted automobiles. The technologist Jaron Lanier has a nice quote, that AI does best when “human activities have been done many times before, but not in exactly the same way.”

In this talk, I am going to be largely optimistic about the technology. I’m not going to dwell on the details of how the AI systems might work. Much of what I am talking about is still in the future. Science fiction, but not unrealistic science fiction.

Where I am going to be less optimistic—and more realistic—is about the social implications of the technology. Again, I am less interested in how AI will substitute for humans. I’m looking more at the second-order effects of those substitutions: How the underlying systems will change because of changes in speed, scale, scope and sophistication. My goal is to imagine the possibilities. So that we might be prepared for their eventuality.

And as I go through the possibilities, keep in mind a few questions: Will the change distribute or consolidate power? Will it make people more or less personally involved in democracy? What needs to happen before people will trust AI in this context? What could go wrong if a bad actor subverted the AI in this context? And what can we do, as security technologists, to help?

I am thinking about democracy very broadly. Not just representations, or elections. Democracy as a system for distributing decisions evenly across a population. It’s a way of converting individual preferences into group decisions. And that includes bureaucratic decisions.

To that end, I want to discuss five different areas where AI will affect democracy: Politics, lawmaking, administration, the legal system and, finally, citizens themselves.

I: AI-assisted politicians

I’ve already said that AIs are good at persuasion. Politicians will make use of that. Pretty much everyone talks about AI propaganda. Politicians will make use of that, too. But let’s talk about how this might go well.

In the past, candidates would write books and give speeches to connect with voters. In the future, candidates will also use personalized chatbots to directly engage with voters on a variety of issues. AI can also help fundraise. I don’t have to explain the persuasive power of individually crafted appeals. AI can conduct polls. There’s some really interesting work into having large language models assume different personas and answer questions from their points of view. Unlike people, AIs are always available, will answer thousands of questions without getting tired or bored and are more reliable. This won’t replace polls, but it can augment them. AI can assist human campaign managers by coordinating campaign workers, creating talking points, doing media outreach and assisting get-out-the-vote efforts. These are all things that humans already do. So there’s no real news there.

The changes are largely in scale. AIs can engage with voters, conduct polls and fundraise at a scale that humans cannot—for all sizes of elections. They can also assist in lobbying strategies. AIs could also potentially develop more sophisticated campaign and political strategies than humans can. I expect an arms race as politicians start using these sorts of tools. And we don’t know if the tools will favor one political ideology over another.

More interestingly, future politicians will largely be AI-driven. I don’t mean that AI will replace humans as politicians. Absent a major cultural shift—and some serious changes in the law—that won’t happen. But as AI starts to look and feel more human, our human politicians will start to look and feel more like AI. I think we will be OK with it, because it’s a path we’ve been walking down for a long time. Any major politician today is just the public face of a complex socio-technical system. When the president makes a speech, we all know that they didn’t write it. When a legislator sends out a campaign email, we know that they didn’t write that either—even if they signed it. And when we get a holiday card from any of these people, we know that it was signed by an autopen. Those things are so much a part of politics today that we don’t even think about it. In the future, we’ll accept that almost all communications from our leaders will be written by AI. We’ll accept that they use AI tools for making political and policy decisions. And for planning their campaigns. And for everything else they do. None of this is necessarily bad. But it does change the nature of politics and politicians—just like television and the internet did.

II: AI-assisted legislators

AIs are already good at summarization. This can be applied to listening to constituents:  summarizing letters, comments and making sense of constituent inputs. Public meetings might be summarized. Here the scale of the problem is already overwhelming, and AI can make a big difference. Beyond summarizing, AI can highlight interesting arguments or detect bulk letter-writing campaigns. They can aid in political negotiating.

AIs can also write laws. In November 2023, Porto Alegre, Brazil became the first city to enact a law that was entirely written by AI. It had to do with water meters. One of the councilmen prompted ChatGPT, and it produced a complete bill. He submitted it to the legislature without telling anyone who wrote it. And the humans passed it without any changes.

A law is just a piece of generated text that a government agrees to adopt. And as with every other profession, policymakers will turn to AI to help them draft and revise text. Also, AI can take human-written laws and figure out what they actually mean. Lots of laws are recursive, referencing paragraphs and words of other laws. AIs are already good at making sense of all that.

This means that AI will be good at finding legal loopholes—or at creating legal loopholes. I wrote about this in my latest book, A Hacker’s Mind. Finding loopholes is similar to finding vulnerabilities in software. There’s also a concept called “micro-legislation.” That’s the smallest unit of law that makes a difference to someone. It could be a word or a punctuation mark. AIs will be good at inserting micro-legislation into larger bills. More positively, AI can help figure out unintended consequences of a policy change—by simulating how the change interacts with all the other laws and with human behavior.

AI can also write more complex law than humans can. Right now, laws tend to be general. With details to be worked out by a government agency. AI can allow legislators to propose, and then vote on, all of those details. That will change the balance of power between the legislative and the executive branches of government. This is less of an issue when the same party controls the executive and the legislative branches. It is a big deal when those branches of government are in the hands of different parties. The worry is that AI will give the most powerful groups more tools for propagating their interests.

AI can write laws that are impossible for humans to understand. There are two kinds of laws: specific laws, like speed limits, and laws that require judgment, like those that address reckless driving. Imagine that we train an AI on lots of street camera footage to recognize reckless driving and that it gets better than humans at identifying the sort of behavior that tends to result in accidents. And because it has real-time access to cameras everywhere, it can spot it … everywhere. The AI won’t be able to explain its criteria: It would be a black-box neural net. But we could pass a law defining reckless driving by what that AI says. It would be a law that no human could ever understand. This could happen in all sorts of areas where judgment is part of defining what is illegal. We could delegate many things to the AI because of speed and scale. Market manipulation. Medical malpractice. False advertising. I don’t know if humans will accept this.

III: AI-assisted bureaucracy

Generative AI is already good at a whole lot of administrative paperwork tasks. It will only get better. I want to focus on a few places where it will make a big difference. It could aid in benefits administration—figuring out who is eligible for what. Humans do this today, but there is often a backlog because there aren’t enough humans. It could audit contracts. It could operate at scale, auditing all human-negotiated government contracts. It could aid in contracts negotiation. The government buys a lot of things and has all sorts of complicated rules. AI could help government contractors navigate those rules.

More generally, it could aid in negotiations of all kinds. Think of it as a strategic adviser. This is no different than a human but could result in more complex negotiations. Human negotiations generally center around only a few issues. Mostly because that’s what humans can keep in mind. AI versus AI negotiations could potentially involve thousands of variables simultaneously. Imagine we are using an AI to aid in some international trade negotiation and it suggests a complex strategy that is beyond human understanding. Will we blindly follow the AI? Will we be more willing to do so once we have some history with its accuracy?

And one last bureaucratic possibility: Could AI come up with better institutional designs than we have today? And would we implement them?

IV: AI-assisted legal system

When referring to an AI-assisted legal system, I mean this very broadly—both lawyering and judging and all the things surrounding those activities.

AIs can be lawyers. Early attempts at having AIs write legal briefs didn’t go well. But this is already changing as the systems get more accurate. Chatbots are now able to properly cite their sources and minimize errors. Future AIs will be much better at writing legalese, drastically reducing the cost of legal counsel. And there’s every indication that it will be able to do much of the routine work that lawyers do. So let’s talk about what this means.

Most obviously, it reduces the cost of legal advice and representation, giving it to people who currently can’t afford it. An AI public defender is going to be a lot better than an overworked not very good human public defender. But if we assume that human-plus-AI beats AI-only, then the rich get the combination, and the poor are stuck with just the AI.

It also will result in more sophisticated legal arguments. AI’s ability to search all of the law for precedents to bolster a case will be transformative.

AI will also change the meaning of a lawsuit. Right now, suing someone acts as a strong social signal because of the cost. If the cost drops to free, that signal will be lost. And orders of magnitude more lawsuits will be filed, which will overwhelm the court system.

Another effect could be gutting the profession. Lawyering is based on apprenticeship. But if most of the apprentice slots are filled by AIs, where do newly minted attorneys go to get training? And then where do the top human lawyers come from? This might not happen. AI-assisted lawyers might result in more human lawyering. We don’t know yet.

AI can help enforce the law. In a sense, this is nothing new. Automated systems already act as law enforcement—think speed trap cameras and Breathalyzers. But AI can take this kind of thing much further, like automatically identifying people who cheat on tax returns, identifying fraud on government service applications and watching all of the traffic cameras and issuing citations.

Again, the AI is performing a task for which we don’t have enough humans. And doing it faster, and at scale. This has the obvious problem of false positives. Which could be hard to contest if the courts believe that the computer is always right. This is a thing today: If a Breathalyzer says you’re drunk, it can be hard to contest the software in court. And also the problem of bias, of course: AI law enforcers may be more and less equitable than their human predecessors.

But most importantly, AI changes our relationship with the law. Everyone commits driving violations all the time. If we had a system of automatic enforcement, the way we all drive would change—significantly. Not everyone wants this future. Lots of people don’t want to fund the IRS, even though catching tax cheats is incredibly profitable for the government. And there are legitimate concerns as to whether this would be applied equitably.

AI can help enforce regulations. We have no shortage of rules and regulations. What we have is a shortage of time, resources and willpower to enforce them, which means that lots of companies know that they can ignore regulations with impunity. AI can change this by decoupling the ability to enforce rules from the resources necessary to do it. This makes enforcement more scalable and efficient. Imagine putting cameras in every slaughterhouse in the country looking for animal welfare violations or fielding an AI in every warehouse camera looking for labor violations. That could create an enormous shift in the balance of power between government and corporations—which means that it will be strongly resisted by corporate power.

AIs can provide expert opinions in court. Imagine an AI trained on millions of traffic accidents, including video footage, telemetry from cars and previous court cases. The AI could provide the court with a reconstruction of the accident along with an assignment of fault. AI could do this in a lot of cases where there aren’t enough human experts to analyze the data—and would do it better, because it would have more experience.

AIs can also perform judging tasks, weighing evidence and making decisions, probably not in actual courtrooms, at least not anytime soon, but in other contexts. There are many areas of government where we don’t have enough adjudicators. Automated adjudication has the potential to offer everyone immediate justice. Maybe the AI does the first level of adjudication and humans handle appeals. Probably the first place we’ll see this is in contracts. Instead of the parties agreeing to binding arbitration to resolve disputes, they’ll agree to binding arbitration by AI. This would significantly decrease cost of arbitration. Which would probably significantly increase the number of disputes.

So, let’s imagine a world where dispute resolution is both cheap and fast. If you and I are business partners, and we have a disagreement, we can get a ruling in minutes. And we can do it as many times as we want—multiple times a day, even. Will we lose the ability to disagree and then resolve our disagreements on our own? Or will this make it easier for us to be in a partnership and trust each other?

V: AI-assisted citizens

AI can help people understand political issues by explaining them. We can imagine both partisan and nonpartisan chatbots. AI can also provide political analysis and commentary. And it can do this at every scale. Including for local elections that simply aren’t important enough to attract human journalists. There is a lot of research going on right now on AI as moderator, facilitator, and consensus builder. Human moderators are still better, but we don’t have enough human moderators. And AI will improve over time. AI can moderate at scale, giving the capability to every decision-making group—or chatroom—or local government meeting.

AI can act as a government watchdog. Right now, much local government effectively happens in secret because there are no local journalists covering public meetings. AI can change that, providing summaries and flagging changes in position.

AIs can help people navigate bureaucracies by filling out forms, applying for services and contesting bureaucratic actions. This would help people get the services they deserve, especially disadvantaged people who have difficulty navigating these systems. Again, this is a task that we don’t have enough qualified humans to perform. It sounds good, but not everyone wants this. Administrative burdens can be deliberate.

Finally, AI can eliminate the need for politicians. This one is further out there, but bear with me. Already there is research showing AI can extrapolate our political preferences. An AI personal assistant trained on and continuously attuned to your political preferences could advise you, including what to support and who to vote for. It could possibly even vote on your behalf or, more interestingly, act as your personal representative.

This is where it gets interesting. Our system of representative democracy empowers elected officials to stand in for our collective preferences. But that has obvious problems. Representatives are necessary because people don’t pay attention to politics. And even if they did, there isn’t enough room in the debate hall for everyone to fit. So we need to pick one of us to pass laws in our name. But that selection process is incredibly inefficient. We have complex policy wants and beliefs and can make complex trade-offs. The space of possible policy outcomes is equally complex. But we can’t directly debate the policies. We can only choose one of two—or maybe a few more—candidates to do that for us. This has been called democracy’s “lossy bottleneck.” AI can change this. We can imagine a personal AI directly participating in policy debates on our behalf along with millions of other personal AIs and coming to a consensus on policy.

More near term, AIs can result in more ballot initiatives. Instead of five or six, there might be five or six hundred, as long as the AI can reliably advise people on how to vote. It’s hard to know whether this is a good thing. I don’t think we want people to become politically passive because the AI is taking care of it. But it could result in more legislation that the majority actually wants.

Where will AI take us?

That’s my list. Again, watch where changes of degree result in changes in kind. The sophistication of AI lawmaking will mean more detailed laws, which will change the balance of power between the executive and the legislative branches. The scale of AI lawyering means that litigation becomes affordable to everyone, which will mean an explosion in the amount of litigation. The speed of AI adjudication means that contract disputes will get resolved much faster, which will change the nature of settlements. The scope of AI enforcement means that some laws will become impossible to evade, which will change how the rich and powerful think about them.

I think this is all coming. The time frame is hazy, but the technology is moving in these directions.

All of these applications need security of one form or another. Can we provide confidentiality, integrity and availability where it is needed? AIs are just computers. As such, they have all the security problems regular computers have—plus the new security risks stemming from AI and the way it is trained, deployed and used. Like everything else in security, it depends on the details.

First, the incentives matter. In some cases, the user of the AI wants it to be both secure and accurate. In some cases, the user of the AI wants to subvert the system. Think about prompt injection attacks. In most cases, the owners of the AIs aren’t the users of the AI. As happened with search engines and social media, surveillance and advertising are likely to become the AI’s business model. And in some cases, what the user of the AI wants is at odds with what society wants.

Second, the risks matter. The cost of getting things wrong depends a lot on the application. If a candidate’s chatbot suggests a ridiculous policy, that’s easily corrected. If an AI is helping someone fill out their immigration paperwork, a mistake can get them deported. We need to understand the rate of AI mistakes versus the rate of human mistakes—and also realize that AI mistakes are viewed differently than human mistakes. There are also different types of mistakes: false positives versus false negatives. But also, AI systems can make different kinds of mistakes than humans do—and that’s important. In every case, the systems need to be able to correct mistakes, especially in the context of democracy.

Many of the applications are in adversarial environments. If two countries are using AI to assist in trade negotiations, they are both going to try to hack each other’s AIs. This will include attacks against the AI models but also conventional attacks against the computers and networks that are running the AIs. They’re going to want to subvert, eavesdrop on or disrupt the other’s AI.

Some AI applications will need to run in secure environments. Large language models work best when they have access to everything, in order to train. That goes against traditional classification rules about compartmentalization.

Fourth, power matters. AI is a technology that fundamentally magnifies power of the humans who use it, but not equally across users or applications. Can we build systems that reduce power imbalances rather than increase them? Think of the privacy versus surveillance debate in the context of AI.

And similarly, equity matters. Human agency matters.

And finally, trust matters. Whether or not to trust an AI is less about the AI and more about the application. Some of these AI applications are individual. Some of these applications are societal. Whether something like “fairness” matters depends on this. And there are many competing definitions of fairness that depend on the details of the system and the application. It’s the same with transparency. The need for it depends on the application and the incentives. Democratic applications are likely to require more transparency than corporate ones and probably AI models that are not owned and run by global tech monopolies.

All of these security issues are bigger than AI or democracy. Like all of our security experience, applying it to these new systems will require some new thinking.

AI will be one of humanity’s most important inventions. That’s probably true. What we don’t know is if this is the moment we are inventing it. Or if today’s systems are yet more over-hyped technologies. But these are security conversations we are going to need to have eventually.

AI is fundamentally a power-enhancing technology. We need to ensure that it distributes power and doesn’t further concentrate it.

AI is coming for democracy. Whether the changes are a net positive or negative depends on us. Let’s help tilt things to the positive.

This essay is adapted from a keynote speech delivered at the RSA Conference in San Francisco on May 7, 2024. It originally appeared in Cyberscoop.

 

How AI Will Change Democracy

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale...

The post How AI Will Change Democracy appeared first on Security Boulevard.

‘We deeply regret the distress’: cinema apologises for Richard Dreyfuss comments at Jaws screening

The actor took to the stage in a dress backed by Taylor Swift’s Love Story, then reportedly made a number of sexist and transphobic comments

A cinema in Massachusetts has apologised to the audience at a special screening of Jaws and a Q&A with its star, Richard Dreyfuss, who reportedly made a number of sexist and transphobic comments.

Appearing at the Cabot theatre in Beverly, Massachusetts on 25 May, Dreyfuss took to the stage in a house dress to a background track of Taylor Swift’s Love Story, shaking his hips suggestively and brandishing his walking stick like a baseball bat.

Continue reading...

💾

© Photograph: Kristin Callahan/Shutterstock

💾

© Photograph: Kristin Callahan/Shutterstock

Take Command Summit: Take Breaches from Inevitable to Preventable on May 21

Take Command Summit: Take Breaches from Inevitable to Preventable on May 21

Registration is now open for Take Command, a day-long virtual summit in partnership with AWS. You do not want to miss it. You’ll get new attack intelligence, insight into AI disruption, transparent MDR partnerships, and more.

In 2024, adversaries are using AI and new techniques, working in gangs with nation-state budgets. But it’s “inevitable” they’ll succeed? Really?

Before any talk of surrender, please join us at Take Command. We’ve packed the day with information and insights you can take back to your team and use immediately.

You’ll hear from Chief Scientist Raj Samani, our own Chief Security Officer Jaya Baloo, global security leaders, hands-on practitioners, and Rapid7 Labs leaders like Christiaan Beek and Caitlin Condon. You’ll get a first look at new, emergent research, trends, and intelligence from the curators of Metasploit and our renowned open source communities.

You’ll leave with actionable strategies to safeguard against the newest ransomware, state-sponsored TTPs, and marquee vulnerabilities.

Can’t make the entire day? Check out the agenda, see what fits

The summit kicks off with back-to-back keynotes. First, “Know Your Adversary: Breaking Down the 2024 Attack Intelligence Report” and “The State of Security 2024.”

You’ll get an insider view of Rapid7’s MDR SOC. Sessions range from “Building Defenses Through AI” to “Unlocking Success: Strategies for Measuring Team Performance” to a big favorite “Before, During, & After Ransomware Attacks.” Though no one really talks about it, there’s a lengthy “before” period, and new, good things you can do to frustrate the bad guys.

Take Command will offer strategies on building cybersecurity culture (yes, it’s difficult with humans). And, of course, preparing for the Securities & Exchange Commission's Cybersecurity Disclosure Rules. You’ll hear from Sabeen Malik, VP, Global Government Affairs and Public Policy, Kyra Ayo Caros Director, Corporate Securities & Compliance and Harley L. Geiger, Venable LLP.

Now, turning the tables on attackers is possible

Adversaries are inflicting $10 trillion in damage to the global economy every year , and the goal posts keep moving. As risks from cloud, IoT, AI and quantum computing proliferate and attacks get more frequent, SecOps have never been more stressed. And more in need of sophisticated guidance.

Mark your calendar for May 21. Get details here. You’ll be saving a lot more than the date.

How the “Frontier” Became the Slogan of Uncontrolled AI

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.

America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

America’s “Frontier” Problem

For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

AI as a Permission Structure

Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training their models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being set to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI such as Reinforcement Learning with Human Feedback, heralded as the most important breakthrough of ChatGPT.

The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.

Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment the contents people posted to their sites.

We can already see this kind of boundary pushing happening with AI.

Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.

Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder if they simply won’t be considered relevant if people break those rules using AI.

AI and American Exceptionalism

Like The United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power, the most data scientists, the most venture-capitalist investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (leading to justifying of foreign interventions in Iraq and elsewhere), masking of severe inequity within our borders, abdication of responsibility from global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

AI and Unbridled Growth

The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

Controlling Our Frontier Impulses

None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

This essay was written with Nathan Sanders, and was originally published in Jacobin.

Wyze cameras show the wrong feeds to customers. Again.

Last September, we wrote an article about how Wyze home cameras temporarily showed other people’s security feeds.

As far as home cameras go, we said this is absolutely up there at the top of the “things you don’t want to happen” list. Turning your customers into Peeping Tom against their will and exposing other customers’ footage is definitely not OK.

It’s not OK, but yet here we are again. On February 17, TheVerge reported that history had repeated itself. Wyze co-founder David Crosby confirmed that users were able to briefly see into a stranger’s property because they were shown an image from someone else’s camera.

Crosby told The Verge:

“We have now identified a security issue where some users were able to see thumbnails of cameras that were not their own in the Events tab.”

So, it’s not a full feed and just a thumbnail, you might think. Is that such a big deal? Well, it was a bit more than that. Users got notification alerts for events in their house. I don’t know how you feel when you get one of those while you know there shouldn’t be anyone there, but it’s enough to make me nervous.

Imagine your surprise when you then see someone else’s house as the cause for that notification.

Wyze blames the issue on overload and corruption of user data after an AWS outage. However, AWS did not report an outage during the time Wyze cameras were having these problems.

And, while the company originally said it had identified 14 instances of the security issue, the number of complaints on Reddit and the Wyze forums indicated that there must have been a lot more.

This turned out to be the case. In an email sent to customers, Wyze revealed that it was actually around 13,000 people who got an unauthorized peek at thumbnails from other people’s homes.

Wyze chalks up the incident to a recently-integrated third-party caching client library which caused the issue when they brought back cameras online after an outage at AWS.

“This client library received unprecedented load conditions caused by devices coming back online all at once. As a result of increased demand, it mixed up device ID and user ID mapping and connected some data to incorrect accounts.”

Wyze says it has added an extra layer of verification before users can view Event videos.

So, all we can do is hope we don’t have to write another story like this one in a few months.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

❌