Reading view

There are new articles available, click to refresh the page.

Google’s A.I. Search Leaves Publishers Scrambling

Since Google overhauled its search engine, publishers have tried to assess the danger to their brittle business models while calling for government intervention.

© Jason Henry for The New York Times

Google’s chief executive, Sundar Pichai, last year. A new A.I.-generated feature in Google search results “is greatly detrimental to everyone apart from Google,” a newspaper executive said.

How AI Will Change Democracy

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale.

It gets interesting when changes in degree can become changes in kind. High-speed trading is fundamentally different than regular human trading. AIs have invented fundamentally new strategies in the game of Go. Millions of AI-controlled social media accounts could fundamentally change the nature of propaganda.

It’s these sorts of changes and how AI will affect democracy that I want to talk about.

To start, I want to list some of AI’s core competences. First, it is really good as a summarizer. Second, AI is good at explaining things, teaching with infinite patience. Third, and related, AI can persuade. Propaganda is an offshoot of this. Fourth, AI is fundamentally a prediction technology. Predictions about whether turning left or right will get you to your destination faster. Predictions about whether a tumor is cancerous might improve medical diagnoses. Predictions about which word is likely to come next can help compose an email. Fifth, AI can assess. Assessing requires outside context and criteria. AI is less good at assessing, but it’s getting better. Sixth, AI can decide. A decision is a prediction plus an assessment. We are already using AI to make all sorts of decisions.

How these competences translate to actual useful AI systems depends a lot on the details. We don’t know how far AI will go in replicating or replacing human cognitive functions. Or how soon that will happen. In constrained environments it can be easy. AIs already play chess and Go better than humans. Unconstrained environments are harder. There are still significant challenges to fully AI-piloted automobiles. The technologist Jaron Lanier has a nice quote, that AI does best when “human activities have been done many times before, but not in exactly the same way.”

In this talk, I am going to be largely optimistic about the technology. I’m not going to dwell on the details of how the AI systems might work. Much of what I am talking about is still in the future. Science fiction, but not unrealistic science fiction.

Where I am going to be less optimistic—and more realistic—is about the social implications of the technology. Again, I am less interested in how AI will substitute for humans. I’m looking more at the second-order effects of those substitutions: How the underlying systems will change because of changes in speed, scale, scope and sophistication. My goal is to imagine the possibilities. So that we might be prepared for their eventuality.

And as I go through the possibilities, keep in mind a few questions: Will the change distribute or consolidate power? Will it make people more or less personally involved in democracy? What needs to happen before people will trust AI in this context? What could go wrong if a bad actor subverted the AI in this context? And what can we do, as security technologists, to help?

I am thinking about democracy very broadly. Not just representations, or elections. Democracy as a system for distributing decisions evenly across a population. It’s a way of converting individual preferences into group decisions. And that includes bureaucratic decisions.

To that end, I want to discuss five different areas where AI will affect democracy: Politics, lawmaking, administration, the legal system and, finally, citizens themselves.

I: AI-assisted politicians

I’ve already said that AIs are good at persuasion. Politicians will make use of that. Pretty much everyone talks about AI propaganda. Politicians will make use of that, too. But let’s talk about how this might go well.

In the past, candidates would write books and give speeches to connect with voters. In the future, candidates will also use personalized chatbots to directly engage with voters on a variety of issues. AI can also help fundraise. I don’t have to explain the persuasive power of individually crafted appeals. AI can conduct polls. There’s some really interesting work into having large language models assume different personas and answer questions from their points of view. Unlike people, AIs are always available, will answer thousands of questions without getting tired or bored and are more reliable. This won’t replace polls, but it can augment them. AI can assist human campaign managers by coordinating campaign workers, creating talking points, doing media outreach and assisting get-out-the-vote efforts. These are all things that humans already do. So there’s no real news there.

The changes are largely in scale. AIs can engage with voters, conduct polls and fundraise at a scale that humans cannot—for all sizes of elections. They can also assist in lobbying strategies. AIs could also potentially develop more sophisticated campaign and political strategies than humans can. I expect an arms race as politicians start using these sorts of tools. And we don’t know if the tools will favor one political ideology over another.

More interestingly, future politicians will largely be AI-driven. I don’t mean that AI will replace humans as politicians. Absent a major cultural shift—and some serious changes in the law—that won’t happen. But as AI starts to look and feel more human, our human politicians will start to look and feel more like AI. I think we will be OK with it, because it’s a path we’ve been walking down for a long time. Any major politician today is just the public face of a complex socio-technical system. When the president makes a speech, we all know that they didn’t write it. When a legislator sends out a campaign email, we know that they didn’t write that either—even if they signed it. And when we get a holiday card from any of these people, we know that it was signed by an autopen. Those things are so much a part of politics today that we don’t even think about it. In the future, we’ll accept that almost all communications from our leaders will be written by AI. We’ll accept that they use AI tools for making political and policy decisions. And for planning their campaigns. And for everything else they do. None of this is necessarily bad. But it does change the nature of politics and politicians—just like television and the internet did.

II: AI-assisted legislators

AIs are already good at summarization. This can be applied to listening to constituents:  summarizing letters, comments and making sense of constituent inputs. Public meetings might be summarized. Here the scale of the problem is already overwhelming, and AI can make a big difference. Beyond summarizing, AI can highlight interesting arguments or detect bulk letter-writing campaigns. They can aid in political negotiating.

AIs can also write laws. In November 2023, Porto Alegre, Brazil became the first city to enact a law that was entirely written by AI. It had to do with water meters. One of the councilmen prompted ChatGPT, and it produced a complete bill. He submitted it to the legislature without telling anyone who wrote it. And the humans passed it without any changes.

A law is just a piece of generated text that a government agrees to adopt. And as with every other profession, policymakers will turn to AI to help them draft and revise text. Also, AI can take human-written laws and figure out what they actually mean. Lots of laws are recursive, referencing paragraphs and words of other laws. AIs are already good at making sense of all that.

This means that AI will be good at finding legal loopholes—or at creating legal loopholes. I wrote about this in my latest book, A Hacker’s Mind. Finding loopholes is similar to finding vulnerabilities in software. There’s also a concept called “micro-legislation.” That’s the smallest unit of law that makes a difference to someone. It could be a word or a punctuation mark. AIs will be good at inserting micro-legislation into larger bills. More positively, AI can help figure out unintended consequences of a policy change—by simulating how the change interacts with all the other laws and with human behavior.

AI can also write more complex law than humans can. Right now, laws tend to be general. With details to be worked out by a government agency. AI can allow legislators to propose, and then vote on, all of those details. That will change the balance of power between the legislative and the executive branches of government. This is less of an issue when the same party controls the executive and the legislative branches. It is a big deal when those branches of government are in the hands of different parties. The worry is that AI will give the most powerful groups more tools for propagating their interests.

AI can write laws that are impossible for humans to understand. There are two kinds of laws: specific laws, like speed limits, and laws that require judgment, like those that address reckless driving. Imagine that we train an AI on lots of street camera footage to recognize reckless driving and that it gets better than humans at identifying the sort of behavior that tends to result in accidents. And because it has real-time access to cameras everywhere, it can spot it … everywhere. The AI won’t be able to explain its criteria: It would be a black-box neural net. But we could pass a law defining reckless driving by what that AI says. It would be a law that no human could ever understand. This could happen in all sorts of areas where judgment is part of defining what is illegal. We could delegate many things to the AI because of speed and scale. Market manipulation. Medical malpractice. False advertising. I don’t know if humans will accept this.

III: AI-assisted bureaucracy

Generative AI is already good at a whole lot of administrative paperwork tasks. It will only get better. I want to focus on a few places where it will make a big difference. It could aid in benefits administration—figuring out who is eligible for what. Humans do this today, but there is often a backlog because there aren’t enough humans. It could audit contracts. It could operate at scale, auditing all human-negotiated government contracts. It could aid in contracts negotiation. The government buys a lot of things and has all sorts of complicated rules. AI could help government contractors navigate those rules.

More generally, it could aid in negotiations of all kinds. Think of it as a strategic adviser. This is no different than a human but could result in more complex negotiations. Human negotiations generally center around only a few issues. Mostly because that’s what humans can keep in mind. AI versus AI negotiations could potentially involve thousands of variables simultaneously. Imagine we are using an AI to aid in some international trade negotiation and it suggests a complex strategy that is beyond human understanding. Will we blindly follow the AI? Will we be more willing to do so once we have some history with its accuracy?

And one last bureaucratic possibility: Could AI come up with better institutional designs than we have today? And would we implement them?

IV: AI-assisted legal system

When referring to an AI-assisted legal system, I mean this very broadly—both lawyering and judging and all the things surrounding those activities.

AIs can be lawyers. Early attempts at having AIs write legal briefs didn’t go well. But this is already changing as the systems get more accurate. Chatbots are now able to properly cite their sources and minimize errors. Future AIs will be much better at writing legalese, drastically reducing the cost of legal counsel. And there’s every indication that it will be able to do much of the routine work that lawyers do. So let’s talk about what this means.

Most obviously, it reduces the cost of legal advice and representation, giving it to people who currently can’t afford it. An AI public defender is going to be a lot better than an overworked not very good human public defender. But if we assume that human-plus-AI beats AI-only, then the rich get the combination, and the poor are stuck with just the AI.

It also will result in more sophisticated legal arguments. AI’s ability to search all of the law for precedents to bolster a case will be transformative.

AI will also change the meaning of a lawsuit. Right now, suing someone acts as a strong social signal because of the cost. If the cost drops to free, that signal will be lost. And orders of magnitude more lawsuits will be filed, which will overwhelm the court system.

Another effect could be gutting the profession. Lawyering is based on apprenticeship. But if most of the apprentice slots are filled by AIs, where do newly minted attorneys go to get training? And then where do the top human lawyers come from? This might not happen. AI-assisted lawyers might result in more human lawyering. We don’t know yet.

AI can help enforce the law. In a sense, this is nothing new. Automated systems already act as law enforcement—think speed trap cameras and Breathalyzers. But AI can take this kind of thing much further, like automatically identifying people who cheat on tax returns, identifying fraud on government service applications and watching all of the traffic cameras and issuing citations.

Again, the AI is performing a task for which we don’t have enough humans. And doing it faster, and at scale. This has the obvious problem of false positives. Which could be hard to contest if the courts believe that the computer is always right. This is a thing today: If a Breathalyzer says you’re drunk, it can be hard to contest the software in court. And also the problem of bias, of course: AI law enforcers may be more and less equitable than their human predecessors.

But most importantly, AI changes our relationship with the law. Everyone commits driving violations all the time. If we had a system of automatic enforcement, the way we all drive would change—significantly. Not everyone wants this future. Lots of people don’t want to fund the IRS, even though catching tax cheats is incredibly profitable for the government. And there are legitimate concerns as to whether this would be applied equitably.

AI can help enforce regulations. We have no shortage of rules and regulations. What we have is a shortage of time, resources and willpower to enforce them, which means that lots of companies know that they can ignore regulations with impunity. AI can change this by decoupling the ability to enforce rules from the resources necessary to do it. This makes enforcement more scalable and efficient. Imagine putting cameras in every slaughterhouse in the country looking for animal welfare violations or fielding an AI in every warehouse camera looking for labor violations. That could create an enormous shift in the balance of power between government and corporations—which means that it will be strongly resisted by corporate power.

AIs can provide expert opinions in court. Imagine an AI trained on millions of traffic accidents, including video footage, telemetry from cars and previous court cases. The AI could provide the court with a reconstruction of the accident along with an assignment of fault. AI could do this in a lot of cases where there aren’t enough human experts to analyze the data—and would do it better, because it would have more experience.

AIs can also perform judging tasks, weighing evidence and making decisions, probably not in actual courtrooms, at least not anytime soon, but in other contexts. There are many areas of government where we don’t have enough adjudicators. Automated adjudication has the potential to offer everyone immediate justice. Maybe the AI does the first level of adjudication and humans handle appeals. Probably the first place we’ll see this is in contracts. Instead of the parties agreeing to binding arbitration to resolve disputes, they’ll agree to binding arbitration by AI. This would significantly decrease cost of arbitration. Which would probably significantly increase the number of disputes.

So, let’s imagine a world where dispute resolution is both cheap and fast. If you and I are business partners, and we have a disagreement, we can get a ruling in minutes. And we can do it as many times as we want—multiple times a day, even. Will we lose the ability to disagree and then resolve our disagreements on our own? Or will this make it easier for us to be in a partnership and trust each other?

V: AI-assisted citizens

AI can help people understand political issues by explaining them. We can imagine both partisan and nonpartisan chatbots. AI can also provide political analysis and commentary. And it can do this at every scale. Including for local elections that simply aren’t important enough to attract human journalists. There is a lot of research going on right now on AI as moderator, facilitator, and consensus builder. Human moderators are still better, but we don’t have enough human moderators. And AI will improve over time. AI can moderate at scale, giving the capability to every decision-making group—or chatroom—or local government meeting.

AI can act as a government watchdog. Right now, much local government effectively happens in secret because there are no local journalists covering public meetings. AI can change that, providing summaries and flagging changes in position.

AIs can help people navigate bureaucracies by filling out forms, applying for services and contesting bureaucratic actions. This would help people get the services they deserve, especially disadvantaged people who have difficulty navigating these systems. Again, this is a task that we don’t have enough qualified humans to perform. It sounds good, but not everyone wants this. Administrative burdens can be deliberate.

Finally, AI can eliminate the need for politicians. This one is further out there, but bear with me. Already there is research showing AI can extrapolate our political preferences. An AI personal assistant trained on and continuously attuned to your political preferences could advise you, including what to support and who to vote for. It could possibly even vote on your behalf or, more interestingly, act as your personal representative.

This is where it gets interesting. Our system of representative democracy empowers elected officials to stand in for our collective preferences. But that has obvious problems. Representatives are necessary because people don’t pay attention to politics. And even if they did, there isn’t enough room in the debate hall for everyone to fit. So we need to pick one of us to pass laws in our name. But that selection process is incredibly inefficient. We have complex policy wants and beliefs and can make complex trade-offs. The space of possible policy outcomes is equally complex. But we can’t directly debate the policies. We can only choose one of two—or maybe a few more—candidates to do that for us. This has been called democracy’s “lossy bottleneck.” AI can change this. We can imagine a personal AI directly participating in policy debates on our behalf along with millions of other personal AIs and coming to a consensus on policy.

More near term, AIs can result in more ballot initiatives. Instead of five or six, there might be five or six hundred, as long as the AI can reliably advise people on how to vote. It’s hard to know whether this is a good thing. I don’t think we want people to become politically passive because the AI is taking care of it. But it could result in more legislation that the majority actually wants.

Where will AI take us?

That’s my list. Again, watch where changes of degree result in changes in kind. The sophistication of AI lawmaking will mean more detailed laws, which will change the balance of power between the executive and the legislative branches. The scale of AI lawyering means that litigation becomes affordable to everyone, which will mean an explosion in the amount of litigation. The speed of AI adjudication means that contract disputes will get resolved much faster, which will change the nature of settlements. The scope of AI enforcement means that some laws will become impossible to evade, which will change how the rich and powerful think about them.

I think this is all coming. The time frame is hazy, but the technology is moving in these directions.

All of these applications need security of one form or another. Can we provide confidentiality, integrity and availability where it is needed? AIs are just computers. As such, they have all the security problems regular computers have—plus the new security risks stemming from AI and the way it is trained, deployed and used. Like everything else in security, it depends on the details.

First, the incentives matter. In some cases, the user of the AI wants it to be both secure and accurate. In some cases, the user of the AI wants to subvert the system. Think about prompt injection attacks. In most cases, the owners of the AIs aren’t the users of the AI. As happened with search engines and social media, surveillance and advertising are likely to become the AI’s business model. And in some cases, what the user of the AI wants is at odds with what society wants.

Second, the risks matter. The cost of getting things wrong depends a lot on the application. If a candidate’s chatbot suggests a ridiculous policy, that’s easily corrected. If an AI is helping someone fill out their immigration paperwork, a mistake can get them deported. We need to understand the rate of AI mistakes versus the rate of human mistakes—and also realize that AI mistakes are viewed differently than human mistakes. There are also different types of mistakes: false positives versus false negatives. But also, AI systems can make different kinds of mistakes than humans do—and that’s important. In every case, the systems need to be able to correct mistakes, especially in the context of democracy.

Many of the applications are in adversarial environments. If two countries are using AI to assist in trade negotiations, they are both going to try to hack each other’s AIs. This will include attacks against the AI models but also conventional attacks against the computers and networks that are running the AIs. They’re going to want to subvert, eavesdrop on or disrupt the other’s AI.

Some AI applications will need to run in secure environments. Large language models work best when they have access to everything, in order to train. That goes against traditional classification rules about compartmentalization.

Fourth, power matters. AI is a technology that fundamentally magnifies power of the humans who use it, but not equally across users or applications. Can we build systems that reduce power imbalances rather than increase them? Think of the privacy versus surveillance debate in the context of AI.

And similarly, equity matters. Human agency matters.

And finally, trust matters. Whether or not to trust an AI is less about the AI and more about the application. Some of these AI applications are individual. Some of these applications are societal. Whether something like “fairness” matters depends on this. And there are many competing definitions of fairness that depend on the details of the system and the application. It’s the same with transparency. The need for it depends on the application and the incentives. Democratic applications are likely to require more transparency than corporate ones and probably AI models that are not owned and run by global tech monopolies.

All of these security issues are bigger than AI or democracy. Like all of our security experience, applying it to these new systems will require some new thinking.

AI will be one of humanity’s most important inventions. That’s probably true. What we don’t know is if this is the moment we are inventing it. Or if today’s systems are yet more over-hyped technologies. But these are security conversations we are going to need to have eventually.

AI is fundamentally a power-enhancing technology. We need to ensure that it distributes power and doesn’t further concentrate it.

AI is coming for democracy. Whether the changes are a net positive or negative depends on us. Let’s help tilt things to the positive.

This essay is adapted from a keynote speech delivered at the RSA Conference in San Francisco on May 7, 2024. It originally appeared in Cyberscoop.

 

How AI Will Change Democracy

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale...

The post How AI Will Change Democracy appeared first on Security Boulevard.

Why Google’s AI Overviews gets things wrong

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more here.

When Google announced it was rolling out its artificial-intelligence-powered search feature earlier this month, the company promised that “Google will do the googling for you.” The new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results.

Unfortunately, AI systems are inherently unreliable. Within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. It suggested that users add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875. 

On Thursday, Liz Reid, head of Google Search, announced that the company has been making technical improvements to the system to make it less likely to generate incorrect answers, including better detection mechanisms for nonsensical queries. It is also limiting the inclusion of satirical, humorous, and user-generated content in responses, since such material could result in misleading advice.

But why is AI Overviews returning unreliable, potentially dangerous information? And what, if anything, can be done to fix it?

How does AI Overviews work?

In order to understand why AI-powered search engines get things wrong, we need to look at how they’ve been optimized to work. We know that AI Overviews uses a new generative AI model in Gemini, Google’s family of large language models (LLMs), that’s been customized for Google Search. That model has been integrated with Google’s core web ranking systems and designed to pull out relevant results from its index of websites.

Most LLMs simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. That leads to hallucinations. It’s likely that the Gemini model in AI Overviews gets around this by using an AI technique called retrieval-augmented generation (RAG), which allows an LLM to check specific sources outside of the data it’s been trained on, such as certain web pages, says Chirag Shah, a professor at the University of Washington who specializes in online search.

Once a user enters a query, it’s checked against the documents that make up the system’s information sources, and a response is generated. Because the system is able to match the original query to specific parts of web pages, it’s able to cite where it drew its answer from—something normal LLMs cannot do.

One major upside of RAG is that the responses it generates to a user’s queries should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data. The technique is often used to try to prevent LLMs from hallucinating. (A Google spokesperson would not confirm whether AI Overviews uses RAG.)

So why does it return bad answers?

But RAG is far from foolproof. In order for an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. A bad answer results when one or both parts of the process fail.

In the case of AI Overviews’ recommendation of a pizza recipe that contains glue—drawing from a joke post on Reddit—it’s likely that the post appeared relevant to the user’s original query about cheese not sticking to pizza, but something went wrong in the retrieval process, says Shah. “Just because it’s relevant doesn’t mean it’s right, and the generation part of the process doesn’t question that,” he says.

Similarly, if a RAG system comes across conflicting information, like a policy handbook and an updated version of the same handbook, it’s unable to work out which version to draw its response from. Instead, it may combine information from both to create a potentially misleading answer. 

“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” says Suzan Verberne, a professor at Leiden University who specializes in natural-language processing.

The more specific a topic is, the higher the chance of misinformation in a large language model’s output, she says, adding: “This is a problem in the medical domain, but also education and science.”

According to the Google spokesperson, in many cases when AI Overviews returns incorrect answers it’s because there’s not a lot of high-quality information available on the web to show for the query—or because the query most closely matches satirical sites or joke posts.

The spokesperson says the vast majority of AI Overviews provide high-quality information and that many of the examples of bad answers were in response to uncommon queries, adding that AI Overviews containing potentially harmful, obscene, or otherwise unacceptable content came up in response to less than one in every 7 million unique queries. Google is continuing to remove AI Overviews on certain queries in accordance with its content policies. 

It’s not just about bad training data

Although the pizza glue blunder is a good example of a case where AI Overviews pointed to an unreliable source, the system can also generate misinformation from factually correct sources. Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico, googled “How many Muslim presidents has the US had?’” AI Overviews responded: “The United States has had one Muslim president, Barack Hussein Obama.” 

While Barack Obama is not Muslim, making AI Overviews’ response wrong, it drew its information from a chapter in an academic book titled Barack Hussein Obama: America’s First Muslim President? So not only did the AI system miss the entire point of the essay, it interpreted it in the exact opposite of the intended way, says Mitchell. “There’s a few problems here for the AI; one is finding a good source that’s not a joke, but another is interpreting what the source is saying correctly,” she adds. “This is something that AI systems have trouble doing, and it’s important to note that even when it does get a good source, it can still make errors.”

Can the problem be fixed?

Ultimately, we know that AI systems are unreliable, and so long as they are using probability to generate text word by word, hallucination is always going to be a risk. And while AI Overviews is likely to improve as Google tweaks it behind the scenes, we can never be certain it’ll be 100% accurate.

Google has said that it’s adding triggering restrictions for queries where AI Overviews were not proving to be especially helpful and has added additional “triggering refinements” for queries related to health. The company could add a step to the information retrieval process designed to flag a risky query and have the system refuse to generate an answer in these instances, says Verberne. Google doesn’t aim to show AI Overviews for explicit or dangerous topics, or for queries that indicate a vulnerable situation, the company spokesperson says.

Techniques like reinforcement learning from human feedback, which incorporates such feedback into an LLM’s training, can also help improve the quality of its answers. 

Similarly, LLMs could be trained specifically for the task of identifying when a question cannot be answered, and it could also be useful to instruct them to carefully assess the quality of a retrieved document before generating an answer, Verbene says: “Proper instruction helps a lot!” 

Although Google has added a label to AI Overviews answers reading “Generative AI is experimental,” it should consider making it much clearer that the feature is in beta and emphasizing that it is not ready to provide fully reliable answers, says Shah. “Until it’s no longer beta—which it currently definitely is, and will be for some time— it should be completely optional. It should not be forced on us as part of core search.”

Google Eats Rocks, a Win for A.I. Interpretability and Safety Vibe Check

“Pass me the nontoxic glue and a couple of rocks, because it’s time to whip up a meal with Google’s new A.I. Overviews.”

© Photo Illustration by The New York Times; Photo: Enter89/Getty Images (rocks); T Kimura/Getty Images (plate)

The Long History of Discrimination in Job Hiring Assessments

pApplying for jobs can be a difficult and frustrating experience: you’re putting forward your qualifications to be judged by a prospective employer. We all want to be treated fairly. We want our qualifications to speak for themselves. But for job seekers who have been historically excluded or discriminated against because of their race, gender identity, or disability, there can be another question lurking in the background: Am I being judged, not for my ability to do the job, but for my identity?/p pAutomated decision-making tools, including those using artificial intelligence, or AI, and algorithms, have been widely adopted in hiring. Today seven out of 10 employers use them. We have a href=https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hiredpreviously written/a about AI and some of the newer ways that it’s impacting hiring, including how it lacks transparency and can harbor serious flaws that lead to bias and discrimination. But these tools are just the latest frontier in a long history of employment tests that can discriminate and harm job seekers. For example, one of the landmark civil rights cases, a href=https://supreme.justia.com/cases/federal/us/401/424/Griggs v. Duke Power Co (1971)/a, was about a company’s use of bogus tests to a href=https://www.eeoc.gov/meetings/meeting-january-31-2023-navigating-employment-discrimination-ai-and-automated-systems-new/mooreblock the promotion of Black workers/a./p div class=mp-md wp-link div class=wp-link__img-wrapper a href=https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired target=_blank tabindex=-1 img width=1200 height=628 src=https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d.jpg class=attachment-4x3_full size-4x3_full alt= decoding=async srcset=https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d.jpg 1200w, https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d-768x402.jpg 768w, https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d-400x209.jpg 400w, https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d-600x314.jpg 600w, https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d-800x419.jpg 800w, https://www.aclu.org/wp-content/uploads/2024/05/70424f4c0d4ad921d1e27da6125a765d-1000x523.jpg 1000w sizes=(max-width: 1200px) 100vw, 1200px / /a /div div class=wp-link__title a href=https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired target=_blank How Artificial Intelligence Might Prevent You From Getting Hired /a /div div class=wp-link__description a href=https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired target=_blank tabindex=-1 p class=is-size-7-mobile is-size-6-tabletAI-based tools are used throughout hiring processes, increasing the odds of discrimination in the workplace./p /a /div div class=wp-link__source p-4 px-6-tablet a href=https://www.aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired target=_blank tabindex=-1 p class=is-size-7Source: American Civil Liberties Union/p /a /div /div pWhen tests and tools that have a long history of problems are combined with new technologies like AI, risks of harm only increase, exacerbating harmful barriers to employment based on race, gender, disability, and other protected characteristics. While the harm of racial discrimination in employment tests has long been recognized and challenged, there has been less awareness about how these tests impact applicants who, in addition to facing racial discrimination, face discrimination based on their disabilities./p pThe use of personality assessments in hiring processes has become increasingly common. Yet these tests often ask general questions that may have little to do with the ability to do the job and capture traits that are directly linked with characteristics commonly associated with autism and mental health conditions such as depression and anxiety. This creates a high risk that qualified workers with these disabilities will be disadvantaged compared to other workers and may be unfairly and illegally screened out./p div class=mp-md wp-link div class=wp-link__img-wrapper a href=https://www.aclu.org/know-your-rights/know-your-digital-rights-digital-discrimination-in-hiring target=_blank tabindex=-1 img width=750 height=375 src=https://www.aclu.org/wp-content/uploads/2023/11/9adf74e5819f7726f6dd759d712b47eb.jpg class=attachment-4x3_full size-4x3_full alt=A graphic featuring a diverse group of individuals. decoding=async loading=lazy srcset=https://www.aclu.org/wp-content/uploads/2023/11/9adf74e5819f7726f6dd759d712b47eb.jpg 750w, https://www.aclu.org/wp-content/uploads/2023/11/9adf74e5819f7726f6dd759d712b47eb-400x200.jpg 400w, https://www.aclu.org/wp-content/uploads/2023/11/9adf74e5819f7726f6dd759d712b47eb-600x300.jpg 600w sizes=(max-width: 750px) 100vw, 750px / /a /div div class=wp-link__title a href=https://www.aclu.org/know-your-rights/know-your-digital-rights-digital-discrimination-in-hiring target=_blank Know Your Rights | Know Your Digital Rights: Digital Discrimination in Hiring /a /div div class=wp-link__description a href=https://www.aclu.org/know-your-rights/know-your-digital-rights-digital-discrimination-in-hiring target=_blank tabindex=-1 p class=is-size-7-mobile is-size-6-tabletEqual access to job opportunities is a core component of economic justice. Increasingly, employers are using automated tools in their hiring.../p /a /div div class=wp-link__source p-4 px-6-tablet a href=https://www.aclu.org/know-your-rights/know-your-digital-rights-digital-discrimination-in-hiring target=_blank tabindex=-1 p class=is-size-7Source: American Civil Liberties Union/p /a /div /div pTo push back, we a class=Hyperlink SCXW161865474 BCX0 href=https://www.aclu.org/documents/aclu-complaint-to-the-ftc-regarding-aon-consulting-inc target=_blank rel=noreferrer noopenerfiled a complaint/a to the Federal Trade Commission (FTC) against Aon, a major hiring technology vendor, alleging that Aon is deceptively marketing widely used online hiring tests as “bias-free” even though the tests discriminate against job seekers based on traits like their race or disability. The ACLU and co-counsel have also filed charges with the Equal Employment Opportunity Commission (EEOC) against both Aon and an employer that uses Aon’s assessments on behalf of a biracial (Black/white) autistic job applicant who was required to take Aon assessments as part of the employer’s hiring process./p pTwo Aon products, a “personality” assessment test and its automated video interviewing tool, which integrate algorithmic or AI-related features, are marketed to employers across industries as cost-effective, efficient, and less discriminatory than traditional methods of assessing workers and applicants. However, these products assess very general personality traits such as positivity, emotional awareness, liveliness, ambition, and drive that are not clearly job related or necessary for a specific job and can unfairly screen out people based on disabilities. The automated features of these tools exacerbate these fundamental problems, particularly as Aon incorporated artificial intelligence elements in its video interviewing tool that are also likely to discriminate based on disability, race, and other protected characteristics./p pCognitive ability assessments, another staple in hiring, must also be subject to scrutiny, as they have long been shown to disadvantage Black job candidates and other candidates of color and may also unfairly exclude individuals based on disability. These tests, touted to measure aspects of memory, as well as several others it markets, have racial disparities in performance./p pFor autistic and other neurodivergent job applicants and applicants of color, cognitive ability assessments pose a significant barrier to employment. Not only do they fail to accommodate diverse needs, but they also perpetuate discrimination based on race, disability, and other traits. Employers should not use assessments that carry a high risk of discrimination. Employers risk screening out people who could be successful employees, impacting diversity in the workplace, and could face legal liability, even where the assessments are designed and administered by third-party vendors. Employers have a legal obligation to thoroughly vet any assessments they use for compliance with anti-discrimination laws, and if they decide to use an assessment, they must provide meaningful notice so that disabled workers can make an informed choice whether to seek accommodations or alternative processes./p pBut vendors must also be accountable for the tools they market. Employers can hold vendors accountable by demanding that vendors truly design their products to be inclusive – including by incorporating the perspectives and experiences of people with disabilities and other protected groups into their design process #8212; and conduct thorough auditing for discrimination based on race, disability and other protected characteristics. They can also demand transparency and decline to purchase their products if they fail to do so. And vendors can and should also be held legally accountable for their discriminatory products and deceptively marketing them. As the EEOC recently a href=https://www.eeoc.gov/litigation/briefs/mobley-v-workday-incargued/a in a federal case about discrimination in an online hiring product, vendors can be held accountable under employment discrimination laws, and our FTC complaint should serve as notice to vendors that we will seek to hold them accountable under consumer protection laws as well./p pAs the hiring landscape continues to change and job applicants face new hiring tools, we must strive for a future where skills and potential, not bias, determines our opportunities. The ACLU stands ready to defend the rights of individuals wronged by discriminatory practices. Together, we can dismantle discriminatory barriers and build a more inclusive workforce for all./p

AI-directed drones could help find lost hikers faster

If a hiker gets lost in the rugged Scottish Highlands, rescue teams sometimes send up a drone to search for clues of the individual’s route—trampled vegetation, dropped clothing, food wrappers. But with vast terrain to cover and limited battery life, picking the right area to search is critical.

Traditionally, expert drone pilots use a combination of intuition and statistical “search theory”—a strategy with roots in World War II–era hunting of German submarines—to prioritize certain search locations over others. Jan-Hendrik Ewers and a team from the University of Glasgow recently set out to see if a machine-learning system could do better.

Ewers grew up skiing and hiking in the Highlands, giving him a clear idea of the complicated challenges involved in rescue operations there. “There wasn’t much to do growing up, other than spending time outdoors or sitting in front of my computer,” he says. “I ended up doing a lot of both.”

To start, Ewers took data sets of search-and-rescue cases from around the world, which include details such as an individual’s age, whether they were hunting, horseback riding, or hiking, and if they suffered from dementia, along with information about the location where the person was eventually found—by water, buildings, open ground, trees, or roads. He trained an AI model with this data, in addition to geographical data from Scotland. The model runs millions of simulations to reveal the routes a missing person would be most likely to take under the specific circumstances. The result is a probability distribution—a heat map of sorts—indicating the priority search areas. 

With this kind of probability map, the team showed that deep learning could be used to design more efficient search paths for drones. In research published last week on arXiv, which has not yet been peer reviewed, the team tested its algorithm against two common search patterns: the “lawn mower,” in which a drone would fly over a target area in a series of simple stripes, and an algorithm similar to Ewers’s but less adept at working with probability distribution maps.

In virtual testing, Ewers’s algorithm beat both of those approaches on two key measures: the distance a drone would have to fly to locate the missing person, and the likelihood that the person was found. While the lawn mower and the existing algorithmic approach found the person 8% of the time and 12% of the time, respectively, Ewers’s approach found them 19% of the time. If it proves successful in real rescue situations, the new system could speed up response times, and save more lives, in scenarios where every minute counts. 

“The search-and-rescue domain in Scotland is extremely varied, and also quite dangerous,” Ewers says. Emergencies can arise in thick forests on the Isle of Arran, the steep mountains and slopes around the Cairngorm Plateau, or the faces of Ben Nevis, one of the most revered but dangerous rock climbing destinations in Scotland. “Being able to send up a drone and efficiently search with it could potentially save lives,” he adds.

Search-and-rescue experts say that using deep learning to design more efficient drone routes could help locate missing persons faster in a variety of wilderness areas, depending on how well suited the environment is for drone exploration (it’s harder for drones to explore dense canopy than open brush, for example).

“That approach in the Scottish Highlands certainly sounds like a viable one, particularly in the early stages of search when you’re waiting for other people to show up,” says David Kovar, a director at the US National Association for Search and Rescue in Williamsburg, Virginia, who has used drones for everything from disaster response in California to wilderness search missions in New Hampshire’s White Mountains. 

But there are caveats. The success of such a planning algorithm will hinge on how accurate the probability maps are. Overreliance on these maps could mean that drone operators spend too much time searching the wrong areas. 

Ewers says a key next step to making the probability maps as accurate as possible will be obtaining more training data. To do that, he hopes to use GPS data from more recent rescue operations to run simulations, essentially helping his model to understand the connections between the location where someone was last seen and where they were ultimately found. 

Not all rescue operations contain rich enough data for him to work with, however. “We have this problem in search and rescue where the training data is extremely sparse, and we know from machine learning that we want a lot of high-quality data,” Ewers says. “If an algorithm doesn’t perform better than a human, you are potentially risking someone’s life.”

Drones are becoming more common in the world of search and rescue. But they are still a relatively new technology, and regulations surrounding their use are still in flux.

In the US, for example, drone pilots are required to have a constant line of sight between them and their drone. In Scotland, meanwhile, operators aren’t permitted to be more than 500 meters away from their drone. These rules are meant to prevent accidents, such as a drone falling and endangering people, but in rescue settings such rules severely curtail ground rescuers’ ability to survey for clues. 

“Oftentimes we’re facing a regulatory problem rather than a technical problem,” Kovar says. “Drones are capable of doing far more than we’re allowed to use them for.”

Ewers hopes that models like his might one day expand the capabilities of drones even more. For now, he is in conversation with the Police Scotland Air Support Unit to see what it would take to test and deploy his system in real-world settings. 

A NIST AI RMF Summary – Source: securityboulevard.com

a-nist-ai-rmf-summary-–-source:-securityboulevard.com

Source: securityboulevard.com – Author: Cameron Delfin Artificial intelligence (AI) is revolutionizing numerous sectors, but its integration into cybersecurity is particularly transformative. AI enhances threat detection, automates responses, and predicts potential security breaches, offering a proactive approach to cybersecurity. However, it also introduces new challenges, such as AI-driven attacks and the complexities of securing AI systems. […]

La entrada A NIST AI RMF Summary – Source: securityboulevard.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

Once a Sheriff’s Deputy in Florida, Now a Source of Disinformation From Russia

In 2016, Russia used an army of trolls to interfere in the U.S. presidential election. This year, an American given asylum in Moscow may be accomplishing much the same thing all by himself.

© Alexander Zemlianichenko/Associated Press

John Mark Dougan, who has been granted asylum in Moscow, above, has become a key player in the Kremlin’s information operations against the West.

AI-readiness for C-suite leaders

Generative AI, like predictive AI before it, has rightly seized the attention of business executives. The technology has the potential to add trillions of dollars to annual global economic activity, and its adoption for business applications is expected to improve the top or bottom lines—or both—at many organizations.

While generative AI offers an impressive and powerful new set of capabilities, its business value is not a given. While some powerful foundational models are open to public use, these do not serve as a differentiator for those looking to get ahead of the competition and unlock AI’s full potential. To gain those advantages, organizations must look to enhance AI models with their own data to create unique business insights and opportunities.

Preparing an organization’s data for AI, however, unlocks a new set of challenges and opportunities. This MIT Technology Review Insights survey report investigates whether companies’ data foundations are ready to garner benefits from generative AI, as well as the challenges of building the necessary data infrastructure for this technology. In doing so, it draws on insights from a survey of 300 C-suite executives and senior technology leaders, as well on in-depth interviews with four leading experts.

Its key findings include the following:

Data integration is the leading priority for AI readiness. In our survey, 82% of C-suite and other senior executives agree that “scaling AI or generative AI use cases to create business value is a top priority for our organization.” The number-one challenge in achieving that AI readiness, survey respondents say, is data integration and pipelines (45%). Asked about challenging aspects of data integration, respondents named four: managing data volume, moving data from on-premises to the cloud, enabling real-time access, and managing changes to data.

Executives are laser-focused on data management challenges—and lasting solutions. Among survey respondents, 83% say that their “organization has identified numerous sources of data that we must bring together in order to enable our AI initiatives.” Though data-dependent technologies of recent decades drove data integration and aggregation programs, these were typically tailored to specific use cases. Now, however, companies are looking for something more scalable and use-case agnostic: 82% of respondents are prioritizing solutions “that will continue to work in the future, regardless of other changes to our data strategy and partners.”

Data governance and security is a top concern for regulated sectors. Data governance and security concerns are the second most common data readiness challenge (cited by 44% of respondents). Respondents from highly regulated sectors were two to three times more likely to cite data governance and security as a concern, and chief data officers (CDOs) say this is a challenge at twice the rate of their C-suite peers. And our experts agree: Data governance and security should be addressed from the beginning of any AI strategy to ensure data is used and accessed properly.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Download the full report.

OpenAI Says It Has Begun Training a New Flagship A.I. Model

The advanced A.I. system would succeed GPT-4, which powers ChatGPT. The company has also created a new safety committee to address A.I.’s risks.

© Jason Redmond/Agence France-Presse — Getty Images

As Sam Altman’s OpenAI trains its new model, its new Safety and Security committee will work to hone policies and processes for safeguarding the technology, the company said.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com

attempts-to-regulate-ai’s-hidden-hand-in-americans’-lives-flounder-in-us-statehouses-–-source:-wwwsecurityweek.com

Views: 0Source: www.securityweek.com – Author: Associated Press The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide. Only one of seven bills aimed at preventing AI’s penchant to discriminate when making […]

La entrada Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com

averlon-emerges-from-stealth-mode-with-$8-million-in-funding-–-source:-wwwsecurityweek.com

Views: 0Source: www.securityweek.com – Author: Ionut Arghire Cloud security startup Averlon has emerged from stealth mode with $8 million in seed funding, which brings the total raised by the company to $10.5 million. The new investment round was led by Voyager Capital, with additional funding from Outpost Ventures, Salesforce Ventures, and angel investors. Co-founded by […]

La entrada Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent – Source: www.securityweek.com

us-intelligence-agencies’-embrace-of-generative-ai-is-at-once-wary-and-urgent-–-source:-wwwsecurityweek.com

Views: 0Source: www.securityweek.com – Author: Associated Press Long before generative AI’s boom, a Silicon Valley firm contracted to collect and analyze non-classified data on illicit Chinese fentanyl trafficking made a compelling case for its embrace by U.S. intelligence agencies. The operation’s results far exceeded human-only analysis, finding twice as many companies and 400% more people […]

La entrada US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent – Source: www.securityweek.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

The Rise and Risks of Shadow AI

 

Shadow AI, the internal
use of AI tools and services without the enterprise oversight teams expressly
knowing about it (ex. IT, legal,
cybersecurity, compliance, and privacy teams, just to name a few), is becoming a problem!

Workers are flocking to use 3rd party AI services
(ex. websites like ChatGPT) but also there are often savvy technologists who
are importing models and building internal AI systems (it really is not that
difficult) without telling the enterprise ops teams. Both situations are
increasing and many organizations are blind to the risks.

According to a recent Cyberhaven
report
:

  • AI is Accelerating:  Corporate data
    input into AI tools surged by 485%
  • Increased Data Risks:  Sensitive data
    submission jumped 156%, led by customer support data
  • Threats are Hidden:  Majority of AI use
    on personal accounts lacks enterprise safeguards
  • Security Vulnerabilities:  Increased
    risk of data breaches and exposure through AI tool use.


The risks are real and
the problem is growing.

Now is the time to get ahead of this problem.
1. Establish policies for use and
development/deployment

2. Define and communicate an AI Ethics posture
3. Incorporate cybersecurity/privacy/compliance
teams early into such programs

4. Drive awareness and compliance by including
these AI topics in the employee/vendor training


Overall, the goal is to build awareness and
collaboration. Leveraging AI can bring tremendous benefits, but should be done
in a controlled way that aligns with enterprise oversight requirements.


"Do what is great, while it is small" -
A little effort now can help avoid serious mishaps in the future!

The post The Rise and Risks of Shadow AI appeared first on Security Boulevard.

OpenAI backpedals on scandalous tactic to silence former employees

OpenAI CEO Sam Altman.

Enlarge / OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Former and current OpenAI employees received a memo this week that the AI company hopes to end the most embarrassing scandal that Sam Altman has ever faced as OpenAI's CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that employees since at least 2019 were pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism about OpenAI's work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

Read 22 remaining paragraphs | Comments

ScarJo vs. ChatGPT, Neuralink’s First Patient Opens Up, and Microsoft’s A.I. PCs

“Did you ever think we would have a literal Avenger fighting back against the relentless march of A.I.? Because that’s sort of what this story is about.”

© Photo Illustration by The New York Times; Photo: Evan Agostini/Invision, via Associated Press

Anthropic’s Generative AI Research Reveals More About How LLMs Affect Security and Bias – Source: www.techrepublic.com

anthropic’s-generative-ai-research-reveals-more-about-how-llms-affect-security-and-bias-–-source:-wwwtechrepublic.com

Source: www.techrepublic.com – Author: Megan Crouse Because large language models operate using neuron-like structures that may link many different concepts and modalities together, it can be difficult for AI developers to adjust their models to change the models’ behavior. If you don’t know what neurons connect what concepts, you won’t know which neurons to change. […]

La entrada Anthropic’s Generative AI Research Reveals More About How LLMs Affect Security and Bias – Source: www.techrepublic.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

Personal AI Assistants and Privacy – Source: www.schneier.com

personal-ai-assistants-and-privacy-–-source:-wwwschneier.com

Source: www.schneier.com – Author: Bruce Schneier Microsoft is trying to create a personal digital assistant: At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records […]

La entrada Personal AI Assistants and Privacy – Source: www.schneier.com se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

How the Internet of Things (IoT) became a dark web target – and what to do about it – Source: www.cybertalk.org

how-the-internet-of-things-(iot)-became-a-dark-web-target-–-and-what-to-do-about-it-–-source:-wwwcybertalk.org

Source: www.cybertalk.org – Author: slandau By Antoinette Hodes, Office of the CTO, Check Point Software Technologies. The dark web has evolved into a clandestine marketplace where illicit activities flourish under the cloak of anonymity. Due to its restricted accessibility, the dark web exhibits a decentralized structure with minimal enforcement of security controls, making it a […]

La entrada How the Internet of Things (IoT) became a dark web target – and what to do about it – Source: www.cybertalk.org se publicó primero en CISO2CISO.COM & CYBER SECURITY GROUP.

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

Scarlett Johansson attends the Golden Heart Awards in 2023.

Enlarge / Scarlett Johansson attends the Golden Heart Awards in 2023. (credit: Sean Zanni / Contributor | Patrick McMullan)

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson's voice when seeking an actor for ChatGPT's "Sky" voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman's defense against Johansson's claims that Sky was made to sound "eerily similar" to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson has a strong case should she decide to sue OpenAI for violating her right to publicity, which gives the actress exclusive rights to the commercial use of her likeness.

Read 40 remaining paragraphs | Comments

Noise-canceling headphones use AI to let a single voice through

Modern life is noisy. If you don’t like it, noise-canceling headphones can reduce the sounds in your environment. But they muffle sounds indiscriminately, so you can easily end up missing something you actually want to hear.

A new prototype AI system for such headphones aims to solve this. Called Target Speech Hearing, the system gives users the ability to select a person whose voice will remain audible even when all other sounds are canceled out.

Although the technology is currently a proof of concept, its creators say they are in talks to embed it in popular brands of noise-canceling earbuds and are also working to make it available for hearing aids.

“Listening to specific people is such a fundamental aspect of how we communicate and how we interact in the world with other humans,” says Shyam Gollakota, a professor at the University of Washington, who worked on the project. “But it can get really challenging, even if you don’t have any hearing loss issues, to focus on specific people when it comes to noisy situations.” 

The same researchers previously managed to train a neural network to recognize and filter out certain sounds, such as babies crying, birds tweeting, or alarms ringing. But separating out human voices is a tougher challenge, requiring much more complex neural networks.

That complexity is a problem when AI models need to work in real time in a pair of headphones with limited computing power and battery life. To meet such constraints, the neural networks needed to be small and energy efficient. So the team used an AI compression technique called knowledge distillation. This meant taking a huge AI model that had been trained on millions of voices (the “teacher”) and having it train a much smaller model (the “student”) to imitate its behavior and performance to the same standard.   

The student was then taught to extract the vocal patterns of specific voices from the surrounding noise captured by microphones attached to a pair of commercially available noise-canceling headphones.

To activate the Target Speech Hearing system, the wearer holds down a button on the headphones for several seconds while facing the person to be focused on. During this “enrollment” process, the system captures an audio sample from both headphones and uses this recording to extract the speaker’s vocal characteristics, even when there are other speakers and noises in the vicinity.

These characteristics are fed into a second neural network running on a microcontroller computer connected to the headphones via USB cable. This network runs continuously, keeping the chosen voice separate from those of other people and playing it back to the listener. Once the system has locked onto a speaker, it keeps prioritizing that person’s voice, even if the wearer turns away. The more training data the system gains by focusing on a speaker’s voice, the better its ability to isolate it becomes. 

For now, the system is only able to successfully enroll a targeted speaker whose voice is the only loud one present, but the team aims to make it work even when the loudest voice in a particular direction is not the target speaker.

Singling out a single voice in a loud environment is very tough, says Sefik Emre Eskimez, a senior researcher at Microsoft who works on speech and AI, but who did not work on the research. “I know that companies want to do this,” he says. “If they can achieve it, it opens up lots of applications, particularly in a meeting scenario.”

While speech separation research tends to be more theoretical than practical, this work has clear real-world applications, says Samuele Cornell, a researcher at Carnegie Mellon University’s Language Technologies Institute, who did not work on the research. “I think it’s a step in the right direction,” Cornell says. “It’s a breath of fresh air.”

Personal AI Assistants and Privacy

Microsoft is trying to create a personal digital assistant:

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

I wrote about this AI trust problem last year:

One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—­or at least like an animal. It will interact with the whole of your existence, just like another person would.

[…]

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.

We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

[…]

All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

We are going to need some sort of public AI to counterbalance all of these corporate AIs.

EDITED TO ADD (5/24): Lots of comments about Microsoft Recall and security:

This:

Because Recall is “default allow” (it relies on a list of things not to record) … it’s going to vacuum up huge volumes and heretofore unknown types of data, most of which are ephemeral today. The “we can’t avoid saving passwords if they’re not masked” warning Microsoft included is only the tip of that iceberg. There’s an ocean of data that the security ecosystem assumes is “out of reach” because it’s either never stored, or it’s encrypted in transit. All of that goes out the window if the endpoint is just going to…turn around and write it to disk. (And local encryption at rest won’t help much here if the data is queryable in the user’s own authentication context!)

This:

The fact that Microsoft’s new Recall thing won’t capture DRM content means the engineers do understand the risk of logging everything. They just chose to preference the interests of corporates and money over people, deliberately.

This:

Microsoft Recall is going to make post-breach impact analysis impossible. Right now IR processes can establish a timeline of data stewardship to identify what information may have been available to an attacker based on the level of access they obtained. It’s not trivial work, but IR folks can do it. Once a system with Recall is compromised, all data that has touched that system is potentially compromised too, and the ML indirection makes it near impossible to confidently identify a blast radius.

This:

You may be in a position where leaders in your company are hot to turn on Microsoft Copilot Recall. Your best counterargument isn’t threat actors stealing company data. It’s that opposing counsel will request the recall data and demand it not be disabled as part of e-discovery proceedings.

Meta says AI-generated election content is not happening at a “systemic level”

Meta has seen strikingly little AI-generated misinformation around the 2024 elections despite major votes in countries such as Indonesia, Taiwan, and Bangladesh, said the company’s president of global affairs, Nick Clegg, on Wednesday. 

“The interesting thing so far—I stress, so far—is not how much but how little AI-generated content [there is],” said Clegg during an interview at MIT Technology Review’s EmTech Digital conference in Cambridge, Massachusetts.  

“It is there; it is discernible. It’s really not happening on … a volume or a systemic level,” he said. Clegg said Meta has seen attempts at interference in, for example, the Taiwanese election, but that the scale of that interference is at a “manageable amount.” 

As voters will head to polls this year in more than 50 countries, experts have raised the alarm over AI-generated political disinformation and the prospect that malicious actors will use generative AI and social media to interfere with elections. Meta has previously faced criticism over its content moderation policies around past elections—for example, when it failed to prevent the January 6 rioters from organizing on its platforms. 

Clegg defended the company’s efforts at preventing violent groups from organizing, but he also stressed the difficulty of keeping up. “This is a highly adversarial space. You play Whack-a-Mole, candidly. You remove one group, they rename themselves, rebrand themselves, and so on,” he said. 

Clegg argued that compared with 2016, the company is now “utterly different” when it comes to moderating election content. Since then, it has removed over 200 “networks of coordinated inauthentic behavior,” he said. The company now relies on fact checkers and AI technology to identify unwanted groups on its platforms. 

Earlier this year, Meta announced it would label AI-generated images on Facebook, Instagram, and Threads. Meta has started adding visible markers to such images, as well as invisible watermarks and metadata in the image file. The watermarks will be added to images created using Meta’s generative AI systems or ones that carry invisible industry-standard markers. The company says its measures are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.

But at the same time, Clegg admitted that tools to detect AI-generated content are still imperfect and immature. Watermarks in AI systems are not adopted industry-wide, and they are easy to tamper with. They are also hard to implement robustly in AI-generated text, audio, and video. 

Ultimately that should not matter, Clegg said, because Meta’s systems should be able to catch and detect mis- and disinformation regardless of its origins. 

“AI is a sword and a shield in this,” he said.

Clegg also defended the company’s decision to allow ads claiming that the 2020 US election was stolen, noting that these kinds of claims are common throughout the world and saying it’s “not feasible” for Meta to relitigate past elections. Just this month, eight state secretaries of state wrote a letter to Meta CEO Mark Zuckerberg arguing that the ads could still be dangerous, and that they have the potential to further threaten public trust in elections and the safety of individual election workers.

You can watch the full interview with Nick Clegg and MIT Technology Review executive editor Amy Nordrum below.

OpenAI’s latest blunder shows the challenges facing Chinese AI models

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Last week’s release of GPT-4o, a new AI “omnimodel” that you can interact with using voice, text, or video, was supposed to be a big moment for OpenAI. But just days later, it feels as if the company is in big trouble. From the resignation of most of its safety team to Scarlett Johansson’s accusation that it replicated her voice for the model against her consent, it’s now in damage-control mode. 

Add to that another thing OpenAI fumbled with GPT-4o: the data it used to train its tokenizer—a tool that helps the model parse and process text more efficiently—is polluted by Chinese spam websites. As a result, the model’s Chinese token library is full of phrases related to pornography and gambling. This could worsen some problems that are common with AI models: hallucinations, poor performance, and misuse. 

I wrote about it on Friday after several researchers and AI industry insiders flagged the problem. They took a look at GPT-4o’s public token library, which has been significantly updated with the new model to improve support of non-English languages, and saw that more than 90 of the 100 longest Chinese tokens in the model are from spam websites. These are phrases like “_free Japanese porn video to watch,” “Beijing race car betting,” and “China welfare lottery every day.”

Anyone who reads Chinese could spot the problem with this list of tokens right away. Some such phrases inevitably slip into training data sets because of how popular adult content is online, but for them to account for 90% of the Chinese language used to train the model? That’s alarming.

“It’s an embarrassing thing to see as a Chinese person. Is that just how the quality of the [Chinese] data is? Is it because of insufficient data cleaning or is the language just like that?” says Zhengyang Geng, a PhD student in computer science at Carnegie Mellon University. 

It could be tempting to draw a conclusion about a language or a culture from the tokens OpenAI chose for GPT-4o. After all, these are selected as commonly seen and significant phrases from the respective languages. There’s an interesting blog post by a Hong Kong–based researcher named Henry Luo, who queried the longest GPT-4o tokens in various different languages and found that they seem to have different themes. While the tokens in Russian reflect language about the government and public institutions, the tokens in Japanese have a lot of different ways to say “thank you.”

But rather than reflecting the differences between cultures or countries, I think this explains more about what kind of training data is readily available online, and the websites OpenAI crawled to feed into GPT-4o.

After I published the story, Victor Shih, a political science professor at the University of California, San Diego, commented on it on X: “When you try not [to] train on Chinese state media content, this is what you get.”

It’s half a joke, and half a serious point about the two biggest problems in training large language models to speak Chinese: the readily available data online reflects either the “official,” sanctioned way of talking about China or the omnipresent spam content that drowns out real conversations.

In fact, among the few long Chinese tokens in GPT-4o that aren’t either pornography or gambling nonsense, two are “socialism with Chinese characteristics” and “People’s Republic of China.” The presence of these phrases suggests that a significant part of the training data actually is from Chinese state media writings, where formal, long expressions are extremely common.

OpenAI has historically been very tight-lipped about the data it uses to train its models, and it probably will never tell us how much of its Chinese training database is state media and how much is spam. (OpenAI didn’t respond to MIT Technology Review’s detailed questions sent on Friday.)

But it is not the only company struggling with this problem. People inside China who work in its AI industry agree there’s a lack of quality Chinese text data sets for training LLMs. One reason is that the Chinese internet used to be, and largely remains, divided up by big companies like Tencent and ByteDance. They own most of the social platforms and aren’t going to share their data with competitors or third parties to train LLMs. 

In fact, this is also why search engines, including Google, kinda suck when it comes to searching in Chinese. Since WeChat content can only be searched on WeChat, and content on Douyin (the Chinese TikTok) can only be searched on Douyin, this data is not accessible to a third-party search engine, let alone an LLM. But these are the platforms where actual human conversations are happening, instead of some spam website that keeps trying to draw you into online gambling.

The lack of quality training data is a much bigger problem than the failure to filter out the porn and general nonsense in GPT-4o’s token-training data. If there isn’t an existing data set, AI companies have to put in significant work to identify, source, and curate their own data sets and filter out inappropriate or biased content. 

It doesn’t seem OpenAI did that, which in fairness makes some sense, given that people in China can’t use its AI models anyway. 

Still, there are many people living outside China who want to use AI services in Chinese. And they deserve a product that works properly as much as speakers of any other language do. 

How can we solve the problem of the lack of good Chinese LLM training data? Tell me your idea at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. China launched an anti-dumping investigation into imports of polyoxymethylene copolymer—a widely used plastic in electronics and cars—from the US, the EU, Taiwan, and Japan. It’s widely seen as a response to the new US tariff announced on Chinese EVs. (BBC)

  • Meanwhile, Latin American countries, including Mexico, Chile, and Brazil, have increased tariffs on Chinese-imported steel, testing China’s relationship with the region. (Bloomberg $)

2. China’s solar-industry boom is incentivizing farmers to install solar panels and make some extra cash by selling the electricity they generate. (Associated Press)

3. Hedging against the potential devaluation of the RMB, Chinese buyers are pushing the price of gold to all-time highs. (Financial Times $)

4. The Shanghai government set up a pilot project that allows data to be transferred out of China without going through the much-dreaded security assessments, a move that has been sought by companies like Tesla. (Reuters $)

5. China’s central bank fined seven businesses—including a KFC and branches of state-owned corporations—for rejecting cash payments. The popularization of mobile payment has been a good thing, but the dwindling support for cash is also making life harder for people like the elderly and foreign tourists. (Business Insider $)

6. Alibaba and Baidu are waging an LLM price war in China to attract more users. (Bloomberg $

7. The Chinese government has sanctioned Mike Gallagher, a former Republican congressman who chaired the Select Committee on China and remains a fierce critic of Beijing. (NBC News)

Lost in translation

China’s National Health Commission is exploring the relaxation of stringent rules around human genetic data to boost the biotech industry, according to the Chinese publication Caixin. A regulation enacted in 1998 required any research that involves the use of this data to clear an approval process. And there’s even more scrutiny if the research involves foreign institutions. 

In the early years of human genetic research, the regulation helped prevent the nonconsensual collection of DNA. But as the use of genetic data becomes increasingly important in discovering new treatments, the industry has been complaining about the bureaucracy, which can add an extra two to four months to research projects. Now the government is holding discussions on how to revise the regulation, potentially lifting the approval process for smaller-scale research and more foreign entities, as part of a bid to accelerate the growth of biotech research in China.

One more thing

Did you know that the Beijing Capital International Airport has been employing birds of prey to chase away other birds since 2019? This month, the second generation of Beijing’s birdy employees started their work driving away the migratory birds that could endanger aircraft. The airport even has different kinds of raptors—Eurasian hobbies, Eurasian goshawks, and Eurasian sparrowhawks—to deal with the different bird species that migrate to Beijing at different times.

Microsoft AI “Recall” feature records everything, secures far less

Developing an AI-powered threat to security, privacy, and identity is certainly a choice, but it’s one that Microsoft was willing to make this week at its “Build” developer conference.

On Monday, the computing giant unveiled a new line of PCs that integrate Artificial Intelligence (AI) technology to promise faster speeds, enhanced productivity, and a powerful data collection and search tool that screenshots a device’s activity—including password entry—every few seconds.

This is “Recall,” a much-advertised feature within what Microsoft is calling its “Copilot+ PCs,” a reference to the AI assistant and companion which the company released in late 2023. With Recall on the new Copilot+ PCs, users no longer need to manage and remember their own browsing and chat activity. Instead, by regularly taking and storing screenshots of a user’s activity, the Copilot+ PCs can comb through that visual data to deliver answers to natural language questions, such as “Find the site with the white sneakers,” and “blue pantsuit with a sequin lace from abuelita.”

As any regularly updated repository of device activity poses an enormous security threat—imagine hackers getting access to a Recall database and looking for, say, Social Security Numbers, bank account info, and addresses—Microsoft has said that all Recall screenshots are encrypted and stored locally on a device.

But, in terms of security, that’s about all users will get, as Recall will not detect and obscure passwords, shy away from recording pornographic material, or turn a blind eye to sensitive information.

According to Microsoft:

“Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

The consequences of such a system could be enormous.

With Recall, a CEO’s personal laptop could become an even more enticing target for hackers equipped with infostealers, a journalist’s protected sources could be within closer grasp of an oppressive government that isn’t afraid to target dissidents with malware, and entire identities could be abused and impersonated by a separate device user.

In fact, Recall seems to only work best in a one-device-per-person world. Though Microsoft explained that its Copilot+ PCs will only record Recall snapshots to specific device accounts, plenty of people share devices and accounts. For the domestic abuse survivor who is forced to share an account with their abuser, for the victim of theft who—like many people—used a weak device passcode that can easily be cracked, and for the teenager who questions their identity on the family computer, Recall could be more of a burden than a benefit.

For Malwarebytes General Manager of Consumer Business Unit Mark Beare, Recall raises yet another issue:

“I worry that we are heading to a social media 2.0 like world.”

When users first raced to upload massive quantities of sensitive, personal data onto social media platforms more than 10 years ago, they couldn’t predict how that data would be scrutinized in the future, or how it would be scoured and weaponized by cybercriminals, Beare said.

“With AI there will be a strong pull to put your full self into a model (so it knows you),” Beare said. “I don’t think it’s easy to understand all the negative aspects of what can happen from doing that and how bad actors can benefit.”


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Palo Alto Networks Looks for Growth Amid Changing Cybersecurity Market

Palo Alto Networks

After years of hypergrowth, Palo Alto Networks’ (PANW) revenue growth has been slowing, suggesting major shifts in cybersecurity spending patterns and raising investor concerns about the cybersecurity giant’s long-term growth potential. Even as overall cybersecurity spending is predicted to remain strong, Palo Alto’s revenue growth has dropped to roughly half of the 30% growth rate investors have enjoyed for the last several years. Also Read: Digital Transformation Market to grow to $1247.5 Billion by 2026 Those concerns came to a head in February, when Palo Alto’s stock plunged 28% in a single day after the company slashed its growth outlook amid a move to “platformization,” with the company essentially giving away some products in hopes of luring more customers to its broader platform. Investor caution continued yesterday after the company merely reaffirmed its financial guidance, suggesting the possibility of a longer road back to hypergrowth. PANW shares were down 3% in recent trading after initially falling 10% after the company’s latest earnings report was released yesterday. Fortinet (FTNT), Palo Alto’s long-term network security rival, is also struggling amid cybersecurity market uncertainty, as analysts expect the company’s growth rate to slow from greater than 30% to around 10%.

SIEM, AI Signal Major Market Shifts

The changes in cybersecurity spending patterns show up most clearly in SIEM market consolidation and AI cybersecurity tools. Buyers may be waiting to see what cybersecurity vendors do with AI. On the company’s earnings call late Monday, Palo Alto CEO Nikesh Arora told analysts that he expects the company “will be first to market with capabilities to protect the range of our customers' AI security needs.” Seismic changes in the market for security information and event management (SIEM) systems are another sign of a rapidly changing cybersecurity market. Cisco’s (CSCO) acquisition of Splunk in March was just the start of major consolidation among legacy SIEM vendors. Last week, LogRhythm and Exabeam announced merger plans, and on the same day Palo Alto announced plans to acquire QRadar assets from IBM. AI and platformization factored strongly into those announcements. Palo Alto will transition QRadar customers to its Cortex XSIAM next-gen security operations (SOC) platform, and Palo Alto will incorporate IBM’s watsonx large language models (LLMs) in Cortex XSIAM “to deliver additional Precision AI solutions.” Palo Alto will also become IBM’s preferred cybersecurity partner across cloud, network and SOC. Forrester analysts said of the Palo Alto-IBM deal, “This is the biggest concession of a SIEM vendor to an XDR vendor so far and signals a sea change for the threat detection and response market. Security buyers may be finally getting the SIEM alternative they’ve been seeking for years.” The moves may yet be enough to return Palo Alto to better-than-expected growth, but one data point on Monday’s earnings call suggests buyers may be cautious. “We have initiated way more conversations in our platformization than we expected,” said Arora. “If meetings were a measure of outcome, they have gone up 30%, and a majority of them have been centered on platform opportunities.” It remains to be seen if sales will follow the same growth trajectory as meetings. For now, it’s clear that even as the overall cybersecurity market remains strong, the undercurrents suggest rapid changes in where that money is going. Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

AI’s Black Boxes Just Got a Little Less Mysterious

Researchers at the A.I. company Anthropic claim to have found clues about the inner workings of large language models, possibly helping to prevent their misuse and to curb their potential threats.

© Marissa Leshnov for The New York Times

Anthropic researchers found that turning certain features on or off in the company’s chatbot could change how the A.I. system behaved.

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes

ICO Warns, Chat GPT, Chat Bot

UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products. The warning comes on the back of the conclusion of an investigation from the U.K.’s Information Commissioner’s Office (ICO) into Snap, Inc.'s launch of the ‘My AI’ chatbot. The investigation focused on the company's approach to assessing data protection risks. The ICO's early actions underscore the importance of protecting privacy rights in the realm of generative AI. In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligations of proper evaluation into the data protection risks associated with its latest chatbot integration. My AI was an experimental chatbot built into the Snapchat app that has 414 million daily active users, who on a daily average share over 4.75 billion Snaps. The My AI bot uses OpenAI's GPT technology to answer questions, provide recommendations and chat with users. It can respond to typed or spoken information and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers since February 27, 2023, “My AI” was later released to all Snapchat users on April 19. The ICO issued a Preliminary Enforcement Notice to Snap on October 6, over “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK including children aged 13 to 17. “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
On the basis of the ICO’s investigation that followed, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’. Snap demonstrated to the ICO that it had implemented suitable mitigations. “The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said. Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” The social media platform has integrated safeguards and tools like blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI. The ICO’s investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals' data privacy and protection rights. The final Commissioner’s decision regarding Snap's ‘My AI’ chatbot will be published in the coming weeks. Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.
❌