Today — 1 June 2024

Google Rolls Back A.I. Search Feature After Flubs and Flaws

1 June 2024 at 05:04
Google appears to have turned off its new A.I. Overviews for a number of searches as it works to minimize errors.

© Jeff Chiu/Associated Press

Sundar Pichai, Google’s chief executive, introduced A.I. Overviews, an A.I. feature in its search engine, last month.

Google’s A.I. Search Leaves Publishers Scrambling

Since Google overhauled its search engine, publishers have tried to assess the danger to their brittle business models while calling for government intervention.

© Jason Henry for The New York Times

Google’s chief executive, Sundar Pichai, last year. A new A.I.-generated feature in Google search results “is greatly detrimental to everyone apart from Google,” a newspaper executive said.

The Cassandra of American intelligence

By: chavenet
1 June 2024 at 03:53
Intelligence analysis is a notoriously difficult craft. Practitioners have to make predictions and assessments with limited information, under huge time pressure, on issues where the stakes involve millions of lives and the fates of nations. If this small bureau tucked in the State Department's Foggy Bottom headquarters has figured out some tricks for doing it better, those insights may not just matter for intelligence, but for any job that requires making hard decisions under uncertainty. from The obscure federal intelligence bureau that got Vietnam, Iraq, and Ukraine right [Vox]
Yesterday — 31 May 2024

Andariel APT Using DoraRAT and Nestdoor Malware to Spy on South Korean Businesses


Researchers have uncovered new attacks by a North Korean advanced persistent threat actor – the Andariel APT group – targeting South Korean corporations and other organizations. The victims include educational institutions and companies in the manufacturing and construction sectors. The attackers employed keyloggers, infostealers, and proxy tools alongside backdoors to control and extract data from compromised systems, said researchers at the AhnLab Security Intelligence Center (ASEC).

The malware used in these attacks includes strains previously attributed to the Andariel APT group, including the backdoor "Nestdoor." Additional tools include web shells and proxy tools linked to the North Korean Lazarus group, now modified compared to earlier versions. Researchers first observed a confirmed attack case in which malware was distributed via a web server running an outdated 2013 version of Apache Tomcat, which is vulnerable to various attacks. "The threat actor used the web server to install backdoors, proxy tools, etc.," the researchers said.

Apache Tomcat compromised to spread malware by Andariel APT. (Credit: AhnLab)

Malware Used by Andariel APT in this Campaign

The first of the two malware strains used in the latest campaign was Nestdoor, a remote access trojan (RAT) that has been active since May 2022. This RAT can execute commands from the threat actor to control infected systems. Nestdoor has been found in numerous Andariel attacks, including those exploiting the VMware Horizon product’s Log4Shell vulnerability (CVE-2021-44228). The malware is developed in C++ and features capabilities such as file upload/download, reverse shell, command execution, keylogging, clipboard logging, and proxy functionality. A specific case in 2022 involved Nestdoor being distributed alongside TigerRAT using the same command-and-control (C&C) server. Another incident in early 2024 saw Nestdoor disguised as an OpenVPN installer; this version maintained persistence via the Task Scheduler and communicated with a C&C server.

The Andariel APT group has been developing new malware strains in the Go language for each campaign. Dora RAT, a recent discovery, is one such strain. The backdoor supports reverse shell and file transfer operations and exists in two forms: a standalone executable and a process injected into "explorer.exe." The latter variant arrives as a WinRAR SFX executable that includes an injector malware. Dora RAT has been signed with a valid certificate from a UK software developer in an attempt to make it look legitimate.

Additional Malware Strains

  • Keylogger/Cliplogger: Performs basic functions like logging keystrokes and clipboard contents, stored in the “%TEMP%” directory.
  • Stealer: Designed to exfiltrate files from the system, potentially handling large quantities of data.
  • Proxy: Includes both custom-created proxy tools and open-source Socks5 proxy tools. Some proxies are similar to those used by the Lazarus group in past attacks.
The Andariel group, part of the larger Lazarus umbrella, has shifted from targeting national security information to also pursuing financial gain. Last month, the South Korean National Police Agency revealed a targeted Andariel APT campaign aimed at stealing the country’s defense technology. The Andariel hackers gained access to defense industry data by compromising an employee account that was used to maintain servers of a defense industry partner. The hackers injected malicious code into the partner’s servers around October 2022 and extracted stored defense technology data. The breach exploited a loophole in how employees used their personal and professional email accounts for official system access.

Andariel APT's initial attack methodology primarily includes spear phishing, watering-hole attacks, and the exploitation of software vulnerabilities. Users should remain cautious with email attachments from unknown sources and executable files from websites. Security administrators are advised to keep software patched and updated, including operating systems and browsers, to mitigate the risk of malware infections, the researchers recommended.

IoCs to Watch for Signs of Andariel APT Attacks

IoCs to monitor for attacks from the Andariel APT group include:

MD5s:
  • 7416ea48102e2715c87edd49ddbd1526: Nestdoor – Recent attack case (nest.exe)
  • a2aefb7ab6c644aa8eeb482e27b2dbc4: Nestdoor – TigerRAT attack case (psfile.exe)
  • e7fd7f48fbf5635a04e302af50dfb651: Nestdoor – OpenVPN attack case (openvpnsvc.exe)
  • 33b2b5b7c830c34c688cf6ced287e5be: Nestdoor launcher (FirewallAPI.dll)
  • 4bc571925a80d4ae4aab1e8900bf753c: Dora RAT dropper (spsvc.exe)
  • 951e9fcd048b919516693b25c13a9ef2: Dora RAT dropper (emaupdate.exe)
  • fee610058c417b6c4b3054935b7e2730: Dora RAT injector (version.dll)
  • afc5a07d6e438880cea63920277ed270: Dora RAT injector (version.dll)
  • d92a317ef4d60dc491082a2fe6eb7a70: Dora RAT (emaupdate.exe)
  • 5df3c3e1f423f1cce5bf75f067d1d05c: Dora RAT (msload.exe)
  • 094f9a757c6dbd6030bc6dae3f8feab3: Dora RAT (emagent.exe)
  • 468c369893d6fc6614d24ea89e149e80: Keylogger/Cliplogger (conhosts.exe)
  • 5e00df548f2dcf7a808f1337f443f3d9: Stealer (msload.exe)

C&C servers:
  • 45.58.159[.]237:443: Nestdoor – Recent attack case
  • 4.246.149[.]227:1443: Nestdoor – TigerRAT attack case
  • 209.127.19[.]223:443: Nestdoor – OpenVPN attack case
  • kmobile.bestunif[.]com:443: Dora RAT
  • 206.72.205[.]117:443: Dora RAT
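For defenders who want to put hash indicators like these to work, a minimal sketch follows: it hashes every file under a directory and flags anything matching the published MD5s. The script is illustrative and not part of the ASEC advisory; only a subset of the indicators is shown, and the scan root is a placeholder to adjust for your environment.

import hashlib
import sys
from pathlib import Path

# MD5 indicators from the ASEC report on Andariel APT (subset; extend with
# the full list above).
KNOWN_BAD_MD5 = {
    "7416ea48102e2715c87edd49ddbd1526",  # Nestdoor - recent attack case (nest.exe)
    "e7fd7f48fbf5635a04e302af50dfb651",  # Nestdoor - OpenVPN attack case (openvpnsvc.exe)
    "4bc571925a80d4ae4aab1e8900bf753c",  # Dora RAT dropper (spsvc.exe)
    "468c369893d6fc6614d24ea89e149e80",  # Keylogger/Cliplogger (conhosts.exe)
}

def md5_of(path: Path) -> str:
    """Compute a file's MD5, reading in chunks to bound memory use."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: str) -> None:
    """Print any file under `root` whose MD5 matches a known indicator."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if md5_of(path) in KNOWN_BAD_MD5:
                    print(f"MATCH: {path}")
            except OSError:
                pass  # unreadable file; skip it

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")  # directory to sweep

A hash sweep like this only catches known samples, of course; the C&C addresses above are better monitored at the network perimeter.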

How AI Will Change Democracy

31 May 2024 at 07:04

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale.

It gets interesting when changes in degree can become changes in kind. High-speed trading is fundamentally different than regular human trading. AIs have invented fundamentally new strategies in the game of Go. Millions of AI-controlled social media accounts could fundamentally change the nature of propaganda.

It’s these sorts of changes and how AI will affect democracy that I want to talk about.

To start, I want to list some of AI’s core competences. First, it is really good as a summarizer. Second, AI is good at explaining things, teaching with infinite patience. Third, and related, AI can persuade. Propaganda is an offshoot of this. Fourth, AI is fundamentally a prediction technology. Predictions about whether turning left or right will get you to your destination faster. Predictions about whether a tumor is cancerous might improve medical diagnoses. Predictions about which word is likely to come next can help compose an email. Fifth, AI can assess. Assessing requires outside context and criteria. AI is less good at assessing, but it’s getting better. Sixth, AI can decide. A decision is a prediction plus an assessment. We are already using AI to make all sorts of decisions.

How these competences translate to actual useful AI systems depends a lot on the details. We don’t know how far AI will go in replicating or replacing human cognitive functions. Or how soon that will happen. In constrained environments it can be easy. AIs already play chess and Go better than humans. Unconstrained environments are harder. There are still significant challenges to fully AI-piloted automobiles. The technologist Jaron Lanier has a nice quote, that AI does best when “human activities have been done many times before, but not in exactly the same way.”

In this talk, I am going to be largely optimistic about the technology. I’m not going to dwell on the details of how the AI systems might work. Much of what I am talking about is still in the future. Science fiction, but not unrealistic science fiction.

Where I am going to be less optimistic—and more realistic—is about the social implications of the technology. Again, I am less interested in how AI will substitute for humans. I’m looking more at the second-order effects of those substitutions: How the underlying systems will change because of changes in speed, scale, scope and sophistication. My goal is to imagine the possibilities. So that we might be prepared for their eventuality.

And as I go through the possibilities, keep in mind a few questions: Will the change distribute or consolidate power? Will it make people more or less personally involved in democracy? What needs to happen before people will trust AI in this context? What could go wrong if a bad actor subverted the AI in this context? And what can we do, as security technologists, to help?

I am thinking about democracy very broadly. Not just representation, or elections. Democracy as a system for distributing decisions evenly across a population. It’s a way of converting individual preferences into group decisions. And that includes bureaucratic decisions.

To that end, I want to discuss five different areas where AI will affect democracy: Politics, lawmaking, administration, the legal system and, finally, citizens themselves.

I: AI-assisted politicians

I’ve already said that AIs are good at persuasion. Politicians will make use of that. Pretty much everyone talks about AI propaganda. Politicians will make use of that, too. But let’s talk about how this might go well.

In the past, candidates would write books and give speeches to connect with voters. In the future, candidates will also use personalized chatbots to directly engage with voters on a variety of issues. AI can also help fundraise. I don’t have to explain the persuasive power of individually crafted appeals. AI can conduct polls. There’s some really interesting work on having large language models assume different personas and answer questions from their points of view. Unlike people, AIs are always available, will answer thousands of questions without getting tired or bored and are more reliable. This won’t replace polls, but it can augment them. AI can assist human campaign managers by coordinating campaign workers, creating talking points, doing media outreach and assisting get-out-the-vote efforts. These are all things that humans already do. So there’s no real news there.

The changes are largely in scale. AIs can engage with voters, conduct polls and fundraise at a scale that humans cannot—for all sizes of elections. They can also assist in lobbying strategies. AIs could also potentially develop more sophisticated campaign and political strategies than humans can. I expect an arms race as politicians start using these sorts of tools. And we don’t know if the tools will favor one political ideology over another.

More interestingly, future politicians will largely be AI-driven. I don’t mean that AI will replace humans as politicians. Absent a major cultural shift—and some serious changes in the law—that won’t happen. But as AI starts to look and feel more human, our human politicians will start to look and feel more like AI. I think we will be OK with it, because it’s a path we’ve been walking down for a long time. Any major politician today is just the public face of a complex socio-technical system. When the president makes a speech, we all know that they didn’t write it. When a legislator sends out a campaign email, we know that they didn’t write that either—even if they signed it. And when we get a holiday card from any of these people, we know that it was signed by an autopen. Those things are so much a part of politics today that we don’t even think about it. In the future, we’ll accept that almost all communications from our leaders will be written by AI. We’ll accept that they use AI tools for making political and policy decisions. And for planning their campaigns. And for everything else they do. None of this is necessarily bad. But it does change the nature of politics and politicians—just like television and the internet did.

II: AI-assisted legislators

AIs are already good at summarization. This can be applied to listening to constituents: summarizing letters and comments, and making sense of constituent inputs. Public meetings might be summarized, too. Here the scale of the problem is already overwhelming, and AI can make a big difference. Beyond summarizing, AI can highlight interesting arguments or detect bulk letter-writing campaigns. AIs can also aid in political negotiating.

AIs can also write laws. In November 2023, Porto Alegre, Brazil became the first city to enact a law that was entirely written by AI. It had to do with water meters. One of the councilmen prompted ChatGPT, and it produced a complete bill. He submitted it to the legislature without telling anyone who wrote it. And the humans passed it without any changes.

A law is just a piece of generated text that a government agrees to adopt. And as with every other profession, policymakers will turn to AI to help them draft and revise text. Also, AI can take human-written laws and figure out what they actually mean. Lots of laws are recursive, referencing paragraphs and words of other laws. AIs are already good at making sense of all that.

This means that AI will be good at finding legal loopholes—or at creating legal loopholes. I wrote about this in my latest book, A Hacker’s Mind. Finding loopholes is similar to finding vulnerabilities in software. There’s also a concept called “micro-legislation.” That’s the smallest unit of law that makes a difference to someone. It could be a word or a punctuation mark. AIs will be good at inserting micro-legislation into larger bills. More positively, AI can help figure out unintended consequences of a policy change—by simulating how the change interacts with all the other laws and with human behavior.

AI can also write more complex law than humans can. Right now, laws tend to be general. With details to be worked out by a government agency. AI can allow legislators to propose, and then vote on, all of those details. That will change the balance of power between the legislative and the executive branches of government. This is less of an issue when the same party controls the executive and the legislative branches. It is a big deal when those branches of government are in the hands of different parties. The worry is that AI will give the most powerful groups more tools for propagating their interests.

AI can write laws that are impossible for humans to understand. There are two kinds of laws: specific laws, like speed limits, and laws that require judgment, like those that address reckless driving. Imagine that we train an AI on lots of street camera footage to recognize reckless driving and that it gets better than humans at identifying the sort of behavior that tends to result in accidents. And because it has real-time access to cameras everywhere, it can spot it … everywhere. The AI won’t be able to explain its criteria: It would be a black-box neural net. But we could pass a law defining reckless driving by what that AI says. It would be a law that no human could ever understand. This could happen in all sorts of areas where judgment is part of defining what is illegal. We could delegate many things to the AI because of speed and scale. Market manipulation. Medical malpractice. False advertising. I don’t know if humans will accept this.

III: AI-assisted bureaucracy

Generative AI is already good at a whole lot of administrative paperwork tasks. It will only get better. I want to focus on a few places where it will make a big difference. It could aid in benefits administration—figuring out who is eligible for what. Humans do this today, but there is often a backlog because there aren’t enough humans. It could audit contracts. It could operate at scale, auditing all human-negotiated government contracts. It could aid in contracts negotiation. The government buys a lot of things and has all sorts of complicated rules. AI could help government contractors navigate those rules.

More generally, it could aid in negotiations of all kinds. Think of it as a strategic adviser. This is no different than a human but could result in more complex negotiations. Human negotiations generally center around only a few issues. Mostly because that’s what humans can keep in mind. AI versus AI negotiations could potentially involve thousands of variables simultaneously. Imagine we are using an AI to aid in some international trade negotiation and it suggests a complex strategy that is beyond human understanding. Will we blindly follow the AI? Will we be more willing to do so once we have some history with its accuracy?

And one last bureaucratic possibility: Could AI come up with better institutional designs than we have today? And would we implement them?

IV: AI-assisted legal system

When referring to an AI-assisted legal system, I mean this very broadly—both lawyering and judging and all the things surrounding those activities.

AIs can be lawyers. Early attempts at having AIs write legal briefs didn’t go well. But this is already changing as the systems get more accurate. Chatbots are now able to properly cite their sources and minimize errors. Future AIs will be much better at writing legalese, drastically reducing the cost of legal counsel. And there’s every indication that it will be able to do much of the routine work that lawyers do. So let’s talk about what this means.

Most obviously, it reduces the cost of legal advice and representation, giving it to people who currently can’t afford it. An AI public defender is going to be a lot better than an overworked not very good human public defender. But if we assume that human-plus-AI beats AI-only, then the rich get the combination, and the poor are stuck with just the AI.

It also will result in more sophisticated legal arguments. AI’s ability to search all of the law for precedents to bolster a case will be transformative.

AI will also change the meaning of a lawsuit. Right now, suing someone acts as a strong social signal because of the cost. If the cost drops to free, that signal will be lost. And orders of magnitude more lawsuits will be filed, which will overwhelm the court system.

Another effect could be gutting the profession. Lawyering is based on apprenticeship. But if most of the apprentice slots are filled by AIs, where do newly minted attorneys go to get training? And then where do the top human lawyers come from? This might not happen. AI-assisted lawyers might result in more human lawyering. We don’t know yet.

AI can help enforce the law. In a sense, this is nothing new. Automated systems already act as law enforcement—think speed trap cameras and Breathalyzers. But AI can take this kind of thing much further, like automatically identifying people who cheat on tax returns, identifying fraud on government service applications and watching all of the traffic cameras and issuing citations.

Again, the AI is performing a task for which we don’t have enough humans. And doing it faster, and at scale. This has the obvious problem of false positives. Which could be hard to contest if the courts believe that the computer is always right. This is a thing today: If a Breathalyzer says you’re drunk, it can be hard to contest the software in court. And also the problem of bias, of course: AI law enforcers may be more and less equitable than their human predecessors.

But most importantly, AI changes our relationship with the law. Everyone commits driving violations all the time. If we had a system of automatic enforcement, the way we all drive would change—significantly. Not everyone wants this future. Lots of people don’t want to fund the IRS, even though catching tax cheats is incredibly profitable for the government. And there are legitimate concerns as to whether this would be applied equitably.

AI can help enforce regulations. We have no shortage of rules and regulations. What we have is a shortage of time, resources and willpower to enforce them, which means that lots of companies know that they can ignore regulations with impunity. AI can change this by decoupling the ability to enforce rules from the resources necessary to do it. This makes enforcement more scalable and efficient. Imagine putting cameras in every slaughterhouse in the country looking for animal welfare violations or fielding an AI in every warehouse camera looking for labor violations. That could create an enormous shift in the balance of power between government and corporations—which means that it will be strongly resisted by corporate power.

AIs can provide expert opinions in court. Imagine an AI trained on millions of traffic accidents, including video footage, telemetry from cars and previous court cases. The AI could provide the court with a reconstruction of the accident along with an assignment of fault. AI could do this in a lot of cases where there aren’t enough human experts to analyze the data—and would do it better, because it would have more experience.

AIs can also perform judging tasks, weighing evidence and making decisions, probably not in actual courtrooms, at least not anytime soon, but in other contexts. There are many areas of government where we don’t have enough adjudicators. Automated adjudication has the potential to offer everyone immediate justice. Maybe the AI does the first level of adjudication and humans handle appeals. Probably the first place we’ll see this is in contracts. Instead of the parties agreeing to binding arbitration to resolve disputes, they’ll agree to binding arbitration by AI. This would significantly decrease the cost of arbitration. Which would probably significantly increase the number of disputes.

So, let’s imagine a world where dispute resolution is both cheap and fast. If you and I are business partners, and we have a disagreement, we can get a ruling in minutes. And we can do it as many times as we want—multiple times a day, even. Will we lose the ability to disagree and then resolve our disagreements on our own? Or will this make it easier for us to be in a partnership and trust each other?

V: AI-assisted citizens

AI can help people understand political issues by explaining them. We can imagine both partisan and nonpartisan chatbots. AI can also provide political analysis and commentary. And it can do this at every scale. Including for local elections that simply aren’t important enough to attract human journalists. There is a lot of research going on right now on AI as moderator, facilitator, and consensus builder. Human moderators are still better, but we don’t have enough human moderators. And AI will improve over time. AI can moderate at scale, giving the capability to every decision-making group—or chatroom—or local government meeting.

AI can act as a government watchdog. Right now, much local government effectively happens in secret because there are no local journalists covering public meetings. AI can change that, providing summaries and flagging changes in position.

AIs can help people navigate bureaucracies by filling out forms, applying for services and contesting bureaucratic actions. This would help people get the services they deserve, especially disadvantaged people who have difficulty navigating these systems. Again, this is a task that we don’t have enough qualified humans to perform. It sounds good, but not everyone wants this. Administrative burdens can be deliberate.

Finally, AI can eliminate the need for politicians. This one is further out there, but bear with me. Already there is research showing AI can extrapolate our political preferences. An AI personal assistant trained on and continuously attuned to your political preferences could advise you, including what to support and who to vote for. It could possibly even vote on your behalf or, more interestingly, act as your personal representative.

This is where it gets interesting. Our system of representative democracy empowers elected officials to stand in for our collective preferences. But that has obvious problems. Representatives are necessary because people don’t pay attention to politics. And even if they did, there isn’t enough room in the debate hall for everyone to fit. So we need to pick one of us to pass laws in our name. But that selection process is incredibly inefficient. We have complex policy wants and beliefs and can make complex trade-offs. The space of possible policy outcomes is equally complex. But we can’t directly debate the policies. We can only choose one of two—or maybe a few more—candidates to do that for us. This has been called democracy’s “lossy bottleneck.” AI can change this. We can imagine a personal AI directly participating in policy debates on our behalf along with millions of other personal AIs and coming to a consensus on policy.

More near term, AIs can result in more ballot initiatives. Instead of five or six, there might be five or six hundred, as long as the AI can reliably advise people on how to vote. It’s hard to know whether this is a good thing. I don’t think we want people to become politically passive because the AI is taking care of it. But it could result in more legislation that the majority actually wants.

Where will AI take us?

That’s my list. Again, watch where changes of degree result in changes in kind. The sophistication of AI lawmaking will mean more detailed laws, which will change the balance of power between the executive and the legislative branches. The scale of AI lawyering means that litigation becomes affordable to everyone, which will mean an explosion in the amount of litigation. The speed of AI adjudication means that contract disputes will get resolved much faster, which will change the nature of settlements. The scope of AI enforcement means that some laws will become impossible to evade, which will change how the rich and powerful think about them.

I think this is all coming. The time frame is hazy, but the technology is moving in these directions.

All of these applications need security of one form or another. Can we provide confidentiality, integrity and availability where it is needed? AIs are just computers. As such, they have all the security problems regular computers have—plus the new security risks stemming from AI and the way it is trained, deployed and used. Like everything else in security, it depends on the details.

First, the incentives matter. In some cases, the user of the AI wants it to be both secure and accurate. In some cases, the user of the AI wants to subvert the system. Think about prompt injection attacks. In most cases, the owners of the AIs aren’t the users of the AI. As happened with search engines and social media, surveillance and advertising are likely to become the AI’s business model. And in some cases, what the user of the AI wants is at odds with what society wants.

Second, the risks matter. The cost of getting things wrong depends a lot on the application. If a candidate’s chatbot suggests a ridiculous policy, that’s easily corrected. If an AI is helping someone fill out their immigration paperwork, a mistake can get them deported. We need to understand the rate of AI mistakes versus the rate of human mistakes—and also realize that AI mistakes are viewed differently than human mistakes. There are also different types of mistakes: false positives versus false negatives. But also, AI systems can make different kinds of mistakes than humans do—and that’s important. In every case, the systems need to be able to correct mistakes, especially in the context of democracy.

Third, many of the applications are in adversarial environments. If two countries are using AI to assist in trade negotiations, they are both going to try to hack each other’s AIs. This will include attacks against the AI models but also conventional attacks against the computers and networks that are running the AIs. They’re going to want to subvert, eavesdrop on or disrupt the other’s AI.

Some AI applications will need to run in secure environments. Large language models work best when they have access to everything, in order to train. That goes against traditional classification rules about compartmentalization.

Fourth, power matters. AI is a technology that fundamentally magnifies the power of the humans who use it, but not equally across users or applications. Can we build systems that reduce power imbalances rather than increase them? Think of the privacy versus surveillance debate in the context of AI.

And similarly, equity matters. Human agency matters.

And finally, trust matters. Whether or not to trust an AI is less about the AI and more about the application. Some of these AI applications are individual. Some of these applications are societal. Whether something like “fairness” matters depends on this. And there are many competing definitions of fairness that depend on the details of the system and the application. It’s the same with transparency. The need for it depends on the application and the incentives. Democratic applications are likely to require more transparency than corporate ones and probably AI models that are not owned and run by global tech monopolies.

All of these security issues are bigger than AI or democracy. Like all of our security experience, applying it to these new systems will require some new thinking.

AI will be one of humanity’s most important inventions. That’s probably true. What we don’t know is if this is the moment we are inventing it. Or if today’s systems are yet more over-hyped technologies. But these are security conversations we are going to need to have eventually.

AI is fundamentally a power-enhancing technology. We need to ensure that it distributes power and doesn’t further concentrate it.

AI is coming for democracy. Whether the changes are a net positive or negative depends on us. Let’s help tilt things to the positive.

This essay is adapted from a keynote speech delivered at the RSA Conference in San Francisco on May 7, 2024. It originally appeared in Cyberscoop.

 


Why Google’s AI Overviews gets things wrong

31 May 2024 at 06:15


When Google announced it was rolling out its artificial-intelligence-powered search feature earlier this month, the company promised that “Google will do the googling for you.” The new feature, called AI Overviews, provides brief, AI-generated summaries highlighting key information and links on top of search results.

Unfortunately, AI systems are inherently unreliable. Within days of AI Overviews’ release in the US, users were sharing examples of responses that were strange at best. It suggested that users add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875. 

On Thursday, Liz Reid, head of Google Search, announced that the company has been making technical improvements to the system to make it less likely to generate incorrect answers, including better detection mechanisms for nonsensical queries. It is also limiting the inclusion of satirical, humorous, and user-generated content in responses, since such material could result in misleading advice.

But why is AI Overviews returning unreliable, potentially dangerous information? And what, if anything, can be done to fix it?

How does AI Overviews work?

In order to understand why AI-powered search engines get things wrong, we need to look at how they’ve been optimized to work. We know that AI Overviews uses a new generative AI model in Gemini, Google’s family of large language models (LLMs), that’s been customized for Google Search. That model has been integrated with Google’s core web ranking systems and designed to pull out relevant results from its index of websites.

Most LLMs simply predict the next word (or token) in a sequence, which makes them appear fluent but also leaves them prone to making things up. They have no ground truth to rely on, but instead choose each word purely on the basis of a statistical calculation. That leads to hallucinations. It’s likely that the Gemini model in AI Overviews gets around this by using an AI technique called retrieval-augmented generation (RAG), which allows an LLM to check specific sources outside of the data it’s been trained on, such as certain web pages, says Chirag Shah, a professor at the University of Washington who specializes in online search.

Once a user enters a query, it’s checked against the documents that make up the system’s information sources, and a response is generated. Because the system is able to match the original query to specific parts of web pages, it’s able to cite where it drew its answer from—something normal LLMs cannot do.

One major upside of RAG is that the responses it generates to a user’s queries should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data. The technique is often used to try to prevent LLMs from hallucinating. (A Google spokesperson would not confirm whether AI Overviews uses RAG.)
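To make the retrieve-then-generate flow described above concrete, here is a minimal sketch. It is not Google's implementation: the two-document corpus, the word-overlap scoring, and the placeholder generation step are all illustrative stand-ins for a web-scale index, a trained retriever, and a real LLM call.

# Minimal retrieval-augmented generation (RAG) sketch: retrieve documents
# relevant to the query, then generate an answer conditioned on them.

CORPUS = {
    "doc1": "To keep cheese from sliding off pizza, let the pizza cool slightly.",
    "doc2": "Non-toxic glue can add tackiness to pizza sauce.",  # joke post: relevant-looking, wrong
}

def score(query: str, doc: str) -> int:
    """Crude relevance score: count shared words. Real retrievers use embeddings."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k documents that score highest against the query."""
    ranked = sorted(CORPUS.items(), key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

def generate(query: str, sources: list[tuple[str, str]]) -> str:
    """Placeholder for the LLM step: condition the answer on retrieved text
    and cite it. Note that nothing here checks whether a retrieved source is
    actually correct, which is the failure mode described in this article."""
    cited = "; ".join(f"{text} [{doc_id}]" for doc_id, text in sources)
    return f"Answer based on retrieved sources: {cited}"

query = "why does cheese not stick to my pizza"
print(generate(query, retrieve(query)))

The citation step is what distinguishes this from a plain LLM: because the answer is tied to specific retrieved passages, the system can say where it drew its answer from, but it inherits whatever the retriever surfaces, joke posts included.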

So why does it return bad answers?

But RAG is far from foolproof. In order for an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. A bad answer results when one or both parts of the process fail.

In the case of AI Overviews’ recommendation of a pizza recipe that contains glue—drawing from a joke post on Reddit—it’s likely that the post appeared relevant to the user’s original query about cheese not sticking to pizza, but something went wrong in the retrieval process, says Shah. “Just because it’s relevant doesn’t mean it’s right, and the generation part of the process doesn’t question that,” he says.

Similarly, if a RAG system comes across conflicting information, like a policy handbook and an updated version of the same handbook, it’s unable to work out which version to draw its response from. Instead, it may combine information from both to create a potentially misleading answer. 

“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” says Suzan Verberne, a professor at Leiden University who specializes in natural-language processing.

The more specific a topic is, the higher the chance of misinformation in a large language model’s output, she says, adding: “This is a problem in the medical domain, but also education and science.”

According to the Google spokesperson, in many cases when AI Overviews returns incorrect answers it’s because there’s not a lot of high-quality information available on the web to show for the query—or because the query most closely matches satirical sites or joke posts.

The spokesperson says the vast majority of AI Overviews provide high-quality information and that many of the examples of bad answers were in response to uncommon queries, adding that AI Overviews containing potentially harmful, obscene, or otherwise unacceptable content came up in response to less than one in every 7 million unique queries. Google is continuing to remove AI Overviews on certain queries in accordance with its content policies. 

It’s not just about bad training data

Although the pizza glue blunder is a good example of a case where AI Overviews pointed to an unreliable source, the system can also generate misinformation from factually correct sources. Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico, googled “How many Muslim presidents has the US had?” AI Overviews responded: “The United States has had one Muslim president, Barack Hussein Obama.”

While Barack Obama is not Muslim, making AI Overviews’ response wrong, it drew its information from a chapter in an academic book titled Barack Hussein Obama: America’s First Muslim President? So not only did the AI system miss the entire point of the essay, it interpreted it in the exact opposite of the intended way, says Mitchell. “There’s a few problems here for the AI; one is finding a good source that’s not a joke, but another is interpreting what the source is saying correctly,” she adds. “This is something that AI systems have trouble doing, and it’s important to note that even when it does get a good source, it can still make errors.”

Can the problem be fixed?

Ultimately, we know that AI systems are unreliable, and so long as they are using probability to generate text word by word, hallucination is always going to be a risk. And while AI Overviews is likely to improve as Google tweaks it behind the scenes, we can never be certain it’ll be 100% accurate.

Google has said that it’s adding triggering restrictions for queries where AI Overviews were not proving to be especially helpful and has added additional “triggering refinements” for queries related to health. The company could add a step to the information retrieval process designed to flag a risky query and have the system refuse to generate an answer in these instances, says Verberne. Google doesn’t aim to show AI Overviews for explicit or dangerous topics, or for queries that indicate a vulnerable situation, the company spokesperson says.

Techniques like reinforcement learning from human feedback, which incorporates such feedback into an LLM’s training, can also help improve the quality of its answers. 

Similarly, LLMs could be trained specifically for the task of identifying when a question cannot be answered, and it could also be useful to instruct them to carefully assess the quality of a retrieved document before generating an answer, Verberne says: “Proper instruction helps a lot!”
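Verberne's suggestion can be approximated with a guard around the generation step that tells the model to judge the retrieved document before answering. A rough sketch, in which call_llm is a hypothetical stand-in for any chat-model API:

# Sketch of "assess the retrieved document before answering." The prompt
# wording and the call_llm stub are illustrative assumptions, not a known
# production configuration.

GUARDED_PROMPT = """You are answering a search query using the retrieved text below.
1. First, judge whether the text is a reliable, non-satirical source for this query.
2. If it is not reliable, or the question cannot be answered from it, reply exactly:
   "I cannot answer this reliably."
3. Otherwise, answer using only facts stated in the text, and cite it.

Query: {query}
Retrieved text: {document}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-model API call here.
    return "I cannot answer this reliably."

def guarded_answer(query: str, document: str) -> str:
    return call_llm(GUARDED_PROMPT.format(query=query, document=document))

print(guarded_answer("how do I keep cheese on pizza", "Reddit joke about glue"))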

Although Google has added a label to AI Overviews answers reading “Generative AI is experimental,” it should consider making it much clearer that the feature is in beta and emphasizing that it is not ready to provide fully reliable answers, says Shah. “Until it’s no longer beta—which it currently definitely is, and will be for some time—it should be completely optional. It should not be forced on us as part of core search.”

The New ChatGPT Offers a Lesson in AI Hype

31 May 2024 at 05:07
OpenAI released GPT-4o, its latest chatbot technology, in a partly finished state. It has much to prove.

© Arsenii Vaselenko for The New York Times

ChatGPT-4o trying to solve a geometry problem

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

31 May 2024 at 06:10

Altman spent part of his virtual appearance fending off thorny questions about governance, an AI voice controversy and criticism from ousted board members.

The post OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit appeared first on SecurityWeek.

Before yesterday

OpenAI Says Russia and China Used Its A.I. in Covert Campaigns

By: Cade Metz
30 May 2024 at 13:24
Iran and an Israeli company also exploited the tools in online influence efforts, but none gained much traction, an OpenAI report said.

© Jason Henry for The New York Times

The OpenAI offices in San Francisco.

Microsoft’s Windows Recall: Cutting-Edge Search Tech or Creepy Overreach?

30 May 2024 at 12:07

SecurityWeek editor-at-large Ryan Naraine examines the broad tension between tech innovation and privacy rights at a time when ChatGPT-like bots and generative-AI apps are starting to dominate the landscape. 

The post Microsoft’s Windows Recall: Cutting-Edge Search Tech or Creepy Overreach? appeared first on SecurityWeek.

The Long History of Discrimination in Job Hiring Assessments

Applying for jobs can be a difficult and frustrating experience: you’re putting forward your qualifications to be judged by a prospective employer. We all want to be treated fairly. We want our qualifications to speak for themselves. But for job seekers who have been historically excluded or discriminated against because of their race, gender identity, or disability, there can be another question lurking in the background: Am I being judged, not for my ability to do the job, but for my identity?

Automated decision-making tools, including those using artificial intelligence (AI) and algorithms, have been widely adopted in hiring; today seven out of 10 employers use them. We have previously written about AI and some of the newer ways it is impacting hiring, including how it lacks transparency and can harbor serious flaws that lead to bias and discrimination. But these tools are just the latest frontier in a long history of employment tests that can discriminate against and harm job seekers. For example, one of the landmark civil rights cases, Griggs v. Duke Power Co. (1971), was about a company’s use of bogus tests to block the promotion of Black workers.

When tests and tools that have a long history of problems are combined with new technologies like AI, the risks of harm only increase, exacerbating harmful barriers to employment based on race, gender, disability, and other protected characteristics. While the harm of racial discrimination in employment tests has long been recognized and challenged, there has been less awareness of how these tests impact applicants who, in addition to facing racial discrimination, face discrimination based on their disabilities.

The use of personality assessments in hiring processes has become increasingly common. Yet these tests often ask general questions that may have little to do with the ability to do the job, and they capture traits directly linked with characteristics commonly associated with autism and with mental health conditions such as depression and anxiety. This creates a high risk that qualified workers with these disabilities will be disadvantaged compared to other workers and may be unfairly and illegally screened out.

To push back, we filed a complaint to the Federal Trade Commission (FTC) against Aon, a major hiring technology vendor, alleging that Aon deceptively markets widely used online hiring tests as “bias-free” even though the tests discriminate against job seekers based on traits like their race or disability. The ACLU and co-counsel have also filed charges with the Equal Employment Opportunity Commission (EEOC) against both Aon and an employer that uses Aon’s assessments, on behalf of a biracial (Black/white) autistic job applicant who was required to take Aon assessments as part of the employer’s hiring process.

Two Aon products, a “personality” assessment test and its automated video interviewing tool, which integrate algorithmic or AI-related features, are marketed to employers across industries as cost-effective, efficient, and less discriminatory than traditional methods of assessing workers and applicants. However, these products assess very general personality traits, such as positivity, emotional awareness, liveliness, ambition, and drive, that are not clearly job-related or necessary for a specific job, and they can unfairly screen out people based on disabilities. The automated features of these tools exacerbate these fundamental problems, particularly as Aon has incorporated artificial-intelligence elements into its video interviewing tool that are also likely to discriminate based on disability, race, and other protected characteristics.

Cognitive ability assessments, another staple in hiring, must also be subject to scrutiny, as they have long been shown to disadvantage Black job candidates and other candidates of color, and they may also unfairly exclude individuals based on disability. These tests, touted to measure aspects of memory, along with several others Aon markets, show racial disparities in performance.

For autistic and other neurodivergent job applicants and applicants of color, cognitive ability assessments pose a significant barrier to employment. Not only do they fail to accommodate diverse needs, they also perpetuate discrimination based on race, disability, and other traits. Employers should not use assessments that carry a high risk of discrimination: they risk screening out people who could be successful employees, impacting diversity in the workplace, and facing legal liability, even where the assessments are designed and administered by third-party vendors. Employers have a legal obligation to thoroughly vet any assessments they use for compliance with anti-discrimination laws, and if they decide to use an assessment, they must provide meaningful notice so that disabled workers can make an informed choice about whether to seek accommodations or alternative processes.

But vendors must also be accountable for the tools they market. Employers can hold vendors accountable by demanding that vendors truly design their products to be inclusive, including by incorporating the perspectives and experiences of people with disabilities and other protected groups into their design process, and that they conduct thorough auditing for discrimination based on race, disability, and other protected characteristics. Employers can also demand transparency and decline to purchase products if vendors fail to do so. And vendors can and should be held legally accountable for their discriminatory products and for deceptively marketing them. As the EEOC recently argued in a federal case about discrimination in an online hiring product, vendors can be held accountable under employment discrimination laws, and our FTC complaint should serve as notice to vendors that we will seek to hold them accountable under consumer protection laws as well.

As the hiring landscape continues to change and job applicants face new hiring tools, we must strive for a future where skills and potential, not bias, determine our opportunities. The ACLU stands ready to defend the rights of individuals wronged by discriminatory practices. Together, we can dismantle discriminatory barriers and build a more inclusive workforce for all.

AI-directed drones could help find lost hikers faster

30 May 2024 at 11:26

If a hiker gets lost in the rugged Scottish Highlands, rescue teams sometimes send up a drone to search for clues of the individual’s route—trampled vegetation, dropped clothing, food wrappers. But with vast terrain to cover and limited battery life, picking the right area to search is critical.

Traditionally, expert drone pilots use a combination of intuition and statistical “search theory”—a strategy with roots in World War II–era hunting of German submarines—to prioritize certain search locations over others. Jan-Hendrik Ewers and a team from the University of Glasgow recently set out to see if a machine-learning system could do better.

Ewers grew up skiing and hiking in the Highlands, giving him a clear idea of the complicated challenges involved in rescue operations there. “There wasn’t much to do growing up, other than spending time outdoors or sitting in front of my computer,” he says. “I ended up doing a lot of both.”

To start, Ewers took data sets of search-and-rescue cases from around the world, which include details such as an individual’s age, whether they were hunting, horseback riding, or hiking, and if they suffered from dementia, along with information about the location where the person was eventually found—by water, buildings, open ground, trees, or roads. He trained an AI model with this data, in addition to geographical data from Scotland. The model runs millions of simulations to reveal the routes a missing person would be most likely to take under the specific circumstances. The result is a probability distribution—a heat map of sorts—indicating the priority search areas. 
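The simulation-to-heat-map idea can be sketched compactly: sample many simulated movements from the last known position and histogram where they end up. Everything below, including the grid size, the unbiased random-walk model, and the step count, is an illustrative assumption rather than the Glasgow team's actual model, which conditions on subject profile and real terrain data.

import random

# Toy probability map for a lost-person search: simulate many walks from the
# last known position and count where they end up.

SIZE = 50              # search area as a 50x50 grid of cells
SIMULATIONS = 100_000  # simulated journeys
STEPS = 40             # how far a subject might travel
START = (25, 25)       # last known position

def simulate_end(start: tuple[int, int]) -> tuple[int, int]:
    x, y = start
    for _ in range(STEPS):
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        x = min(max(x + dx, 0), SIZE - 1)  # clamp to the grid
        y = min(max(y + dy, 0), SIZE - 1)
    return x, y

heat = [[0] * SIZE for _ in range(SIZE)]
for _ in range(SIMULATIONS):
    x, y = simulate_end(START)
    heat[x][y] += 1

# Normalize counts into a probability distribution over cells.
prob = [[count / SIMULATIONS for count in row] for row in heat]
best = max(((x, y) for x in range(SIZE) for y in range(SIZE)),
           key=lambda cell: prob[cell[0]][cell[1]])
print("highest-priority search cell:", best)

In the real system the simulated movement would be biased by terrain, paths, and the subject-profile features mentioned above, which is what the training data supplies.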

With this kind of probability map, the team showed that deep learning could be used to design more efficient search paths for drones. In research published last week on arXiv, which has not yet been peer reviewed, the team tested its algorithm against two common search patterns: the “lawn mower,” in which a drone would fly over a target area in a series of simple stripes, and an algorithm similar to Ewers’s but less adept at working with probability distribution maps.

In virtual testing, Ewers’s algorithm beat both of those approaches on two key measures: the distance a drone would have to fly to locate the missing person, and the likelihood that the person was found. While the lawn mower and the existing algorithmic approach found the person 8% of the time and 12% of the time, respectively, Ewers’s approach found them 19% of the time. If it proves successful in real rescue situations, the new system could speed up response times, and save more lives, in scenarios where every minute counts. 
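To see why a probability-aware route can beat the lawn mower on those two measures, compare how quickly each accumulates probability mass per cell visited. The sketch below uses a greedy, distance-discounted heuristic and a synthetic heat map; both are simplifications, since the paper's planner is a learned model and its maps come from the simulations described above.

# "Lawn mower" coverage vs. a greedy probability-weighted path, compared by
# the probability mass covered within a fixed flight budget.

SIZE = 20

# Synthetic heat map peaked near cell (5, 12), standing in for a simulated
# probability distribution of the missing person's location.
prob = [[1.0 / (1 + abs(x - 5) + abs(y - 12)) for y in range(SIZE)] for x in range(SIZE)]
total = sum(map(sum, prob))
prob = [[v / total for v in row] for row in prob]

def lawnmower_path():
    """Sweep every cell in back-and-forth stripes, ignoring the heat map."""
    for x in range(SIZE):
        cols = range(SIZE) if x % 2 == 0 else range(SIZE - 1, -1, -1)
        for y in cols:
            yield (x, y)

def greedy_path(budget: int):
    """Head for high-probability cells, discounted by travel distance."""
    pos, visited = (0, 0), set()
    for _ in range(budget):
        candidates = [(x, y) for x in range(SIZE) for y in range(SIZE)
                      if (x, y) not in visited]
        pos = max(candidates, key=lambda c: prob[c[0]][c[1]]
                  / (1 + abs(c[0] - pos[0]) + abs(c[1] - pos[1])))
        visited.add(pos)
        yield pos

def mass_covered(path, budget: int) -> float:
    """Probability the person is in one of the first `budget` cells visited."""
    return sum(prob[x][y] for x, y in list(path)[:budget])

BUDGET = 60  # cells reachable on one battery charge (illustrative)
print("lawn mower:", round(mass_covered(lawnmower_path(), BUDGET), 3))
print("greedy    :", round(mass_covered(greedy_path(BUDGET), BUDGET), 3))

On this toy map the greedy route covers several times the probability mass of the stripes for the same budget, qualitatively mirroring the gap between the 19% and 8% find rates reported above.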

“The search-and-rescue domain in Scotland is extremely varied, and also quite dangerous,” Ewers says. Emergencies can arise in thick forests on the Isle of Arran, the steep mountains and slopes around the Cairngorm Plateau, or the faces of Ben Nevis, one of the most revered but dangerous rock climbing destinations in Scotland. “Being able to send up a drone and efficiently search with it could potentially save lives,” he adds.

Search-and-rescue experts say that using deep learning to design more efficient drone routes could help locate missing persons faster in a variety of wilderness areas, depending on how well suited the environment is for drone exploration (it’s harder for drones to explore dense canopy than open brush, for example).

“That approach in the Scottish Highlands certainly sounds like a viable one, particularly in the early stages of search when you’re waiting for other people to show up,” says David Kovar, a director at the US National Association for Search and Rescue in Williamsburg, Virginia, who has used drones for everything from disaster response in California to wilderness search missions in New Hampshire’s White Mountains. 

But there are caveats. The success of such a planning algorithm will hinge on how accurate the probability maps are. Overreliance on these maps could mean that drone operators spend too much time searching the wrong areas. 

Ewers says a key next step to making the probability maps as accurate as possible will be obtaining more training data. To do that, he hopes to use GPS data from more recent rescue operations to run simulations, essentially helping his model to understand the connections between the location where someone was last seen and where they were ultimately found. 

Not all rescue operations contain rich enough data for him to work with, however. “We have this problem in search and rescue where the training data is extremely sparse, and we know from machine learning that we want a lot of high-quality data,” Ewers says. “If an algorithm doesn’t perform better than a human, you are potentially risking someone’s life.”

Drones are becoming more common in the world of search and rescue. But they are still a relatively new technology, and regulations surrounding their use are still in flux.

In the US, for example, drone pilots are required to have a constant line of sight between them and their drone. In Scotland, meanwhile, operators aren’t permitted to be more than 500 meters away from their drone. These rules are meant to prevent accidents, such as a drone falling and endangering people, but in rescue settings such rules severely curtail ground rescuers’ ability to survey for clues. 

“Oftentimes we’re facing a regulatory problem rather than a technical problem,” Kovar says. “Drones are capable of doing far more than we’re allowed to use them for.”

Ewers hopes that models like his might one day expand the capabilities of drones even more. For now, he is in conversation with the Police Scotland Air Support Unit to see what it would take to test and deploy his system in real-world settings. 

NIST Struggles with NVD Backlog as 93% of Flaws Remain Unanalyzed

29 May 2024 at 17:32

The funding cutbacks announced in February have continued to hobble NIST’s ability to keep the government’s National Vulnerability Database (NVD) up to date, with one cybersecurity company finding that more than 93% of the flaws added have not been analyzed or enhanced, a problem that will make organizations less safe. “With the recent slowdown of..

The post NIST Struggles with NVD Backlog as 93% of Flaws Remain Unanalyzed appeared first on Security Boulevard.

A NIST AI RMF Summary – Source: securityboulevard.com


Source: securityboulevard.com – Author: Cameron Delfin. Artificial intelligence (AI) is revolutionizing numerous sectors, but its integration into cybersecurity is particularly transformative. AI enhances threat detection, automates responses, and predicts potential security breaches, offering a proactive approach to cybersecurity. However, it also introduces new challenges, such as AI-driven attacks and the complexities of securing AI systems. […]

The post A NIST AI RMF Summary – Source: securityboulevard.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Contextual Intelligence is the Key – Source: securityboulevard.com


Source: securityboulevard.com – Author: NSFOCUS. With the increasing complexity and frequency of cybersecurity threats, organizations face mounting network risks, and the importance of threat intelligence has become increasingly prominent. During this year’s RSA Conference, Sierra Stanczyk, the Senior Manager of Global Threat Intelligence at PwC, and Allison Wikoff, the Director of Global Threat Intelligence for the […]

The post Contextual Intelligence is the Key – Source: securityboulevard.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Once a Sheriff’s Deputy in Florida, Now a Source of Disinformation From Russia

29 May 2024 at 10:00
In 2016, Russia used an army of trolls to interfere in the U.S. presidential election. This year, an American given asylum in Moscow may be accomplishing much the same thing all by himself.

© Alexander Zemlianichenko/Associated Press

John Mark Dougan, who has been granted asylum in Moscow, above, has become a key player in the Kremlin’s information operations against the West.

AI-readiness for C-suite leaders

Generative AI, like predictive AI before it, has rightly seized the attention of business executives. The technology has the potential to add trillions of dollars to annual global economic activity, and its adoption for business applications is expected to improve the top or bottom lines—or both—at many organizations.

While generative AI offers an impressive and powerful new set of capabilities, its business value is not a given. Some powerful foundational models are open to public use, but they do not serve as a differentiator for those looking to get ahead of the competition and unlock AI’s full potential. To gain those advantages, organizations must look to enhance AI models with their own data to create unique business insights and opportunities.

Preparing an organization’s data for AI, however, unlocks a new set of challenges and opportunities. This MIT Technology Review Insights survey report investigates whether companies’ data foundations are ready to garner benefits from generative AI, as well as the challenges of building the necessary data infrastructure for this technology. In doing so, it draws on insights from a survey of 300 C-suite executives and senior technology leaders, as well as on in-depth interviews with four leading experts.

Its key findings include the following:

Data integration is the leading priority for AI readiness. In our survey, 82% of C-suite and other senior executives agree that “scaling AI or generative AI use cases to create business value is a top priority for our organization.” The number-one challenge in achieving that AI readiness, survey respondents say, is data integration and pipelines (45%). Asked about challenging aspects of data integration, respondents named four: managing data volume, moving data from on-premises to the cloud, enabling real-time access, and managing changes to data.

Executives are laser-focused on data management challenges—and lasting solutions. Among survey respondents, 83% say that their “organization has identified numerous sources of data that we must bring together in order to enable our AI initiatives.” Though data-dependent technologies of recent decades drove data integration and aggregation programs, these were typically tailored to specific use cases. Now, however, companies are looking for something more scalable and use-case agnostic: 82% of respondents are prioritizing solutions “that will continue to work in the future, regardless of other changes to our data strategy and partners.”

Data governance and security is a top concern for regulated sectors. Data governance and security concerns are the second most common data readiness challenge (cited by 44% of respondents). Respondents from highly regulated sectors were two to three times more likely to cite data governance and security as a concern, and chief data officers (CDOs) say this is a challenge at twice the rate of their C-suite peers. And our experts agree: Data governance and security should be addressed from the beginning of any AI strategy to ensure data is used and accessed properly.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Download the full report.

What to Know About the Open Versus Closed Software Debate

29 May 2024 at 05:02
A.I. companies are divided over whether the technology should be freely available to anyone for modifying and copying, or kept close for safekeeping.

© Loren Elliott for The New York Times

Meta’s open-source A.I. system is available to any developer to download and use.

Mark Zuckerberg is Popular Again Thanks to Meta’s Open-Source AI

29 May 2024 at 05:00
After some trying years during which Mr. Zuckerberg could do little right, many developers and technologists have embraced the Meta chief as their champion of “open-source” artificial intelligence.

© Amanda Cotan

‘Microsoft’ Scammers Steal the Most, the FTC Says

28 May 2024 at 12:54

Scammers impersonating Microsoft, Publishers Clearing House, Amazon and Apple are at the top of the FTC’s “who’s who” list. Based on consumer reports and complaints to the agency, hundreds of millions of dollars were stolen by bad actors pretending to be brands.

The post ‘Microsoft’ Scammers Steal the Most, the FTC Says appeared first on Security Boulevard.

Anatsa Banking Trojan Found in PDF and QR Code Reader Apps on Google Play Store

Anatsa Banking Trojan, Banking Trojan, Malware

Researchers have observed a significant increase in attempts to spread the Anatsa banking trojan disguised as legitimate-looking PDF and QR code reader apps on the Google Play store. Also known as TeaBot, the malware employs dropper applications that appear harmless to users, deceiving them into unwittingly installing the malicious payload, said researchers at cybersecurity firm Zscaler. Once installed, Anatsa extracts sensitive banking credentials and financial information from various global financial applications. It achieves this through overlay and accessibility techniques that allow it to discreetly intercept and collect data.

Distribution and Impact of Anatsa Banking Trojan

Threat actors distributed two malicious payloads linked to Anatsa through the Google Play store. The campaign impersonated PDF reader and QR code reader applications to attract numerous installations; the install count, which had surpassed 70,000 at the time of analysis, further convinced victims of the applications' legitimacy. Anatsa uses remote payloads retrieved from command-and-control (C&C) servers to perform additional malicious activities. The dropper application contains encoded links to remote servers, from which the subsequent stage payload is downloaded. Along with the payload, the malware fetches a configuration file from the remote server to execute the next stage of the attack.

Anatsa Infection Steps

The Anatsa banking trojan works by employing a dropper application and executing a payload to launch its malicious activities.

Dropper Application:
  • The fake QR code application downloads and loads the DEX file.
  • The application uses reflection to invoke code from the loaded DEX file.
  • Configuration for loading the DEX file is downloaded from the C&C server.
Payload Execution:
  • After downloading the next stage payload, Anatsa performs checks on the device environment to detect analysis environments and malware sandboxes.
  • Upon successful verification, it downloads the third and final stage payload from the remote server.
Malicious Activities:
  • The malware injects uncompressed raw manifest data into the APK, deliberately corrupting the compression parameters in the manifest file to hinder analysis (a detection sketch follows the attack-chain figure below).
  • Upon execution, the malware decodes all encoded strings, including those for C&C communication.
  • It connects with the C&C server to register the infected device and retrieve a list of targeted applications for code injections.
Data Theft:
  • After receiving a list of package names for financial applications, Anatsa scans the device for these applications.
  • If a targeted application is found, Anatsa communicates this to the C&C server.
  • The C&C server then supplies a counterfeit login page for the banking operation.
  • This fake login page, displayed within a JavaScript Interface (JSI) enabled web view, tricks users into entering their banking credentials, which are then transmitted back to the C&C server.
[caption id="attachment_71735" align="aligncenter" width="1038"]Anatsa Banking Trojan Attack Chain (Source: Zscaler)[/caption]

The Anatsa banking trojan is increasing in prevalence and infiltrates the Google Play store disguised as benign applications. Using advanced overlay and accessibility techniques, it stealthily exfiltrates sensitive banking credentials and financial data. By injecting malicious payloads and employing deceptive login pages, Anatsa poses a significant threat to mobile banking security.
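The manifest-corruption trick in the infection steps above is, in principle, detectable at triage time by comparing an APK's declared compression metadata with what the archive actually yields. The Python sketch below is a hypothetical illustration, not a Zscaler tool; it relies only on the standard zipfile module, and real samples may break the archive in other ways this check won't catch.

    # Hypothetical triage check for tampered AndroidManifest.xml metadata.
    import zipfile

    def manifest_looks_tampered(apk_path):
        """Heuristic: does AndroidManifest.xml's zip metadata look inconsistent?"""
        with zipfile.ZipFile(apk_path) as apk:
            info = apk.getinfo("AndroidManifest.xml")
            # A stored (uncompressed) entry must report equal sizes.
            if info.compress_type == zipfile.ZIP_STORED and info.compress_size != info.file_size:
                return True
            try:
                apk.read("AndroidManifest.xml")  # raises on bad deflate streams or CRCs
            except Exception:
                return True
        return False

    print(manifest_looks_tampered("sample.apk"))  # hypothetical input file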

Best Practices to Stop the Anatsa Trojan

To protect against such threats, Cyble's Research and Intelligence Labs suggests following essential cybersecurity best practices:
  • Install Software from Official Sources: Only download software from official app stores like the Google Play Store or the iOS App Store.
  • Use Reputable Security Software: Ensure devices, including PCs, laptops, and mobile devices, use reputable antivirus and internet security software.
  • Strong Passwords and Multi-Factor Authentication: Use strong passwords and enable multi-factor authentication whenever possible.
  • Be Cautious with Links: Be careful when opening links received via SMS or emails.
  • Enable Google Play Protect: Always have Google Play Protect enabled on Android devices.
  • Monitor App Permissions: Be wary of permissions granted to applications (see the adb sketch after this list).
  • Regular Updates: Keep devices, operating systems, and applications up to date.
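As one concrete way to act on the app-permissions advice, an Android user or administrator with adb access can list which accessibility services are currently enabled, since Anatsa-style overlay abuse depends on that permission. A minimal sketch follows; the settings key is a standard Android secure setting, but treat the exact output format as device-dependent.

    # Hypothetical sketch: list enabled accessibility services via adb.
    import subprocess

    def enabled_accessibility_services():
        """Ask a connected Android device which accessibility services are enabled."""
        out = subprocess.run(
            ["adb", "shell", "settings", "get", "secure", "enabled_accessibility_services"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return [] if out in ("", "null") else out.split(":")

    for service in enabled_accessibility_services():
        print("review if unfamiliar:", service)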
By adhering to these practices, users can establish a robust first line of defense against malware and other cyber threats, Cyble researchers said.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

28 May 2024 at 09:57

OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The post OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model appeared first on SecurityWeek.

Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk

By: Tom Eston
28 May 2024 at 09:32

While Red Teams can expose and root out organization specific weaknesses, there is another growing class of vulnerability at an industry level.

The post Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk appeared first on SecurityWeek.

Black Basta Ransomware Attack: Microsoft Quick Assist Flaw – Source: securityboulevard.com


Source: securityboulevard.com – Author: Wajahat Raja. Recent reports claim that the Microsoft Threat Intelligence team stated that a cybercriminal group, identified as Storm-1811, has been exploiting Microsoft’s Quick Assist tool in a series of social engineering attacks. This group is known for deploying the Black Basta ransomware attack. On May 15, 2024, Microsoft released details […]

The post Black Basta Ransomware Attack: Microsoft Quick Assist Flaw – Source: securityboulevard.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

OpenAI Says It Has Begun Training a New Flagship A.I. Model

By: Cade Metz
28 May 2024 at 23:31
The advanced A.I. system would succeed GPT-4, which powers ChatGPT. The company has also created a new safety committee to address A.I.’s risks.

© Jason Redmond/Agence France-Presse — Getty Images

As Sam Altman’s OpenAI trains its new model, its new Safety and Security committee will work to hone policies and processes for safeguarding the technology, the company said.

Black Basta Ransomware Attack: Microsoft Quick Assist Flaw

28 May 2024 at 03:00

Recent reports claim that the Microsoft Threat Intelligence team stated that a cybercriminal group, identified as Storm-1811, has been exploiting Microsoft’s Quick Assist tool in a series of social engineering attacks. This group is known for deploying the Black Basta ransomware attack. On May 15, 2024, Microsoft released details about how this financially motivated group […]

The post Black Basta Ransomware Attack: Microsoft Quick Assist Flaw appeared first on TuxCare.

The post Black Basta Ransomware Attack: Microsoft Quick Assist Flaw appeared first on Security Boulevard.

Alert: Google Chrome Zero-Day Patch Fixes Critical Flaw

27 May 2024 at 12:08

In recent cybersecurity news, Google has swiftly addressed a critical security concern by releasing an emergency update for its Chrome browser. This update targets the third zero-day vulnerability detected in less than a week. Let’s have a look at the details of this Google Chrome zero-day patch and understand its implications for user safety.   […]

The post Alert: Google Chrome Zero-Day Patch Fixes Critical Flaw appeared first on TuxCare.

The post Alert: Google Chrome Zero-Day Patch Fixes Critical Flaw appeared first on Security Boulevard.

Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy

By: Tom Eston
27 May 2024 at 00:00

Episode 331 of the Shared Security Podcast discusses privacy and security concerns related to two major technological developments: the introduction of ‘Recall,’ a new Windows PC feature that is part of Microsoft’s Copilot+ and captures desktop screenshots for AI-powered search tools, and Slack’s policy of using user data to train machine learning features with users opted in by […]

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Shared Security Podcast.

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Security Boulevard.


Elon Musk’s xAI Raises $6 Billion

27 May 2024 at 14:11
Elon Musk, who founded xAI last year, has said the business “still has a lot of catching up to do” as it looks to compete with well-funded companies like OpenAI.

© Nina Westervelt for The New York Times

Elon Musk in New York last month.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com


Source: www.securityweek.com – Author: Associated Press. The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide. Only one of seven bills aimed at preventing AI’s penchant to discriminate when making […]

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com


Source: www.securityweek.com – Author: Ionut Arghire. Cloud security startup Averlon has emerged from stealth mode with $8 million in seed funding, which brings the total raised by the company to $10.5 million. The new investment round was led by Voyager Capital, with additional funding from Outpost Ventures, Salesforce Ventures, and angel investors. Co-founded by […]

The post Averlon Emerges From Stealth Mode With $8 Million in Funding – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent – Source: www.securityweek.com


Source: www.securityweek.com – Author: Associated Press. Long before generative AI’s boom, a Silicon Valley firm contracted to collect and analyze non-classified data on illicit Chinese fentanyl trafficking made a compelling case for its embrace by U.S. intelligence agencies. The operation’s results far exceeded human-only analysis, finding twice as many companies and 400% more people […]

The post US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent – Source: www.securityweek.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Courtroom Recording Software Compromised in Supply Chain Attack

24 May 2024 at 17:43

Threat actors compromised a popular audio-visual software package used in courtrooms, prisons, government, and lecture rooms around the world by injecting a loader malware that gives the hackers remote access to infected systems, collecting data about the host computer and downloading more malicious payloads along the way. The software supply chain attack targeted Justice AV..

The post Courtroom Recording Software Compromised in Supply Chain Attack appeared first on Security Boulevard.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses

24 May 2024 at 12:36

Only one of seven bills aimed at preventing AI’s penchant to discriminate when making consequential decisions — including who gets hired, money for a home or medical care — has passed.

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses appeared first on SecurityWeek.

Google AI Overviews Search Errors Cause Furor Online

24 May 2024 at 23:10
The company’s latest A.I. search feature has erroneously told users to eat glue and rocks, provoking a backlash among users.

© Jeff Chiu/Associated Press

Sundar Pichai, the Alphabet chief executive, spoke about Gemini at a Google I/O event in May.

Black Basta Ascension Attack Redux — can Patients Die of Ransomware?

24 May 2024 at 13:45

Inglorious Basta(rds): 16 days on, huge hospital system continues to be paralyzed by ransomware—and patient safety is at risk.

The post Black Basta Ascension Attack Redux — can Patients Die of Ransomware? appeared first on Security Boulevard.

The Rise and Risks of Shadow AI

24 May 2024 at 13:20

 

Shadow AI, the internal use of AI tools and services without the express knowledge of enterprise oversight teams (IT, legal, cybersecurity, compliance, and privacy teams, just to name a few), is becoming a problem!

Workers are flocking to third-party AI services (e.g., websites like ChatGPT), and often savvy technologists are importing models and building internal AI systems (it really is not that difficult) without telling the enterprise ops teams. Both situations are on the rise, and many organizations are blind to the risks.

According to a recent Cyberhaven report:

  • AI is accelerating: corporate data input into AI tools surged by 485%.
  • Increased data risks: sensitive data submission jumped 156%, led by customer support data.
  • Threats are hidden: the majority of AI use on personal accounts lacks enterprise safeguards.
  • Security vulnerabilities: AI tool use increases the risk of data breaches and exposure.

The risks are real and the problem is growing. Now is the time to get ahead of it:

1. Establish policies for AI use and for development/deployment.
2. Define and communicate an AI ethics posture.
3. Incorporate cybersecurity, privacy, and compliance teams early into such programs.
4. Drive awareness and compliance by including these AI topics in employee and vendor training.

Overall, the goal is to build awareness and collaboration. Leveraging AI can bring tremendous benefits, but it should be done in a controlled way that aligns with enterprise oversight requirements.

"Do what is great, while it is small." A little effort now can help avoid serious mishaps in the future!

The post The Rise and Risks of Shadow AI appeared first on Security Boulevard.

OpenAI backpedals on scandalous tactic to silence former employees

24 May 2024 at 11:32

OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Former and current OpenAI employees received a memo this week indicating that the AI company hopes to end the most embarrassing scandal that Sam Altman has ever faced as OpenAI's CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that, since at least 2019, employees were pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism of OpenAI's work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

