
How AI Will Change Democracy

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication. The problem with AIs trading stocks isn’t that they’re better than humans—it’s that they’re faster. But computers are better at chess and Go because they use more sophisticated strategies than humans. We’re worried about AI-controlled social media accounts because they operate on a superhuman scale...

The post How AI Will Change Democracy appeared first on Security Boulevard.

A NIST AI RMF Summary – Source: securityboulevard.com


Source: securityboulevard.com – Author: Cameron Delfin Artificial intelligence (AI) is revolutionizing numerous sectors, but its integration into cybersecurity is particularly transformative. AI enhances threat detection, automates responses, and predicts potential security breaches, offering a proactive approach to cybersecurity. However, it also introduces new challenges, such as AI-driven attacks and the complexities of securing AI systems. […]

The post A NIST AI RMF Summary – Source: securityboulevard.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy

Episode 331 of the Shared Security Podcast discusses privacy and security concerns around two major technological developments: the introduction of ‘Recall,’ a new Windows PC feature that is part of Microsoft’s Copilot+ and captures desktop screenshots for AI-powered search tools, and Slack’s policy of using user data to train machine-learning features, with users opted in by […]

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Shared Security Podcast.

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Security Boulevard.


The Rise and Risks of Shadow AI

 

Shadow AI, the internal use of AI tools and services without the express knowledge of enterprise oversight teams (e.g., IT, legal, cybersecurity, compliance, and privacy, to name a few), is becoming a problem.

Workers are flocking to third-party AI services (e.g., websites like ChatGPT), and savvy technologists are often importing models and building internal AI systems (it really is not that difficult) without telling enterprise operations teams. Both practices are on the rise, and many organizations are blind to the risks.

According to a recent Cyberhaven report:

  • AI is accelerating: corporate data input into AI tools surged by 485%.
  • Data risks are increasing: sensitive data submission jumped 156%, led by customer support data.
  • Threats are hidden: the majority of AI use on personal accounts lacks enterprise safeguards.
  • Security vulnerabilities: there is increased risk of data breaches and exposure through AI tool use.


The risks are real and the problem is growing. Now is the time to get ahead of it:

1. Establish policies for AI use and development/deployment.
2. Define and communicate an AI ethics posture.
3. Incorporate cybersecurity, privacy, and compliance teams early into such programs.
4. Drive awareness and compliance by including these AI topics in employee and vendor training.

Overall, the goal is to build awareness and collaboration. Leveraging AI can bring tremendous benefits, but it should be done in a controlled way that aligns with enterprise oversight requirements.
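As a concrete starting point for the awareness step, one simple way an oversight team might begin discovering shadow AI use is by scanning web-proxy logs for traffic to known third-party AI services. The sketch below is a hypothetical illustration; the domain list and log format are assumptions, not an official inventory.

```python
# Hypothetical shadow-AI discovery sketch: flag proxy log lines whose
# destination is a known third-party AI service. The domain set and the
# space-separated log format ('<timestamp> <user> <domain> <bytes>') are
# illustrative assumptions for this example only.

AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests to AI service domains."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_SERVICE_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

sample = [
    "2024-05-01T09:00Z alice chat.openai.com 5120",
    "2024-05-01T09:01Z bob intranet.example.com 200",
]
print(flag_shadow_ai(sample))  # [('alice', 'chat.openai.com')]
```

In practice a real program would pair this kind of inventory with the policy and training steps above, rather than treating detection alone as a fix.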


"Do what is great, while it is small" -
A little effort now can help avoid serious mishaps in the future!

The post The Rise and Risks of Shadow AI appeared first on Security Boulevard.

New Attack Against Self-Driving Car AI

This is another attack that convinces the AI to ignore road signs:

Due to the way CMOS cameras operate, rapidly changing light from fast flashing diodes can be used to vary the color. For example, the shade of red on a stop sign could look different on each line depending on the time between the diode flash and the line capture.

The result is the camera capturing an image full of lines that don’t quite match each other. The information is cropped and sent to the classifier, usually based on deep neural networks, for interpretation. Because it’s full of lines that don’t match, the classifier doesn’t recognize the image as a traffic sign...
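The line-by-line mismatch described above can be sketched with a toy simulation. All parameters here (scanline readout time, LED flash period, duty cycle) are hypothetical assumptions for illustration, not values from the research being summarized.

```python
# Toy rolling-shutter simulation (illustrative assumptions only).
# A CMOS sensor exposes one scanline at a time. If an LED flickers faster
# than the frame is read out, each scanline samples the LED in a different
# state, so a uniformly red stop sign comes out striped.

LINE_READOUT_US = 30    # assumed time to read one scanline (microseconds)
FLASH_PERIOD_US = 100   # assumed LED on/off cycle length
DUTY_ON_US = 50         # LED is on for the first half of each cycle

def line_color(line_index: int) -> str:
    """Return the sampled shade of one scanline of the (red) sign."""
    capture_time = line_index * LINE_READOUT_US
    phase = capture_time % FLASH_PERIOD_US
    # While the LED is on, it shifts the perceived shade of that line.
    return "bright_red" if phase < DUTY_ON_US else "dark_red"

frame = [line_color(i) for i in range(10)]
print(frame)
# Adjacent lines disagree with each other, so the cropped sign no longer
# looks like one consistent red patch to the downstream classifier.
```

The key point the simulation makes is that the mismatch comes purely from timing: nothing about the sign itself changes, only which scanlines happen to be read while the diode is lit.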

The post New Attack Against Self-Driving Car AI appeared first on Security Boulevard.

Emerald Divide Uses GenAI to Exploit Social, Political Divisions in Israel Using Disinformation


Bad actors are always ready to exploit political strife to their own ends. Right now, they’re doing so with the conflict in the Middle East. A holistic defense against influence networks requires collaboration between government, technology companies and security research organizations.

The post Emerald Divide Uses GenAI to Exploit Social, Political Divisions in Israel Using Disinformation appeared first on Security Boulevard.
