
Cyberattack Hits Dubai: Daixin Team Claims to Steal Confidential Data, Residents at Risk

The city of Dubai, known for its affluence and wealthy residents, has allegedly been hit by a ransomware attack claimed by the cybercriminal group Daixin Team. The group announced the City of Dubai ransomware attack on its dark web leak site on Wednesday, claiming to have stolen between 60 and 80 GB of data from the Government of Dubai's network systems. According to the Daixin Team's post, the stolen data includes ID cards, passports, and other personally identifiable information (PII). Although the group noted that the 33,712 files have not been fully analyzed or dumped on the leak site, the potential exposure of such sensitive information is concerning. Dubai, a city with over three million residents and the highest concentration of millionaires globally, presents a rich target for cybercriminals.

[Image: City of Dubai ransomware attack claim (Source: Dark Web)]

Potential Impact of the City of Dubai Ransomware Attack

The stolen data reportedly contains extensive personal information, such as full names, dates of birth, nationalities, marital statuses, job descriptions, supervisor names, housing statuses, phone numbers, addresses, vehicle information, primary contacts, and language preferences. Additionally, the databases appear to include business records, hotel records, land ownership details, HR records, and corporate contacts.

[Image: Daixin Team (Source: Dark Web)]

Given that over 75% of Dubai's residents are expatriates, the stolen data provides a trove of information that could be used for targeted spear phishing attacks, vishing attacks, identity theft, and other malicious activities. The city's status as a playground for the wealthy, including 212 centi-millionaires and 15 billionaires, further heightens the risk of targeted attacks.

Daixin Team: A Persistent Threat

The Daixin Team, a Russian-speaking ransomware and data extortion group, has been active since at least June 2022. Known primarily for its cyberattacks on the healthcare sector, Daixin has recently expanded its operations to other industries, employing sophisticated hacking techniques. A 2022 report by the US Cybersecurity and Infrastructure Security Agency (CISA) highlights Daixin Team's focus on the healthcare sector in the United States.

However, the group has also targeted other sectors, including the hospitality industry. Recently, Daixin claimed responsibility for a cyberattack on Omni Hotels & Resorts, exfiltrating sensitive data, including records of all visitors dating back to 2017. In another notable case, Bluewater Health, a prominent hospital network in Ontario, Canada, fell victim to a cyberattack attributed to Daixin Team. The attack affected several hospitals, including Windsor Regional Hospital, Erie Shores Healthcare, Chatham-Kent Health, and Hôtel-Dieu Grace Healthcare.

The Government of Dubai has yet to release an official statement regarding the ransomware attack. At the time of writing, the Dubai government's official websites were fully functional and showed no visible signs of compromise, leaving the alleged ransomware attack unverified.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI

By: Alan J
11 June 2024 at 19:14

In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed to back its new Apple Intelligence with safety and transparency while extending Apple devices into the cloud. The move comes in recognition of widespread concerns surrounding the combination of artificial intelligence and cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. "For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud," said an Apple spokesperson.

[Image: Private Cloud Compute and Apple Intelligence (Source: security.apple.com)]

At the heart of PCC is Apple's stated commitment to on-device processing. "When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services," the spokesperson explained. "But for the most sensitive data, we believe end-to-end encryption is our most powerful defense."

Despite this commitment, Apple has stated that for more sophisticated AI requests, Apple Intelligence needs to leverage larger, more complex models in the cloud. This presented a challenge to the company, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations:
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.

Apple Invites Experts to Test Standards; Online Reactions Mixed

At this week's Worldwide Developers Conference, Apple's CEO Tim Cook described Apple Intelligence as a "personal intelligence system" that could understand and contextualize personal data to deliver results that are "incredibly useful and relevant," making "devices even more useful and delightful." Apple Intelligence mines and processes data across apps, software, and services on Apple devices. This mined data includes emails, images, messages, texts, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences, and past search history.

The new PCC system attempts to ease consumer privacy and safety concerns. In its description of 'Verifiable transparency,' Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple's assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft's Recall. In reaction to Apple's announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an "unacceptable security violation." Others have also raised questions about the information that might be sent to OpenAI.

[Images: Reactions on X to the Apple Intelligence announcement (Source: X.com)]

According to Apple's statements, requests made on its devices are not stored by OpenAI, and users' IP addresses are obscured. Apple stated that it would also add "support for other AI models in the future."

Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information. "Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple's traditional inclination toward offering polished products that it has full control over."

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Leveraging AI to Enhance Threat Detection and Response to Anomalies

By Srinivas Shekar, CEO and Co-Founder, Pantherun Technologies

In the first quarter of 2024, the global threat landscape continued to present significant challenges across various sectors. According to an insight report by Accenture & World Economic Forum, professional services remained the primary target for cyberattacks, accounting for 24% of cases; the manufacturing sector followed with 13% of incidents, while the financial services and healthcare sectors also faced substantial threats, with 9% and 8% of cases respectively. These statistics underscore the escalating complexity and frequency of cyberattacks, highlighting the urgent need for advanced cybersecurity measures. Traditional threat detection methods are increasingly inadequate, prompting a shift towards innovative solutions such as artificial intelligence (AI) to enhance threat detection, response, and data protection in real time.

Understanding AI and Cybersecurity Anomalies

Artificial intelligence has emerged as a powerful tool in cybersecurity, primarily due to its ability to identify and respond to anomalies. Research by Capgemini reveals that 69% of organizations believe AI is essential for detecting and responding to cybersecurity threats. AI-driven systems analyze data in real time, flagging unusual activities that might go unnoticed by conventional methods. This capability is vital as the volume of cyber threats continues to grow, with an estimated 15.4 million data records compromised worldwide in the third quarter of 2022 alone.

At its core, AI involves the use of algorithms and machine learning to analyze vast amounts of data and identify patterns. In the context of cybersecurity, AI can distinguish between normal and abnormal behavior within a network. These abnormalities, often referred to as anomalies, are critical in identifying potential security risks. For instance, AI can detect unusual login attempts, unexpected data transfers, or irregular user behaviors that might indicate a breach. The ability to spot these anomalies is crucial because many cyberattacks involve subtle and sophisticated methods that traditional security systems might miss. By continuously monitoring network activity and learning from each interaction, AI can provide a dynamic and proactive defense against threats, safeguarding both encrypted and unencrypted data.
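
To make the anomaly-detection idea concrete, here is a minimal sketch that trains an unsupervised model on historical login events and flags outliers. It assumes scikit-learn is available, and the features (login hour, failed attempts, distance from the usual location) are hypothetical illustrations rather than any particular vendor's implementation.

```python
# Minimal sketch: unsupervised anomaly detection on login events.
# Assumes scikit-learn; feature names are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, failed_attempts, km_from_usual_location]
baseline_logins = np.array([
    [9, 0, 2], [10, 1, 5], [14, 0, 1], [18, 0, 3],
    [11, 0, 0], [16, 1, 4], [8, 0, 2], [13, 0, 6],
])

# Train on "normal" history; contamination is the expected outlier share.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_logins)

new_events = np.array([
    [10, 0, 3],      # looks routine
    [3, 7, 4200],    # 3 a.m., many failures, distant location
])

for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(event, "->", status)
```

In a real deployment, the feature set and training window would be tuned to the organization's own telemetry; the point here is only that the model learns "normal" behavior rather than relying on predefined signatures.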

Using AI to Enhance Threat Detection

Traditional threat detection methods rely heavily on predefined rules and signatures of known threats. While effective to some extent, these methods are often reactive, meaning they can only identify threats that have been previously encountered and documented. AI, on the other hand, enhances threat detection by leveraging its pattern recognition capabilities to identify anomalies more quickly and accurately. For example, AI can analyze network traffic in real time, learning what constitutes normal behavior and flagging anything that deviates from this baseline. This allows for the detection of zero-day attacks much faster than conventional methods. By doing so, AI reduces the time it takes to identify and respond to potential threats, significantly enhancing the overall security posture of an organization.
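
As a rough sketch of the baselining approach described above, the following code learns a rolling baseline of request volume and flags samples that deviate sharply from it. The window size and z-score threshold are illustrative assumptions, not recommended production values.

```python
# Minimal sketch: flag traffic volumes that deviate from a learned baseline.
# Window size and z-score threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class TrafficBaseline:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent requests-per-minute samples
        self.threshold = threshold            # z-score above which we alert

    def observe(self, requests_per_minute: float) -> bool:
        """Return True if the new sample looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:           # need some history first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(requests_per_minute - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(requests_per_minute)
        return anomalous

baseline = TrafficBaseline()
for sample in [100, 110, 95, 105, 102, 98, 101, 99, 103, 100, 2500]:
    if baseline.observe(sample):
        print(f"Deviation from baseline detected: {sample} req/min")
```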

AI-Powered Response Mechanisms

 Once a threat is detected, the speed and efficiency of the response are critical in minimizing damage. AI plays a pivotal role in automating response mechanisms, ensuring quicker and more effective actions are taken when a threat is recognized. Automated responses can include isolating affected systems, alerting security teams, and initiating countermeasures to neutralize the threat. Moreover, AI can assist in managing encryption keys and applying real-time data protection strategies. By incorporating AI and machine learning, encryption techniques become more adaptive and resilient, making it harder for attackers to decrypt sensitive information. These automated, AI-driven responses help contain threats swiftly, reducing the impact of security breaches.
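
A minimal sketch of what such an automated response hook might look like is shown below: when a detection fires, the handler notifies the security team and, for high-severity cases, isolates the affected host. The isolate_host and notify_team helpers are hypothetical placeholders for whatever EDR and alerting integrations an organization actually uses.

```python
# Minimal sketch of an automated response hook.
# isolate_host() and notify_team() are hypothetical placeholders for
# real EDR / alerting integrations, not any specific vendor API.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("responder")

@dataclass
class Detection:
    host: str
    rule: str
    severity: str  # e.g. "low", "medium", "high"

def isolate_host(host: str) -> None:
    log.info("Isolating host %s from the network (placeholder action)", host)

def notify_team(detection: Detection) -> None:
    log.info("Alerting SOC: %s triggered %s (%s)",
             detection.host, detection.rule, detection.severity)

def handle_detection(detection: Detection) -> None:
    """Contain high-severity detections automatically; escalate the rest."""
    notify_team(detection)
    if detection.severity == "high":
        isolate_host(detection.host)

handle_detection(Detection(host="win10-fin-042",
                           rule="suspicious-encryption-burst",
                           severity="high"))
```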

AI in Encryption and Data Protection

The role of AI in encryption and data protection is particularly significant. AI can enhance encryption techniques by optimizing key generation and management processes. Traditional encryption methods often rely on static keys, which can be vulnerable to attacks if not managed properly. AI introduces dynamic key generation, creating unique and complex keys for each session, making it exponentially harder for attackers to crack. Additionally, AI can continuously monitor encrypted data for signs of tampering or unauthorized access. This proactive approach ensures data integrity and confidentiality, providing an extra layer of security that evolves alongside emerging threats. By leveraging AI in encryption, organizations can better protect their sensitive information and maintain trust with their customers and stakeholders.
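
The per-session key idea can be illustrated with a standard key-derivation function: rather than reusing one static key, each session derives a unique key from a master secret plus a fresh salt. The sketch below uses the cryptography package's HKDF and is a generic illustration of dynamic key derivation, not the AI-driven scheme the article alludes to.

```python
# Minimal sketch: derive a unique key per session instead of reusing a static key.
# Uses the 'cryptography' package; this illustrates per-session derivation in
# general, not any AI-specific key-management product.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_secret = os.urandom(32)  # long-term secret; kept in an HSM/KMS in practice

def derive_session_key(master: bytes, session_id: str) -> bytes:
    """Derive a fresh 256-bit key bound to this session."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=os.urandom(16),            # fresh salt per session
        info=session_id.encode(),       # binds the key to the session context
    ).derive(master)

key_a = derive_session_key(master_secret, "session-001")
key_b = derive_session_key(master_secret, "session-002")
print(key_a.hex() != key_b.hex())  # True: each session gets its own key
```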

Understanding Challenges and Opportunities for the Future

Despite its potential, integrating AI with cybersecurity is not without challenges. Privacy concerns, false positives, and ethical dilemmas are significant hurdles that need to be addressed. For instance, the vast amount of data required for AI to function effectively raises questions about user privacy and data protection. Additionally, AI systems can sometimes generate false positives, leading to unnecessary alerts and potentially desensitizing security teams to real threats.

However, the opportunities for AI in cybersecurity are vast. As AI technology continues to evolve and its reliance on large volumes of data for decision-making decreases, it will become even more adept at identifying and mitigating threats. Future advancements may include more sophisticated AI models capable of predicting attacks before they occur, enhanced collaboration between AI systems and human security experts, and acceleration of AI in silicon for faster response.

The integration of AI into cybersecurity represents a monumental shift in how we approach threat detection and response. By leveraging AI's capabilities, organizations can enhance their defenses against increasingly sophisticated cyber threats, ensuring the safety and integrity of their data in the digital age. As we continue to navigate the complexities of cybersecurity, the role of AI will undoubtedly become even more crucial, paving the way for a more secure and resilient digital future.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.

AI Prompt Engineering for Cybersecurity: The Details Matter

AI has been a major focus of the Gartner Security and Risk Management Summit in National Harbor, Maryland this week, and the consensus has been that while large language models (LLMs) have so far overpromised and under-delivered, there are still AI threats and defensive use cases that cybersecurity pros need to be aware of. Jeremy D'Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that hacker uses of AI so far include improved phishing and social engineering, with deepfakes a particular concern. But D'Hoinne and Director Analyst Kevin Schmidt agreed in a joint panel that no novel attack techniques have arisen from AI yet, just improvements on existing techniques like business email compromise (BEC) and voice scams.

AI security tools likewise remain underdeveloped, with AI assistants perhaps the most promising cybersecurity application so far, able to potentially help with patching, mitigations, alerts and interactive threat intelligence. D'Hoinne cautions that the tools should be used as an adjunct to security staffers so they don't lose their ability to think critically.

AI Prompt Engineering for Cybersecurity: Precision Matters

Using AI assistants and LLMs for cybersecurity use cases was the focus of a separate presentation by Schmidt, who cautioned that AI prompt engineering needs to be very specific for security uses to overcome the limitations of LLMs, and even then the answer may only get you 70%-80% toward your goal. Outputs need to be validated, and junior staff will require the oversight of senior staff, who will more quickly be able to determine the significance of the output. Schmidt also cautioned that chatbots like ChatGPT should only be used for noncritical data.

Schmidt gave examples of good and bad AI security prompts for helping security operations teams. "Create a query in my <name of SIEM> to identify suspicious logins" is too vague, he said. He gave an example of a better way to craft a SIEM query: "Create a detection rule in <name of SIEM> to identify suspicious logins from multiple locations within the last 24 hours. Provide the <SIEM> query language and explain the logic behind it and place the explanations in tabular format." That prompt should produce something like the following output:

[Image: SIEM query AI prompt output (source: Gartner)]

Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: "Analyze the firewall logs for any unusual patterns or anomalies." A better prompt would be: "Analyze the firewall logs from the past 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing." That produced the following output:

[Image: Firewall log prompt output (source: Gartner)]

Another example involved XDR tools. Instead of a weak prompt like "Summarize the top two most critical security alerts in a vendor's XDR," Schmidt recommended something along these lines: "Summarize the top two most critical security alerts in a vendor's XDR, including the alert ID, description, severity and affected entities. This will be used for the monthly security review report. Provide the response in tabular form." That prompt produced the following output:

[Image: XDR alert prompt output (source: Gartner)]
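
The pattern behind these examples (state the task, the scope and time window, the intended audience, and the output format) lends itself to a reusable template. The sketch below is a generic Python illustration of that idea; the placeholder SIEM name and parameter values are assumptions, not a Gartner-provided tool.

```python
# Minimal sketch: build specific, reviewable security prompts from a template.
# The SIEM name and field values are hypothetical examples.
SECURITY_PROMPT = (
    "Create a detection rule in {siem} to identify {behavior} "
    "within the last {window}. Provide the {siem} query language, "
    "explain the logic behind it, and present the explanation in {output_format}."
)

def build_prompt(siem: str, behavior: str, window: str, output_format: str) -> str:
    return SECURITY_PROMPT.format(
        siem=siem, behavior=behavior, window=window, output_format=output_format
    )

print(build_prompt(
    siem="ExampleSIEM",
    behavior="suspicious logins from multiple locations",
    window="24 hours",
    output_format="tabular format",
))
```

As Schmidt noted, whatever the model returns from such a prompt still needs validation by senior staff before it goes anywhere near production detections.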

Other Examples of AI Security Prompts

Schmidt gave two more examples of good AI prompts, one on incident investigation and another on web application vulnerabilities. For security incident investigations, an effective prompt might be: "Provide a detailed explanation of incident DB2024-001. Include the timeline of events, methods used by the attacker and the impact on the organization. This information is needed for an internal investigation report. Produce the output in tabular form." That prompt should lead to something like the following output:

[Image: Incident response AI prompt output (source: Gartner)]

For web application vulnerabilities, Schmidt recommended the following approach: "Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation steps. This will be used to prioritize our security patching efforts. Produce this in tabular format." That should produce something like this output:

[Image: Web application vulnerability prompt output (source: Gartner)]

Tools for AI Security Assistants

Schmidt listed some of the GenAI tools that security teams might use, ranging from chatbots to SecOps AI assistants – such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI – and startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt's slide below).

[Image: GenAI tools for possible cybersecurity use (source: Gartner)]

Generative AI and Data Privacy: Navigating the Complex Landscape

By Neelesh Kripalani, Chief Technology Officer, Clover Infotech

Generative AI, which includes technologies such as deep learning, natural language processing, and speech recognition for generating text, images, and audio, is transforming various sectors from entertainment to healthcare. However, its rapid advancement has raised significant concerns about data privacy. To navigate this intricate landscape, it is crucial to understand the intersection of AI capabilities, ethical considerations, legal frameworks, and technological safeguards.

Data Privacy Challenges Raised by Generative AI

  • Not securing data during collection or processing: Generative AI raises significant data privacy concerns due to its need for vast amounts of diverse data, often including sensitive personal information that is collected without explicit consent and is difficult to anonymize effectively. Model inversion attacks and data leakage risks can expose private information, while biases in training data can lead to unfair or discriminatory outputs.
  • The risk of generated content: The ability of generative AI to produce highly realistic fake content raises serious concerns about its potential for misuse. Whether creating convincing deepfake videos or generating fabricated text and images, there is a significant risk of this content being used for impersonation, spreading disinformation, or damaging individuals' reputations.
  • Lack of accountability and transparency: Since GenAI models operate through complex layers of computation, it is difficult to gain visibility into how these systems arrive at their outputs, or to trace the specific steps and factors that lead to a particular decision. This not only hinders trust and accountability but also complicates the tracing of data usage and makes it tedious to ensure compliance with data privacy regulations. Additionally, unidentified biases in the training data can lead to unfair outputs, and the creation of highly realistic but fake content, like deepfakes, poses risks to content authenticity and verification. Addressing these issues requires improved explainability, traceability, and adherence to regulatory frameworks and ethical guidelines.
  • Lack of fairness and ethical considerations: Generative AI models can perpetuate or even exacerbate existing biases present in their training data. This can lead to unfair treatment or misrepresentation of certain groups, raising ethical issues.

Here’s How Enterprises Can Navigate These Challenges

  • Understand and map the data flow: Enterprises must maintain a comprehensive inventory of the data that their GenAI systems process, including data sources, types, and destinations. They should also create a detailed data flow map to understand how data moves through their systems.
  • Implement strong data governance: In line with the data minimization principle, enterprises must collect, process, and retain only the minimum amount of personal data necessary to fulfill a specific purpose. In addition, they should develop and enforce robust data privacy policies and procedures that comply with relevant regulations.
  • Ensure data anonymization and pseudonymization: Techniques such as anonymization and pseudonymization can be applied to reduce the chances of data re-identification (see the sketch below).
  • Strengthen security measures: Implement further security measures such as encryption for data at rest and in transit, access controls to protect against unauthorized access, and regular monitoring and auditing to detect and respond to potential privacy breaches.

To summarize, organizations must begin by complying with the latest data protection laws and practices, and strive to use data responsibly and ethically. Further, they should regularly train employees on data privacy best practices to effectively manage the challenges posed by Generative AI while leveraging its benefits responsibly and ethically.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
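
As a concrete illustration of the pseudonymization technique listed above, the sketch below replaces a direct identifier with a keyed HMAC digest before a record is passed to an AI pipeline. The secret key, field names, and truncation length are hypothetical; a real deployment would pair this with proper key management and access controls.

```python
# Minimal sketch: pseudonymize direct identifiers with a keyed HMAC before
# records reach an AI pipeline. Key handling and field names are illustrative.
import hmac
import hashlib

PSEUDONYM_KEY = b"replace-with-a-managed-secret"   # hypothetical; keep in a KMS

def pseudonymize(value: str) -> str:
    """Deterministic, non-reversible token for a direct identifier."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane.doe@example.com", "country": "AE", "purchase_total": 149.0}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)
```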

Hugging Face Discloses Unauthorized Access to Spaces Platform

Hackers penetrated artificial intelligence (AI) company Hugging Face's platform to access its user secrets, the company revealed in a blog post. The Google and Amazon-funded Hugging Face detected unauthorized access to its Spaces platform, which is a hosting service for showcasing AI/machine learning (ML) applications and collaborative model development. In short, the platform allows users to create, host, and share AI and ML applications, as well as discover AI apps made by others.

Hugging Face Hack Exploited Tokens

Hugging Face suspects that a subset of Spaces' secrets may have been accessed without authorization. In response to this security event, the company revoked several HF tokens present in those secrets and notified affected users via email.
"We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default," Hugging Face said.
The company has not disclosed the number of users impacted by the incident, which remains under investigation. Hugging Face said it has made "significant" improvements to tighten Spaces' security in the past few days, including org tokens that offer better traceability and audit capabilities, implementing a key management service (KMS), and expanding its systems' ability to identify leaked tokens and invalidate them. It is also investigating the breach with external cybersecurity experts and has reported the incident to law enforcement and data protection agencies.
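
For users following that advice, a minimal post-rotation check might look like the sketch below: it reads the new token from an environment variable (the HF_TOKEN name is an assumption for this example) and verifies it against the Hub using the huggingface_hub client.

```python
# Minimal sketch: after rotating a Hugging Face token, load it from the
# environment (never hard-code it) and confirm it works against the Hub.
# The HF_TOKEN variable name is an assumption for this example.
import os
from huggingface_hub import HfApi

token = os.environ.get("HF_TOKEN")
if not token:
    raise SystemExit("Set HF_TOKEN to the newly issued fine-grained token.")

api = HfApi(token=token)
identity = api.whoami()   # fails with an HTTP error if the token was revoked
print(f"Token is valid for account: {identity['name']}")
```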

Growing Threats Against AI-as-a-Service Providers

Risks faced by AI-as-a-service (AIaaS) providers like Hugging Face are increasing rapidly, as the explosive growth of this sector makes them a lucrative target for attackers who seek to exploit the platforms for malicious purposes. In early April, cloud security firm Wiz detailed two security issues in Hugging Face that could allow adversaries to gain cross-tenant access and poison AI/ML models by taking over the continuous integration and continuous deployment (CI/CD) pipelines. "If a malicious actor were to compromise Hugging Face's platform, they could potentially gain access to private AI models, datasets and critical applications, leading to widespread damage and potential supply chain risk," Wiz said in a report detailing the threat.

One of the security issues that the Wiz researchers identified was related to the Hugging Face Spaces platform. Wiz found that an attacker could execute arbitrary code during application build time, enabling them to scrutinize network connections from their machine. Its examination revealed a connection to a shared container registry that housed images belonging to other customers, which the researchers could manipulate.

Previous research by HiddenLayer identified flaws in the Hugging Face Safetensors conversion service, which could enable attackers to hijack AI models submitted by users and stage supply chain attacks. Hugging Face also confirmed in December that it fixed critical API flaws that were reported by Lasso Security. Hugging Face said it is actively addressing these security concerns and continues to investigate the recent unauthorized access to ensure the safety and integrity of its platform and users.

OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI announced a new safety and security committee as it begins training a new AI model intended to replace the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, highlighting its role in advising the board on crucial safety and security decisions related to OpenAI’s projects and operations. The creation of the committee comes amid ongoing debates about AI safety at OpenAI. The company faced scrutiny after Jan Leike, a researcher, resigned, criticizing OpenAI for prioritizing product development over safety. Following this, co-founder and chief scientist Ilya Sutskever also resigned, leading to the disbandment of the "superalignment" team that he and Leike co-led, which was focused on addressing AI risks. Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety. The company expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days." 
The committee's initial task is to evaluate and further develop OpenAI’s existing processes and safeguards. They are expected to make recommendations to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations. The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." These frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversations based on extensive datasets.

The company also recently launched its newest flagship model, GPT-4o ('o' stands for omni), a multilingual, multimodal generative pre-trained transformer designed by OpenAI. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free, but with a usage limit that is five times higher for ChatGPT Plus subscribers. GPT-4o has a context window supporting up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

U.S. House Panel Takes On AI Security and Misuse

The difficulty of defending against the misuse of AI – and possible solutions – was the topic of a U.S. congressional hearing today. Data security and privacy officials and advocates were among those testifying before the House Committee on Homeland Security at a hearing titled, “Advancing Innovation (AI): Harnessing Artificial Intelligence to Defend and Secure the Homeland.” The committee plans to include AI in legislation that it’s drafting, said chairman Mark E. Green (R-TN). From cybersecurity and privacy threats to election interference and nation-state attacks, the hearing highlighted AI’s wide-ranging threats and the challenges of mounting a defense. Nonetheless, the four panelists at the hearing – representing technology and cybersecurity companies and a public interest group – put forth some ideas, both technological and regulatory.

Cybercrime Gets Easier

Much of the testimony – and concerns raised by committee members – focused on the advantages that AI has given cybercriminals and nation-state actors, advantages that cybersecurity officials say must be countered by increasingly building AI into products. “AI is democratizing the threat landscape by providing any aspiring cybercriminal with easy-to-use, advanced tools capable of achieving sophisticated outcomes,” said Ajay Amlani, senior vice president at biometric company iProov.
“The crime as a service dark web is very affordable. The only way to combat AI-based attacks is to harness the power of AI in our cybersecurity strategies.”
AI can also help cyber defenders make sense of the overwhelming amount of data and alerts they have to contend with, said Michael Sikorski, CTO of Palo Alto Networks’ Unit 42. “To stop the bad guys from winning, we must aggressively leverage AI for cyber defense,” said Sikorski, who detailed some of the “transformative results” customers have achieved from AI-enhanced products.
“Outcomes like these are necessary to stop threat actors before they can encrypt systems or steal sensitive information, and none of this would be possible without AI,” Sikorski added.
Sikorski said organizations must adopt “secure AI by design” principles and AI usage oversight. “Organizations will need to secure every step of the AI application development lifecycle and supply chain to protect AI data from unauthorized access and leakage at all times,” he said, noting that the principles align with the NIST AI risk management framework released last month.

Election Security and Disinformation Loom Large

Ranking member Bennie Thompson (D-MS) asked the panelists what can be done to improve election security and defend against interference, issues of critical importance in a presidential election year. Amlani said digital identity could play an important role in battling disinformation and interference, principles included in section 4.5 of President Biden’s National Cyber Security Strategy that have yet to be implemented.
“Our country is one of the only ones in the western world that doesn't have a digital identity strategy,” Amlani said.
“Making sure that it's the right person, it's a real person that's actually posting and communicating, and making sure that that person is in fact right there at that time, is a very important component to make sure that we know who it is that's actually generating content online. There is no identity layer to the internet currently today.”

Safe AI Use Guidelines Proposed by Public Policy Advocate

The most detailed proposal for addressing the AI threat came from Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy and Technology, who argued that the “AI arms race” should proceed responsibly.
“Principles for responsible use of AI technologies should be applied broadly across development and deployment,” Laperruque said.
Laperruque gave the Department of Homeland Security credit for starting the process with its recently published AI roadmap. He said government use of AI should be based on seven principles:
  1. Built upon proper training data
  2. Subject to independent testing and high performance standards
  3. Deployed only within the bounds of the technology’s designed function
  4. Used exclusively by trained staff and corroborated by human review
  5. Subject to internal governance mechanisms that define and promote responsible use
  6. Bound by safeguards to protect human rights and constitutional values
  7. Regulated by institutional mechanisms for ensuring transparency and oversight
"If we rush to deploy AI quickly rather than carefully, it will harm security and civil liberties alike," Laperruque concluded. "But if we establish a strong foundation now for responsible use, we can reap benefits well into the future."

Media Disclaimer: This article is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Palo Alto Networks Looks for Growth Amid Changing Cybersecurity Market

After years of hypergrowth, Palo Alto Networks' (PANW) revenue growth has been slowing, suggesting major shifts in cybersecurity spending patterns and raising investor concerns about the cybersecurity giant's long-term growth potential. Even as overall cybersecurity spending is predicted to remain strong, Palo Alto's revenue growth has dropped to roughly half of the 30% growth rate investors have enjoyed for the last several years.

Those concerns came to a head in February, when Palo Alto's stock plunged 28% in a single day after the company slashed its growth outlook amid a move to "platformization," with the company essentially giving away some products in hopes of luring more customers to its broader platform. Investor caution continued yesterday after the company merely reaffirmed its financial guidance, suggesting the possibility of a longer road back to hypergrowth. PANW shares were down 3% in recent trading after initially falling 10% when the company's latest earnings report was released yesterday.

Fortinet (FTNT), Palo Alto's long-term network security rival, is also struggling amid cybersecurity market uncertainty, as analysts expect the company's growth rate to slow from greater than 30% to around 10%.

SIEM, AI Signal Major Market Shifts

The changes in cybersecurity spending patterns show up most clearly in SIEM market consolidation and AI cybersecurity tools. Buyers may be waiting to see what cybersecurity vendors do with AI. On the company's earnings call late Monday, Palo Alto CEO Nikesh Arora told analysts that he expects the company "will be first to market with capabilities to protect the range of our customers' AI security needs."

Seismic changes in the market for security information and event management (SIEM) systems are another sign of a rapidly changing cybersecurity market. Cisco's (CSCO) acquisition of Splunk in March was just the start of major consolidation among legacy SIEM vendors. Last week, LogRhythm and Exabeam announced merger plans, and on the same day Palo Alto announced plans to acquire QRadar assets from IBM. AI and platformization factored strongly into those announcements. Palo Alto will transition QRadar customers to its Cortex XSIAM next-gen security operations (SOC) platform, and Palo Alto will incorporate IBM's watsonx large language models (LLMs) in Cortex XSIAM "to deliver additional Precision AI solutions." Palo Alto will also become IBM's preferred cybersecurity partner across cloud, network and SOC.

Forrester analysts said of the Palo Alto-IBM deal, "This is the biggest concession of a SIEM vendor to an XDR vendor so far and signals a sea change for the threat detection and response market. Security buyers may be finally getting the SIEM alternative they've been seeking for years."

The moves may yet be enough to return Palo Alto to better-than-expected growth, but one data point on Monday's earnings call suggests buyers may be cautious. "We have initiated way more conversations in our platformization than we expected," said Arora. "If meetings were a measure of outcome, they have gone up 30%, and a majority of them have been centered on platform opportunities." It remains to be seen if sales will follow the same growth trajectory as meetings.

For now, it's clear that even as the overall cybersecurity market remains strong, the undercurrents suggest rapid changes in where that money is going.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes

The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products. The warning comes on the back of the conclusion of an investigation by the U.K.'s Information Commissioner's Office (ICO) into Snap, Inc.'s launch of the 'My AI' chatbot. The investigation focused on the company's approach to assessing data protection risks. The ICO's early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat's 'My AI' chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI was an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share over 4.75 billion Snaps a day on average. The My AI bot uses OpenAI's GPT technology to answer questions, provide recommendations and chat with users. It can respond to typed or spoken information and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers since February 27, 2023, 'My AI' was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6 over a "potential failure" to assess privacy risks to several million 'My AI' users in the UK, including children aged 13 to 17. "The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI," said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
On the basis of the ICO’s investigation that followed, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’. Snap demonstrated to the ICO that it had implemented suitable mitigations. “The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said. Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” The social media platform has integrated safeguards and tools like blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO's extensive guidance on data protection and AI.

The ICO's investigation into Snap's 'My AI' chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals' data privacy and protection rights. The final Commissioner's decision regarding Snap's 'My AI' chatbot will be published in the coming weeks.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.