
OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI Announces Safety and Security Committee

OpenAI announced a new safety and security committee as it begins training a new AI model intended to succeed the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, saying it will advise the board on crucial safety and security decisions related to OpenAI’s projects and operations.

The creation of the committee comes amid ongoing debate about AI safety at OpenAI. The company faced scrutiny after co-founder and chief scientist Ilya Sutskever and researcher Jan Leike resigned; Leike criticized OpenAI for prioritizing product development over safety. Their departures led to the disbandment of the "superalignment" team the two co-led, which was focused on addressing long-term AI risks. Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety, and expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days." 
The committee's initial task is to evaluate and further develop OpenAI’s existing processes and safeguards, with recommendations due to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner consistent with safety and security considerations.

The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety while maintaining its leadership in AI innovation. By bringing a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." Frontier models are the most advanced AI systems, capable of generating text, images, video, and human-like conversation from extensive training datasets.

The company also recently launched its newest flagship model, GPT-4o (the "o" stands for "omni"), a multilingual, multimodal generative pre-trained transformer. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free to use, with a usage limit that is five times higher for ChatGPT Plus subscribers. Its context window supports up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com


Source: www.securityweek.com – Author: Associated Press

The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide. Only one of seven bills aimed at preventing AI’s penchant to discriminate when making […]

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com was published first on CISO2CISO.COM & CYBER SECURITY GROUP.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses

24 May 2024 at 12:36

Only one of seven bills aimed at preventing AI’s penchant to discriminate when making consequential decisions — including who gets hired, money for a home or medical care — has passed.

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses appeared first on SecurityWeek.

SEC Fines NYSE Owner ICE for Delay in Reporting VPN Breach


The U.S. Securities and Exchange Commission (SEC) announced today that a major player in the U.S. financial system has agreed to pay a $10 million penalty for failing to timely report an April 2021 VPN breach. The Intercontinental Exchange Inc. (ICE), which owns the New York Stock Exchange (NYSE) and a number of other major financial interests, will pay the penalty to settle charges that it “caused the failure of nine wholly-owned subsidiaries, including the New York Stock Exchange, to timely inform the SEC of a cyber intrusion as required by Regulation Systems Compliance and Integrity (Regulation SCI),” the agency said in a press release.

The SEC learned of the ICE breach after contacting the company while assessing reports of similar vulnerabilities several days after the breach occurred. Regulation SCI requires immediate reporting of cybersecurity incidents and an update within 24 hours if the incident is significant.

The SEC said in its order that a third party, identified only as “Company A,” informed ICE that it was potentially impacted by a system intrusion involving a zero-day VPN vulnerability. The following day, ICE “identified malicious code associated with the threat actor that exploited the vulnerability on one of its VPN concentrators, reasonably concluding that it was ... indeed subject to the Intrusion.”

Over the next several days, ICE and its internal InfoSec team took steps to analyze and respond to the intrusion, including taking the compromised VPN device offline, forensically examining it, and reviewing user VPN sessions to identify any intrusions or data exfiltration, the SEC said. ICE also retained a cybersecurity firm to conduct a parallel forensic investigation and worked with the VPN device manufacturer “to confirm the integrity of ICE’s network environment.” Five days after being notified of the vulnerability, ICE InfoSec personnel concluded that the threat actor’s access was limited to the compromised VPN device.
At that point – “four days after first having had a reasonable basis to conclude that unauthorized entry ... had occurred” – legal and compliance personnel at ICE’s regulated subsidiaries were finally notified of the intrusion, the SEC order said. “As a result of ICE’s failures, those subsidiaries did not properly assess the intrusion to fulfill their independent regulatory disclosure obligations under Regulation SCI,” the SEC press release said.

“The reasoning behind the rule is simple: if the SEC receives multiple reports across a number of these types of entities, then it can take swift steps to protect markets and investors,” Gurbir S. Grewal, Director of the SEC’s Division of Enforcement, said in a statement. “Here, the respondents subject to Reg SCI failed to notify the SEC of the intrusion at issue as required. Rather, it was Commission staff that contacted the respondents in the process of assessing reports of similar cyber vulnerabilities.”

The order and penalty reflect not only the seriousness of the violations, but also the fact that several of the respondents have been the subject of prior SEC enforcement actions, including for violations of Reg SCI, Grewal added. Among the ICE subsidiaries involved in the case were Archipelago Trading Services, Inc., NYSE Arca, Inc., ICE Clear Credit LLC, and the Securities Industry Automation Corporation (SIAC), all of which agreed to a cease-and-desist order in addition to ICE’s monetary penalty.

VPN devices have come under increased scrutiny in recent days. The Norwegian National Cyber Security Centre last week issued an advisory to replace SSLVPN and WebVPN solutions with more secure alternatives, citing the repeated exploitation of vulnerabilities in edge network devices. The advisory followed a notice from the NCSC about a targeted attack against SSLVPN products in which attackers exploited multiple zero-day vulnerabilities in Cisco ASA VPNs used to power critical infrastructure facilities.
The campaign had been observed since November 2023.

U.S. House Panel Takes On AI Security and Misuse


The difficulty of defending against the misuse of AI – and possible solutions – was the topic of a U.S. congressional hearing today. Data security and privacy officials and advocates were among those testifying before the House Committee on Homeland Security at a hearing titled “Advancing Innovation (AI): Harnessing Artificial Intelligence to Defend and Secure the Homeland.” The committee plans to include AI in legislation that it is drafting, said Chairman Mark E. Green (R-TN).

From cybersecurity and privacy threats to election interference and nation-state attacks, the hearing highlighted AI’s wide-ranging threats and the challenges of mounting a defense. Nonetheless, the four panelists at the hearing – representing technology and cybersecurity companies and a public interest group – put forth some ideas, both technological and regulatory.

Cybercrime Gets Easier

Much of the testimony – and concerns raised by committee members – focused on the advantages that AI has given cybercriminals and nation-state actors, advantages that cybersecurity officials say must be countered by increasingly building AI into products. “AI is democratizing the threat landscape by providing any aspiring cybercriminal with easy-to-use, advanced tools capable of achieving sophisticated outcomes,” said Ajay Amlani, senior vice president at biometric company iProov.
“The crime as a service dark web is very affordable. The only way to combat AI-based attacks is to harness the power of AI in our cybersecurity strategies.”
AI can also help cyber defenders make sense of the overwhelming amount of data and alerts they have to contend with, said Michael Sikorski, CTO of Palo Alto Networks’ Unit 42. “To stop the bad guys from winning, we must aggressively leverage AI for cyber defense,” said Sikorski, who detailed some of the “transformative results” customers have achieved from AI-enhanced products.
“Outcomes like these are necessary to stop threat actors before they can encrypt systems or steal sensitive information, and none of this would be possible without AI,” Sikorski added.
Sikorski said organizations must adopt “secure AI by design” principles and AI usage oversight. “Organizations will need to secure every step of the AI application development lifecycle and supply chain to protect AI data from unauthorized access and leakage at all times,” he said, noting that the principles align with the NIST AI risk management framework released last month.

Election Security and Disinformation Loom Large

Ranking member Bennie Thompson (D-MS) asked the panelists what can be done to improve election security and defend against interference, issues of critical importance in a presidential election year. Amlani said digital identity could play an important role in battling disinformation and interference – principles included in section 4.5 of President Biden’s National Cybersecurity Strategy that have yet to be implemented.
“Our country is one of the only ones in the western world that doesn't have a digital identity strategy,” Amlani said.
“Making sure that it's the right person, it's a real person that's actually posting and communicating, and making sure that that person is in fact right there at that time, is a very important component to make sure that we know who it is that's actually generating content online. There is no identity layer to the internet currently today.”

Safe AI Use Guidelines Proposed by Public Policy Advocate

The most detailed proposal for addressing the AI threat came from Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy and Technology, who argued that the “AI arms race” should proceed responsibly.
“Principles for responsible use of AI technologies should be applied broadly across development and deployment,” Laperruque said.
Laperruque gave the Department of Homeland Security credit for starting the process with its recently published AI roadmap. He said government use of AI should be based on seven principles:
  1. Built upon proper training data
  2. Subject to independent testing and high performance standards
  3. Deployed only within the bounds of the technology’s designed function
  4. Used exclusively by trained staff and corroborated by human review
  5. Subject to internal governance mechanisms that define and promote responsible use
  6. Bound by safeguards to protect human rights and constitutional values
  7. Regulated by institutional mechanisms for ensuring transparency and oversight
“If we rush to deploy AI quickly rather than carefully, it will harm security and civil liberties alike,” Laperruque concluded. “But if we establish a strong foundation now for responsible use, we can reap benefits well into the future.”

UK’s ICO Warns Not to Ignore Data Privacy as ‘My AI’ Bot Investigation Concludes


The UK data watchdog has warned against ignoring the data protection risks in generative artificial intelligence and recommended ironing out these issues before the public release of such products. The warning follows the conclusion of an investigation by the U.K.’s Information Commissioner’s Office (ICO) into Snap, Inc.'s launch of the ‘My AI’ chatbot, which focused on the company's approach to assessing data protection risks. The ICO's early actions underscore the importance of protecting privacy rights in the realm of generative AI.

In June 2023, the ICO began investigating Snapchat’s ‘My AI’ chatbot following concerns that the company had not fulfilled its legal obligation to properly evaluate the data protection risks associated with its latest chatbot integration. My AI was an experimental chatbot built into the Snapchat app, which has 414 million daily active users who share over 4.75 billion Snaps per day on average. The My AI bot uses OpenAI's GPT technology to answer questions, provide recommendations and chat with users. It can respond to typed or spoken input and can search databases to find details and formulate a response. Initially available to Snapchat+ subscribers from February 27, 2023, ‘My AI’ was later released to all Snapchat users on April 19.

The ICO issued a Preliminary Enforcement Notice to Snap on October 6 over a “potential failure” to assess privacy risks to several million ‘My AI’ users in the UK, including children aged 13 to 17. “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” said John Edwards, the Information Commissioner, at the time.
“We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.”
Following the ICO’s investigation, Snap took substantial measures to perform a more comprehensive risk assessment for ‘My AI’ and demonstrated to the ICO that it had implemented suitable mitigations. “The ICO is satisfied that Snap has now undertaken a risk assessment relating to My AI that is compliant with data protection law. The ICO will continue to monitor the rollout of My AI and how emerging risks are addressed,” the data watchdog said.

Snapchat has made it clear that, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” The social media platform has integrated safeguards and tools, such as blocking results for certain keywords like “drugs,” as is the case with the original Snapchat app. “We’re also working on adding additional tools to our Family Center around My AI that would give parents more visibility and control around their teen’s usage of My AI,” the company noted.

‘My AI’ Investigation Sounds Warning Bells

Stephen Almond, ICO Executive Director of Regulatory Risk said, “Our investigation into ‘My AI’ should act as a warning shot for industry. Organizations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Generative AI remains a top priority for the ICO, which has initiated several consultations to clarify how data protection laws apply to the development and use of generative AI models. This effort builds on the ICO’s extensive guidance on data protection and AI.

The ICO’s investigation into Snap’s ‘My AI’ chatbot highlights the critical need for thorough data protection risk assessments in the development and deployment of generative AI technologies. Organizations must consider data protection from the outset to safeguard individuals' data privacy and protection rights. The final Commissioner’s decision regarding Snap's ‘My AI’ chatbot will be published in the coming weeks.

SEC Updates 24-Year-Old Rule to Scale Customers’ Financial Data Protection


The SEC is tightening its focus on the financial data breach response mechanisms of a very specific set of financial institutions, with an update to a 24-year-old rule. The amendments announced on Thursday mandate that broker-dealers, funding portals, investment companies, registered investment advisers and transfer agents develop comprehensive plans for detecting and addressing data breaches involving customers’ financial information.

Under the new rules, covered institutions are required to formulate, implement, and uphold written policies and procedures specifically tailored to identifying and mitigating breaches affecting customer data. Additionally, firms must establish protocols for promptly notifying affected customers in the event of a breach, ensuring transparency and facilitating swift remedial action.

“Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially,” said SEC Chair Gary Gensler. “These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers’ financial data. The basic idea for covered firms is if you’ve got a breach, then you’ve got to notify. That’s good for investors.”

According to the amendments, organizations subject to the regulations must notify affected individuals expeditiously, no later than 30 days following the discovery of a data breach. The notification must include comprehensive details regarding the incident, the compromised data and actionable steps for affected parties to safeguard their information. While the amendments are set to take effect two months after publication in the Federal Register, larger entities will have an 18-month grace period to achieve compliance, whereas smaller organizations will be granted a two-year window. However, the SEC has not provided explicit criteria for distinguishing between large and small entities, leaving room for further clarification.

The Debate on SEC's Tight Guidelines

The introduction of these amendments coincides with the implementation of new incident reporting regulations for public companies, compelling timely disclosure of “material” cybersecurity incidents to the SEC. Public companies in the U.S. now have four days to disclose cybersecurity breaches that could impact their financial standing. The SEC’s interest in the matter stems from a major concern: breach information leads to a stock market activity called informed trading, currently a grey area in the eyes of the law. Several prominent companies, including Hewlett Packard and Frontier, have already submitted the requisite filings under these regulations, highlighting the increasing scrutiny on cybersecurity disclosures.

The SEC’s incident reporting rule has, however, received pushback from close quarters, including Congressman Andrew Garbarino, Chairman of the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee and a Member of the House Financial Services Committee. Garbarino in November introduced a joint resolution with Senator Thom Tillis to disapprove the SEC’s new rules.

“This cybersecurity disclosure rule is a complete overreach on the part of the SEC and one that is in direct conflict with congressional intent. CISA, as the lead civilian cybersecurity agency, has been tasked with developing and issuing regulations for cyber incident reporting as it relates to covered entities. Despite this, the SEC took it upon itself to create duplicative requirements that not only further burden an understaffed cybersecurity workforce with additional and unnecessary reporting requirements, but also increase cybersecurity risk without a congressional mandate and in direct contradiction to public law that is intended to secure the homeland,” Garbarino said at the time.
Senator Tillis added that the SEC was doing its “best to hurt market participants by overregulating firms into oblivion.” Businesses and industry leaders across the spectrum have expressed intense opposition to the new rules, but the White House has signaled its commitment to upholding the regulatory framework.

Crypto Mixer Money Laundering: Samourai Founders Arrested

9 May 2024 at 03:00

The recent crackdown on crypto mixer money laundering at Samourai has unveiled a sophisticated operation allegedly involved in facilitating illegal transactions and laundering criminal proceeds. The cryptocurrency community was shocked by the sudden Samourai Wallet shutdown. The U.S. Department of Justice (DoJ) revealed the arrest of two co-founders, shedding light on the intricacies of their […]

The post Crypto Mixer Money Laundering: Samourai Founders Arrested appeared first on TuxCare.

The post Crypto Mixer Money Laundering: Samourai Founders Arrested appeared first on Security Boulevard.
