India Brings AI-Generated Content Under Formal Regulation with IT Rules Amendment
AI is revolutionizing cybersecurity, raising the stakes for CISOs who must balance innovation with risk management. As adversaries leverage AI to enhance attacks, effective cybersecurity requires visibility, adaptive strategies, and leadership alignment at the board level.
The post AI is Rewriting the Rules of Risk: Three Ways CISOs Can Lead the Next Chapter appeared first on Security Boulevard.
Cloud adoption didn’t just change where workloads run. It fundamentally changed how security and compliance must be managed. Enterprises are moving faster than ever across AWS, Azure, GCP, and hybrid...
The post Cloud Security and Compliance: What It Is and Why It Matters for Your Business appeared first on Security Boulevard.
A global survey of 4,149 IT and security practitioners finds that while three-quarters (75%) expect a quantum computer will be capable of breaking traditional public key encryption within five years, only 38% are currently preparing to adopt post-quantum cryptography. Conducted by the Ponemon Institute on behalf of Entrust, a provider of...
The post Survey Sees Little Post-Quantum Computing Encryption Progress appeared first on Security Boulevard.
Permission sprawl is colliding with AI regulations, creating new compliance risks across hybrid and multi-cloud environments.
The post The Compliance Convergence Challenge: Permission Sprawl and AI Regulations in Hybrid Environments appeared first on Security Boulevard.
Hospitals are adopting Gen AI across EHR workflows, but hallucinations, bias, and weak governance pose real patient safety risks.
The post How Hospitals’ Use of GenAI is Putting Patients at Risk Without Realizing It appeared first on Security Boulevard.
MIND extends its data loss prevention platform to secure agentic AI, enabling organizations to discover, monitor, and govern AI agents in real time to prevent sensitive data exposure, shadow AI risks, and prompt injection attacks.
The post MIND Extends DLP Reach to AI Agents appeared first on Security Boulevard.
Master cryptographic agility for AI resource governance. Learn how to secure Model Context Protocol (MCP) with post-quantum security and granular policy control.
The post Cryptographic Agility for Contextual AI Resource Governance appeared first on Security Boulevard.
Cyber fusion centers make zero-trust more effective by improving visibility, automating response, and shrinking the window for attacks.
The post Why Cyber Fusion Centers and Zero-Trust Work Better Together appeared first on Security Boulevard.
President Donald Trump has ordered the immediate withdrawal of the United States from several premier international bodies dedicated to cybersecurity, digital human rights, and countering hybrid warfare, as part of a major restructuring of American defense and diplomatic posture. The directive comes in a memorandum issued on Monday targeting 66 international organizations deemed "contrary to the interests of the United States."
While the memorandum’s cuts to climate and development sectors have grabbed headlines, national security experts are more alarmed by the targeted dismantling of U.S. participation in key security alliances in the digital realm. The President has explicitly directed withdrawal from the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE), the Global Forum on Cyber Expertise (GFCE), and the Freedom Online Coalition (FOC).
"I have considered the Secretary of State’s report... and have determined that it is contrary to the interests of the United States to remain a member," President Trump said. The U.S. Secretary of State Marco Rubio backed POTUS' move calling these coalitions "wasteful, ineffective, and harmful."
"These institutions (are found) to be redundant in their scope, mismanaged, unnecessary, wasteful, poorly run, captured by the interests of actors advancing their own agendas contrary to our own, or a threat to our nation’s sovereignty, freedoms, and general prosperity," Rubio said. "President Trump is clear: It is no longer acceptable to be sending these institutions the blood, sweat, and treasure of the American people, with little to nothing to show for it. The days of billions of dollars in taxpayer money flowing to foreign interests at the expense of our people are over."
Perhaps the most significant strategic loss is the U.S. exit from the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE). Based in Helsinki, the Hybrid CoE is unique as the primary operational bridge between NATO and the European Union.
The Centre was established to analyze and counter "hybrid" threats—ambiguous, non-military attacks such as election interference, disinformation campaigns, and economic coercion, tactics frequently attributed to state actors like Russia and China. By withdrawing, the U.S. is effectively weakening the shared intelligence and coordinated response mechanisms that European allies rely on to detect these sub-threshold attacks. U.S. participation was seen as a key deterrent; without it, the trans-Atlantic unified front against hybrid warfare could be severely fractured.
The administration is also pulling out of the Global Forum on Cyber Expertise (GFCE). Unlike a military alliance, the GFCE is a pragmatic, multi-stakeholder platform that consists of 260+ members and partners bringing together governments, private tech companies, and NGOs to build cyber capacity in developing nations.
The GFCE’s mission is to strengthen global cyber defenses by helping nations develop their own incident response teams, cyber crime laws, and critical infrastructure protection. A U.S. exit here opens a power vacuum. As the U.S. retreats from funding and guiding the capacity-building efforts, rival powers may step in to offer their own support, potentially embedding authoritarian standards into the digital infrastructure of the Global South.
The GFCE, however, thinks otherwise. A GFCE spokesperson told The Cyber Express that the Forum "respects the decision of the US government and recognizes the United States as one of the founding members of the GFCE since 2015."
"The US has been an important contributor to international cyber capacity building efforts over time," the spokesperson added when asked about US' role in the Forum. However the pull-out won't be detrimental as "the GFCE’s work is supported by a broad and diverse group of members and partners. The GFCE remains operational and committed to continuing its mission."
Finally, the withdrawal from the Freedom Online Coalition (FOC) marks an ideological shift. The FOC is a partnership of 42 governments committed to advancing human rights online, specifically fighting against internet shutdowns, censorship, and digital authoritarianism.
The U.S. has historically been a leading voice in the FOC, using the coalition to pressure regimes that restrict internet access or persecute digital dissidents. Leaving the FOC suggests the Trump administration is deprioritizing the promotion of digital human rights as a foreign policy objective. This could embolden authoritarian regimes to tighten control over their domestic internets without fear of a coordinated diplomatic backlash from the West.
The administration argues these withdrawals are necessary to stop funding globalist bureaucracies that constrain U.S. action. By exiting, the White House aims to reallocate resources to bilateral partnerships where the U.S. can exert more direct leverage. However, critics could argue that in the interconnected domain of cyberspace, isolation is a vulnerability. By ceding the chair at these tables, the United States may find itself writing the rules of the next digital conflict alone, while the rest of the world—friend and foe alike—organizes without it.
The article was updated to include the GFCE spokesperson's response and U.S. Secretary of State Marco Rubio's statement.
“Well, I can say that Indian companies so far have been rather negligent of customers' privacy. Anywhere you go, they ask for your mobile number.”

The DPDP Act is designed to ensure that such casual indifference to personal data does not survive the next decade. Below are eight fundamental ways the DPDP Act will change how Indian companies handle data in 2026, with real-world implications for businesses, consumers, and the digital economy.
According to Shashank Bajpai, CISO & CTSO at YOTTA, “The DPDP Act (2023) becomes operational through Rules notified in November 2025; the result is a staggered compliance timetable that places 2026 squarely in the execution phase. That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.”

In 2026, privacy decisions will increasingly sit with boards, CXOs, and risk committees. Metrics such as consent opt-out rates, breach response time, and third-party risk exposure will become leadership-level conversations, not IT footnotes.
As Gauravdeep Singh, State Head (Digital Transformation), e-Mission Team, MeitY, explains, “Data Principal = YOU.”

Whether it’s a food delivery app requesting location access or a fintech platform processing transaction history, individuals gain the right to control how their data is used—and to change their mind later.
Sandeep Shukla highlights how deeply embedded poor practices have been: “Hotels take your Aadhaar card or driving license and copy and keep it in the drawers inside files without ever telling the customer about their policy regarding the disposal of such PII data safely and securely.”

In 2026, undefined retention is no longer acceptable.
As Shukla notes, “The shops, e-commerce establishments, businesses, and utilities collect so much customer PII, and often use third-party data processors for billing, marketing, and outreach. We hardly ever get to know how they handle the data.”

In 2026, companies will be required to audit vendors, strengthen contracts, and ensure processors follow DPDP-compliant practices, because liability remains with the fiduciary.
As Bajpai notes, “The practical effect is immediate: companies must move from policy documents to implemented consent systems, security controls, breach workflows, and vendor governance.”

Tabletop exercises, breach simulations, and forensic readiness will become standard—not optional.
As Bajpai observes, “This is not just regulation; it is an economic strategy to build domestic capability in cloud, identity, security and RegTech.”

Consent Managers, auditors, privacy tech vendors, and compliance platforms will grow rapidly in 2026. For Indian startups, DPDP compliance itself becomes a business opportunity.
One Reddit user captured the risk succinctly: “On paper, the DPDP Act looks great… But a law is only as strong as public awareness around it.”

Companies that communicate transparently and respect user choice will win trust. Those that don’t will lose customers long before regulators step in.
As Hareesh Tibrewala, CEO at Anhad, notes, “Organizations now have the opportunity to prepare a roadmap for DPDP implementation.”

For many businesses, however, the challenge lies in turning awareness into action, especially when clarity around timelines and responsibilities is still evolving. The concern extends beyond citizens to companies themselves, many of which are still grappling with core concepts such as consent management, data fiduciary obligations, and breach response requirements. With penalties tiered by the nature and severity of violations, ranging from significant fines to amounts running into hundreds of crores, this lack of understanding could prove costly. In 2026, regulators will no longer be looking for intent; they will be looking for evidence of execution. As Bajpai points out, “That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.”
As Shukla cautions, “It will probably take years before a proper implementation at all levels of organizations would be seen.”

But the direction is clear. Personal data in India can no longer be treated casually. The DPDP Act marks the end of informal data handling, and the beginning of a more disciplined, transparent, and accountable digital economy.
The FBI Anchorage Field Office has issued a public warning after seeing a sharp increase in fraud cases targeting residents across Alaska. According to federal authorities, scammers are posing as law enforcement officers and government officials in an effort to extort money or steal sensitive personal information from unsuspecting victims.
The warning comes as reports continue to rise involving unsolicited phone calls where criminals falsely claim to represent agencies such as the FBI or other local, state, and federal law enforcement bodies operating in Alaska. These scams fall under a broader category of law enforcement impersonation scams, which rely heavily on fear, urgency, and deception.
Scammers typically contact victims using spoofed phone numbers that appear legitimate. In many cases, callers accuse individuals of failing to report for jury duty or missing a court appearance. Victims are then told that an arrest warrant has been issued in their name.
To avoid immediate arrest or legal consequences, the caller demands payment of a supposed fine. Victims are pressured to act quickly, often being told they must resolve the issue immediately. According to the FBI, these criminals may also provide fake court documents or reference personal details about the victim to make the scam appear more convincing.
In more advanced cases, scammers may use artificial intelligence tools to enhance their impersonation tactics. This includes generating realistic voices or presenting professionally formatted documents that appear to come from official government sources. These methods have contributed to the growing sophistication of government impersonation scams nationwide.
Authorities note that these scams most often occur through phone calls and emails. Criminals commonly use aggressive language and insist on speaking only with the targeted individual. Victims are often told not to discuss the call with family members, friends, banks, or law enforcement agencies.
Payment requests are another key red flag. Scammers typically demand money through methods that are difficult to trace or reverse. These include cash deposits at cryptocurrency ATMs, prepaid gift cards, wire transfers, or direct cryptocurrency payments. The FBI has emphasized that legitimate government agencies never request payment through these channels.
The FBI has reiterated that it does not call members of the public to demand payment or threaten arrest over the phone. Any call claiming otherwise should be treated as fraudulent. This clarification is central to the broader scam warning that Alaska residents are being urged to take seriously.
Data from the FBI’s Internet Crime Complaint Center (IC3) highlights the scale of the problem. In 2024 alone, IC3 received more than 17,000 complaints related to government impersonation scams across the United States. Reported losses from these incidents exceeded $405 million nationwide.
Alaska has not been immune. Reported victim losses in the state surpassed $1.3 million, underscoring the financial and emotional impact these scams can have on individuals and families.
To reduce the risk of falling victim, the FBI urges residents to “take a beat” before responding to any unsolicited communication. Individuals should resist pressure tactics and take time to verify claims independently.
The FBI strongly advises against sharing or confirming personally identifiable information with anyone contacted unexpectedly. Alaskans are also cautioned never to send money, gift cards, cryptocurrency, or other assets in response to unsolicited demands.
Anyone who believes they may have been targeted or victimized should immediately stop communicating with the scammer. Victims should notify their financial institutions, secure their accounts, contact local law enforcement, and file a complaint with the FBI’s Internet Crime Complaint Center at www.ic3.gov. Prompt reporting can help limit losses and prevent others from being targeted.
“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”
Leaver also noted that restricting access to popular platforms will not drive children offline. Because of the ban, young users will simply explore whatever digital spaces remain, which could be less regulated and potentially riskier.
“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”
From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:
“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”
He added that platforms themselves should take a proactive role in protecting children:
“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”
Looking at the global stage, Leaver sees Australia's ban on social media as a potential learning opportunity for other nations:
“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”
Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.
Australia's fragmented approach to AI oversight—with responsibilities scattered across privacy commissioners, consumer watchdogs, online safety regulators, and sector-specific agencies—required coordination to keep pace with rapidly evolving AI capabilities and their potential to amplify existing harms while creating entirely new threats.
The Australian Government announced the establishment of an AI Safety Institute, backed by $29.9 million in funding, to monitor emerging AI capabilities, test advanced systems, and share intelligence across government while supporting regulators to ensure AI companies comply with Australian law. The Institute is part of the larger National AI Plan, which the Australian government officially released on Tuesday.
The Institute will become operational in early 2026 as the centerpiece of the government's strategy to keep Australians safe while capturing economic opportunities from AI adoption. The approach maintains existing legal frameworks as the foundation for addressing AI-related risks rather than introducing standalone AI legislation, with the Institute supporting portfolio agencies and regulators to adapt laws when necessary.
The AI Safety Institute will focus on both upstream AI risks and downstream AI harms. Upstream risks involve model capabilities and the ways AI systems are built and trained that can create or amplify harm, requiring technical evaluation of frontier AI systems before deployment.
Downstream harms represent real-world effects people experience when AI systems are used, including bias in hiring algorithms, privacy breaches from data processing, discriminatory outcomes in automated decision-making, and emerging threats like AI-enabled crime and AI-facilitated abuse disproportionately impacting women and girls.
The Institute will generate and share technical insights on emerging AI capabilities, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia while engaging with unions, business, and researchers to ensure functions meet community needs.
The Institute will support coordinated responses to downstream AI harms by engaging with portfolio agencies and regulators, monitoring and analyzing information across government to allow ministers and regulators to take informed, timely, and cohesive regulatory action.
Portfolio agencies and regulators remain best placed to assess AI uses and harms in specific sectors and adjust regulatory approaches when necessary. The Institute will support existing regulators to ensure AI companies are compliant with Australian law and uphold legal standards of fairness and transparency.
The government emphasized that Australia has strong existing, largely technology-neutral legal frameworks including sector-specific guidance and standards that can apply to AI. The approach promotes flexibility, uses regulators' existing expertise, and targets emerging threats as understanding of AI's strengths and limitations evolves.
The government is taking targeted action against specific harms while continuing to assess the suitability of existing laws. Consumer protections under Australian Consumer Law apply equally to AI-enabled goods and services, with Treasury's review finding Australians enjoy the same strong protections for AI products as for traditional goods.
The government is also addressing AI-related risks through enforceable industry codes under the Online Safety Act 2021, criminalizing non-consensual deepfake material and considering further restrictions on "nudify" apps as well as reforms to tackle algorithmic bias.
The Attorney-General's Department engages stakeholders through the Copyright and AI Reference Group to consult on possible updates to copyright laws as they relate to AI, with the government ruling out a text and data mining exception to provide certainty to Australian creators and media workers.
Healthcare AI regulation is under review through the Safe and Responsible AI in Healthcare Legislation and Regulation Review, while the Therapeutic Goods Administration oversees AI used in medical device software following its review on strengthening regulation of medical device software including artificial intelligence.
The Department of Home Affairs, National Intelligence Community, and law enforcement agencies continue efforts to proactively mitigate serious risks posed by AI. Home Affairs coordinates cross-government efforts on cybersecurity and critical infrastructure protection while overseeing the Protective Security Policy Framework detailing policy requirements for authorizing AI technology systems for non-corporate Commonwealth entities.
AI is likely to exacerbate existing national security risks and create new, unknown threats. The government is preparing for potential AI-related incidents through the Australian Government Crisis Management Framework, which provides overarching policy for managing potential crises.
The government will consider how AI-related harms are managed under the framework to ensure ongoing clarity regarding roles and responsibilities across government to support coordinated and effective action.
The Institute will collaborate with domestic and international partners including the National AI Centre and the International Network of AI Safety Institutes to support global conversations on understanding and addressing AI risks.
Australia is a signatory to the Bletchley Declaration, Seoul Declaration, and Paris Statement emphasizing inclusive international cooperation on AI governance. Participation in the UN Global Digital Compact, Hiroshima AI Process, and Global Partnership on AI supports conversations on advancing safe, secure, and trustworthy adoption.
The government is developing an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence to align foreign and domestic policy settings while establishing priorities for bilateral partnerships and engagement in international forums.
A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law that would require online platforms to detect, report, and remove child sexual abuse material. Critics warn the measures could enable mass surveillance of private communications.
The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.
The Council position introduces three risk categories for online services, based on objective criteria such as service type, and authorities would be able to oblige providers classified as high-risk to contribute to developing technologies that mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.
One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.
At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.
Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.
The legislation provides for the establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.
The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.
Online companies must assist victims who want child sexual abuse material depicting them removed, or access to it disabled. Victims can ask for support from the EU Centre, which will check whether the companies involved have removed or disabled access to the items victims want taken down.
The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening other people's letters.
Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.
Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.
The European Parliament's study heavily critiqued the Commission's proposal, concluding there aren't currently technological solutions that can detect child sexual abuse material without resulting in high error rates affecting all messages, files and data in platforms. The study also concluded the proposal would undermine end-to-end encryption and security of digital communications.
Statistics underscore the urgency: 20.5 million reports covering 63 million files of abuse were submitted to the National Center for Missing & Exploited Children's CyberTipline last year, with online grooming up 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.
Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.
The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations will need to conclude before the already-postponed expiration of the current e-Privacy exemption that allows companies to conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.