
Ad Fraud Is Exploding — Dhiraj Gupta of mFilterIt Explains How Brands Can Respond

30 January 2026 at 05:34

Data Privacy Week 2026: Interview

Ad fraud isn’t just a marketing problem anymore — it’s a full-scale threat to the trust that powers the digital economy. As Data Privacy Week 2026 puts a global spotlight on protecting personal information and ensuring accountability online, the growing fraud crisis in digital advertising feels more urgent than ever.

In 2024 alone, fraud in mobile advertising jumped 21%, while programmatic ad fraud drained nearly $50 billion from the industry. During Data Privacy Week 2026, these numbers serve as a reminder that ad fraud is not only about wasted budgets — it’s also about how consumer data moves, gets tracked, and is sometimes misused across complex ecosystems.

This urgency is reflected in the rapid growth of the ad fraud detection tools market, expected to rise from $410.7 million in 2024 to more than $2 billion by 2034. And in the context of Data Privacy Week 2026, the conversation is shifting beyond fraud prevention to a bigger question: if ads are being manipulated and user data is being shared without clear oversight, who is truly in control?

To unpack these challenges, The Cyber Express team, during Data Privacy Week 2026, spoke with Dhiraj Gupta, CTO & Co-founder of mFilterIt, a technology leader at the forefront of helping brands win the battle against ad fraud and restore integrity across the advertising ecosystem. With a background in telecom and a passion for building AI-driven solutions, Gupta argues that brands can no longer rely on surface-level compliance or platform-reported metrics. As he puts it,
“Independent verification and data-flow audits are critical because they validate what actually happens in a campaign, not just what media plans, platforms, or dashboards report.”
Read the excerpt from the Data Privacy Week 2026 interview below to understand why real-time audits, stronger privacy controls, and continuous accountability are quickly becoming non-negotiable in the fight against fraud — and in rebuilding consumer trust in digital advertising.

Interview Excerpt: Data Privacy Week 2026 Special

TCE: Why are independent verification and data-flow audits becoming essential for brands beyond just detecting ad fraud?

Gupta: Independent verification and data-flow audits are critical because they validate what actually happens in a campaign, not just what media plans, platforms, or dashboards report. They provide evidence-based accountability to regulators, advertisers, and agencies, allowing brands to move from assumed compliance to provable control. Importantly, these audits don’t only verify whether impressions are real; they also assess whether user data is being accessed, shared, or reused, such as for remarketing or profiling, in ways the brand never explicitly approved. In today’s regulatory environment, intent is no longer enough. Brands must be able to demonstrate operational control over how data moves across their digital ecosystem.

TCE: How can unauthorized or excessive tracking of users occur even when a brand believes it is compliant with privacy norms?

Gupta: In many cases, this happens not due to malicious intent, but because of operational complexity and the push for funnel optimization and deeper data mapping. Common scenarios include tags or SDKs triggering secondary or tertiary data calls that are not disclosed to the advertiser, and vendors activating new data parameters, such as device IDs or lead identifiers, without explicit approval. Over time, incremental changes in tracking configurations can significantly expand data collection beyond what was originally consented to or contractually permitted, even though the brand may still believe it is operating within compliance frameworks.
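The tag-drift scenario Gupta describes can be checked mechanically: compare the network endpoints a page or SDK actually contacts against the vendor list the brand has approved. The sketch below is illustrative only; the allowlist, domains, and URLs are hypothetical assumptions, not a real audit tool.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vendor domains the brand has approved.
APPROVED_VENDORS = {"analytics.example-vendor.com", "cdn.example-vendor.com"}

def audit_calls(observed_urls):
    """Flag any observed network call whose host is not an approved vendor."""
    unapproved = set()
    for url in observed_urls:
        host = urlparse(url).netloc
        if host not in APPROVED_VENDORS:
            unapproved.add(host)
    return sorted(unapproved)

# Example: a tag has quietly started firing a tertiary call to a new domain.
calls = [
    "https://analytics.example-vendor.com/collect?uid=123",
    "https://tracker.unknown-thirdparty.net/pixel?device_id=abc",
]
print(audit_calls(calls))  # ['tracker.unknown-thirdparty.net']
```

In practice the observed URLs would come from runtime instrumentation (browser network logs, proxy captures), but the core check — observed traffic versus contractually approved endpoints — is exactly this comparison.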

TCE: How does programmatic advertising contribute to widespread sharing of user data across multiple intermediaries?

Gupta: Programmatic advertising is inherently multi-layered. A single ad impression can involve dozens of intermediaries like DSPs, SSPs, data providers, verification partners, and identity resolution platforms, each receiving some form of user signal for bidding, measurement, or optimization. While consent is often collected once, the data derived from that consent may be replicated, enriched, and reused multiple times across the supply chain. Without real-time data-flow monitoring, brands have very limited visibility into how far that data travels, who ultimately accesses it, or how long it persists across partner systems.

TCE: What risks do brands face if they don’t fully track the activities of their data partners, even when they don’t directly handle consumer information?

Gupta: Even when brands do not directly process personally identifiable information, they remain accountable for how their broader ecosystem behaves. The risks include regulatory exposure, reputational damage, erosion of consumer trust, and an inability to defend compliance claims during audits or investigations. Regulators are increasingly asking brands to demonstrate active control, not just contractual intent. Without independent verification and documented evidence, brands effectively carry residual compliance risk by default.

TCE: Why do consent frameworks sometimes fail to ensure that user data is controlled as intended?

Gupta: Consent frameworks are effective at capturing permission, but far less effective at enforcing downstream behaviour. They typically do not monitor what happens after consent is granted, whether data usage aligns with stated purposes, whether new vendors are added, or whether data access expands over time. Without execution-level oversight, consent becomes symbolic rather than operational. For example, data that was shared for campaign measurement may later be reused by third parties for audience profiling, without the user’s awareness and often without the brand’s visibility.

TCE: How can brands bridge the gap between regulatory intent and real-world implementation of privacy rules?

Gupta: Brands need to shift from document-based compliance to behaviour-based verification. This means auditing live campaigns, tracking actual data access, and continuously validating that data usage aligns with both consent terms and declared purposes. For instance, in quick-commerce or hyperlocal advertising, sensitive data like precise pin codes can be captured through data layers or partner integrations without the brand’s direct knowledge. Only runtime monitoring can surface such risks and align real-world execution with regulatory intent.

TCE: What strategies or tools can brands use to identify unauthorized data access within complex digital ecosystems?

Gupta: Effective control requires continuous, not one-time, oversight. Key strategies include independent runtime audits, continuous monitoring of data calls, partner-level risk scoring, and full data-journey mapping across platforms and vendors. Rather than relying solely on contractual assurances or annual audits, brands need ongoing visibility into how data is accessed and shared, especially as campaign structures, vendors, and technologies change rapidly.
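The partner-level risk scoring Gupta mentions can be sketched as a simple weighted aggregation of oversight signals. The signals and weights below are assumptions for illustration, not mFilterIt's methodology or any industry standard.

```python
# Illustrative oversight signals per partner; weights are assumed, not standard.
RISK_WEIGHTS = {
    "undisclosed_endpoints": 0.5,  # share of observed data calls not disclosed
    "new_parameters": 0.3,         # e.g. device IDs activated without approval
    "stale_audit": 0.2,            # staleness of last independent audit, 0..1
}

def partner_risk(signals):
    """Weighted risk score in [0, 1]; higher scores warrant closer oversight."""
    score = sum(RISK_WEIGHTS[k] * min(v, 1.0) for k, v in signals.items())
    return round(min(score, 1.0), 2)

print(partner_risk({"undisclosed_endpoints": 1.0,
                    "new_parameters": 0.0,
                    "stale_audit": 0.5}))  # 0.6
```

A score like this is only useful if the input signals come from continuous monitoring rather than annual questionnaires, which is precisely the shift the interview argues for.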

TCE: How does excessive tracking or shadow profiling affect consumers’ privacy and trust in digital services?

Gupta: Consumers are becoming increasingly aware of how their data is used, and excessive or opaque tracking creates a perception of surveillance rather than value exchange. When users feel they have lost control over their personal information, trust declines, not only in platforms, but also in the brands advertising on them. For example, when consumers receive hyper-local ads on social media for products they were discussing offline, they often perceive it as continuous tracking, even if the data correlation occurred through indirect signals. This perception alone can damage brand credibility and long-term loyalty.

TCE: In your view, what will become the most critical privacy controls for organizations in the next 2–3 years? What practical steps can organizations take today?

Gupta: The most critical controls will be data-flow transparency, strict enforcement of purpose limitation, and continuous partner accountability. Organizations will be expected to prove where data goes, why it goes there, and whether that usage aligns with user consent and regulatory expectations. Privacy will increasingly be measured by operational evidence, not policy declarations. Practically, brands should start by independently auditing all live trackers and data endpoints, not just approved vendors. Privacy indicators should be reviewed alongside media and performance KPIs, and verification must be continuous rather than episodic. Most importantly, privacy must be treated as part of the brand’s trust infrastructure, not merely as a compliance checklist. Brands that invest in transparency and control today will be far better positioned as regulations tighten and consumer expectations continue to rise.

Canada Marks Data Privacy Week 2026 as Commissioner Pushes for Privacy by Design

27 January 2026 at 03:18

Data Privacy Week 2026

As Data Privacy Week 2026 gets underway from January 26 to 30, Canada’s Privacy Commissioner Philippe Dufresne has renewed calls for stronger data protection practices, modern privacy laws, and a privacy-first approach to emerging technologies such as artificial intelligence. In a statement marking Data Privacy Week 2026, Dufresne said data has become one of the most valuable resources of the 21st century, making responsible data management essential for both individuals and organizations. “Data is one of the most important resources of the 21st century and managing it well is essential for ensuring that individuals and organizations can confidently reap the benefits of a digital society,” he said. The Office of the Privacy Commissioner (OPC) has chosen privacy by design as its theme this year, highlighting the need for organizations to embed privacy into their programs, products, and services from the outset. According to Dufresne, this proactive approach can help organizations innovate responsibly, reduce risks, build for the future, and earn public trust.

Data Privacy Week 2026: Privacy by Design Takes Centre Stage

Speaking on the growing integration of technology into everyday life, Dufresne said Data Privacy Week 2026 is a timely opportunity to underline the importance of data protection. With personal data being collected, used, and shared at unprecedented levels, privacy is no longer a secondary concern. “Prioritizing privacy by design is my Office’s theme for Data Privacy Week this year, which highlights the benefits to organizations of taking a proactive approach to protect the personal information that is in their care,” he said. The OPC is also offering guidance for individuals on how to safeguard their personal information in a digital world, while providing organizations with resources to support privacy-first programs, policies, and services. These include principles to encourage responsible innovation, especially in the use of generative AI technologies.

Real-World Cases Show Why Privacy Matters

In parallel with Data Privacy Week 2026, Dufresne used a recent appearance before Parliament to point to concrete cases that show how privacy failures can cause serious and lasting harm. He referenced investigations into the non-consensual sharing of intimate images involving Aylo, the operator of Pornhub, and the 23andMe data breach, which exposed highly sensitive personal information of 7 million customers, including more than 300,000 Canadians. His office’s joint investigation into TikTok also highlighted the need to protect children’s privacy online. The probe not only resulted in a report but also led TikTok to improve its privacy practices in the interests of its users, particularly minors. Dufresne also confirmed an expanded investigation into X and its Grok chatbot, focusing on the emerging use of AI to create deepfakes, which he said presents significant risks to Canadians. “These are some of many examples that demonstrate the importance of privacy for current and future generations,” he told lawmakers, adding that prioritizing privacy is also a strategic and competitive asset for organizations.

Modernizing Canada’s Privacy Laws

A central theme of Data Privacy Week 2026 in Canada is the need to modernize privacy legislation. Dufresne said existing laws must be updated to protect Canadians in a data-driven world while giving businesses clear and practical rules. He voiced support for proposed changes under Bill C-15, the Budget 2025 Implementation Act, which would amend the Personal Information Protection and Electronic Documents Act (PIPEDA) to introduce a right to data mobility. This would allow individuals to request that their personal information be transferred to another organization, subject to regulations and safeguards. “A right to data mobility would give Canadians greater control of their personal information by allowing them to make decisions about who they want their information shared with,” he said, adding that it would also make it easier for people to switch service providers and support innovation and competition. Under the proposed amendments, organizations would be required to disclose personal information to designated organizations upon request, provided both are subject to a data-mobility framework. The federal government would also gain authority to set regulations covering safeguards, interoperability standards, and exceptions. Given the scope of these changes, Dufresne said it will be important for his office to be consulted as the regulations are developed.

A Call to Act During Data Privacy Week 2026

Looking ahead, Dufresne framed Data Privacy Week 2026 as both a moment of reflection and a call to action. “Let us work together to create a safer digital future for all, where privacy is everyone’s priority,” he said. He invited Canadians to take part in Data Privacy Week 2026 by joining the conversation online, engaging with content from the OPC’s LinkedIn account, and using the hashtag #DPW2026 to connect with others committed to advancing privacy in Canada and globally. As digital technologies continue to reshape daily life, the message from Canada’s Privacy Commissioner is clear: privacy is not just a legal requirement, but a foundation for trust, innovation, and long-term economic growth.

Data Privacy Week 2026: Why Secure Access is the New Data Protection Perimeter

27 January 2026 at 00:49

Data Privacy Week 2026

By Vijender Yadav, CEO & Co-founder, Accops

The cybersecurity industry is currently grappling with a paradox: encryption, compliance, and spending are at record highs, yet data privacy remains fragile. This stems from a reliance on a 2021 playbook to fight a 2026 war.

Historically, data protection was a static discipline focused on "data at rest" and "data in transit." However, in an era where automated discovery tools can map an enterprise's entire data footprint in minutes, traditional walls have become irrelevant. The perimeter has shifted; it no longer resides at the edge of the network, but at the precise moment of access.

The Death of the "Safe" Zone 

By now, the concept of a "trusted network" is an architectural relic. In 2026, data is a fluid asset distributed across multi-region SaaS, edge computing nodes, and sovereign clouds rather than sitting in a central vault.

The primary challenge today is the "Identity-Data Gap." While the transition away from the physical office is complete, the assumption of trust associated with it often remains. If a user connects to a resource, legacy systems frequently grant broad, persistent visibility. This level of exposure facilitates near-instant lateral movement across the network and connected devices, making such visibility a direct threat to data privacy.

Protecting data privacy in this environment requires a shift from storage-centric security to visibility control. Resources must remain "dark" to everyone except the authenticated, authorised user throughout a continuously verified session.

Data Privacy Week 2026: Defending Against the "Identity Hijack" 

In 2026, the primary threat to data privacy is the weaponisation of legitimate access rather than sophisticated software exploits. While a user’s identity can be verified with near-total certainty, organisations remain remarkably vulnerable to the context of that identity — specifically the what, how, and when of the access request. In this model, identity has become a false proxy for trust.

As identity remains under constant siege, secure access must move beyond a "gatekeeper" event to become a Continuous Adaptive Risk and Trust Assessment (CARTA). Securing the new perimeter requires the validation of three distinct pillars through persistent, 24/7/365 monitoring:
  1. Validate the Human (Identity & Presence): Progressive organisations are adopting a multi-modal approach that combines phishing-resistant hardware verification with biometric-first identity signals. By anchoring identity in physical hardware (such as FIDO2-compliant keys) and augmenting it with continuous monitoring of liveness and presence, it is possible to ensure that the authorised individual remains physically present at the keys throughout the interaction. This layered verification prevents session hijacking or "shoulder surfing" in real time.
  2. Validate the Device (Integrity & Posture): It is no longer safe to assume a device is secure simply because it is corporate-owned. The technical integrity of the endpoint must be evaluated before and during access. This involves continuous checks for managed status, OS vulnerabilities, and security software health to ensure the tool used to access data is not a compromised gateway.
  3. Validate the Behaviour (Intent & Monitoring): This final layer of the perimeter involves monitoring user actions for deviations from established norms. Detecting anomalies in navigation speed, timing, and data consumption allows for an assessment of whether a device is acting like a human-operated workstation or an automated exfiltration bot. The perimeter thus functions as a dynamic response system that adapts based on 'Contextual Intelligence' — the real-time risk of the intent.
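The behaviour pillar above amounts to statistical baselining of user activity. A minimal sketch, assuming request rate as the monitored signal and a z-score threshold (both choices are illustrative, not a description of any vendor's detector):

```python
import statistics

def is_anomalous(baseline_rates, current_rate, z_threshold=3.0):
    """Flag a session whose request rate deviates sharply from the user's norm."""
    mean = statistics.mean(baseline_rates)
    stdev = statistics.stdev(baseline_rates)
    if stdev == 0:
        return current_rate != mean
    z = abs(current_rate - mean) / stdev
    return z > z_threshold

# A human-paced baseline (requests per minute) versus a bot-like burst.
baseline = [4, 6, 5, 7, 5, 6, 4, 5]
print(is_anomalous(baseline, 5))    # False: within the user's normal range
print(is_anomalous(baseline, 120))  # True: consistent with automated exfiltration
```

Production systems would track many signals at once (timing, navigation paths, data volume) and feed the result into an adaptive response rather than a binary flag, but the core idea of comparing live behaviour to an established norm is the same.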

Privacy-First Architecture: Micro-Segmentation of Access 

The defining transition for 2026 and beyond is the shift from "Access to Resources" to "Entitlement within Resources."

Under a Zero Trust Network Access (ZTNA) 2.0 framework, this is achieved through a "Privacy of Exclusion" model. Connecting a user to an application is no longer sufficient; granular actions within that application must be managed. By default, no user sees any data. Only when a specific request is validated is a "one-to-one" encrypted tunnel created, restricting the user to the precise dataset required for the task.

This approach is necessary to satisfy the rigorous "Need-to-Know" requirements of global regulations like the GDPR or India’s DPDPA. Data privacy cannot be maintained if a network architecture allows a marketing executive to even ping an HR database. Secure access enforces privacy by making the unauthorised invisible.
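At its core, the "Privacy of Exclusion" model is a default-deny policy evaluated per request: an access path exists only where an explicit entitlement does. A minimal sketch, with hypothetical roles and datasets standing in for a real policy engine:

```python
# Hypothetical entitlements: only (role, dataset) pairs explicitly granted exist.
ENTITLEMENTS = {
    ("hr_analyst", "hr_records"),
    ("marketer", "campaign_metrics"),
}

def authorize(role, dataset):
    """Default-deny: anything not explicitly entitled is invisible, not just blocked."""
    return (role, dataset) in ENTITLEMENTS

print(authorize("marketer", "campaign_metrics"))  # True: explicit entitlement
print(authorize("marketer", "hr_records"))        # False: the dataset stays dark
```

In a real ZTNA deployment the lookup would be a dynamic policy decision (identity, device posture, and context combined), and a denied request would not even reveal that the resource exists, which is the "dark by default" property the article describes.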

Looking Ahead: The Invisible Perimeter 

The mandate for technology leaders is to de-couple security from the underlying infrastructure of the internet.

Data privacy is not a checkbox; it is a continuous state of being. It is maintained only when access is granular, just-in-time, and verified with every single click. The "Castle and Moat" has been replaced by an invisible guard made of identity and intent — ensuring that privacy is a default setting rather than a manual effort.