
Received yesterday – 13 February 2026

NDSS 2025 – Automated Mass Malware Factory

13 February 2026 at 15:00

Session 12B: Malware

Authors, Creators & Presenters: Heng Li (Huazhong University of Science and Technology), Zhiyuan Yao (Huazhong University of Science and Technology), Bang Wu (Huazhong University of Science and Technology), Cuiying Gao (Huazhong University of Science and Technology), Teng Xu (Huazhong University of Science and Technology), Wei Yuan (Huazhong University of Science and Technology), Xiapu Luo (The Hong Kong Polytechnic University)

PAPER
Automated Mass Malware Factory: The Convergence of Piggybacking and Adversarial Example in Android Malicious Software Generation

Adversarial example techniques have been demonstrated to be highly effective against Android malware detection systems, enabling malware to evade detection with minimal code modifications. However, existing adversarial example techniques overlook the process of malware generation, thus restricting the applicability of adversarial example techniques. In this paper, we investigate piggybacked malware, a type of malware generated in bulk by piggybacking malicious code into popular apps, and combine it with adversarial example techniques. Given a malicious code segment (i.e., a rider), we can generate adversarial perturbations tailored to it and insert them into any carrier, enabling the resulting malware to evade detection. Through exploring the mechanism by which adversarial perturbation affects piggybacked malware code, we propose an adversarial piggybacked malware generation method, which comprises three modules: Malicious Rider Extraction, Adversarial Perturbation Generation, and Benign Carrier Selection. Extensive experiments have demonstrated that our method can efficiently generate a large volume of malware in a short period, and significantly increase the likelihood of evading detection. Our method achieved an average attack success rate (ASR) of 88.3% on machine learning-based detection models (e.g., Drebin and MaMaDroid), and an ASR of 76% and 92% on commercial engines Microsoft and Kingsoft, respectively. Furthermore, we have explored potential defenses against our adversarial piggybacked malware.
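The division of labour the abstract describes (extract a malicious rider once, craft a perturbation tailored to it, then reuse both across arbitrary benign carriers) can be illustrated with a toy sketch. Everything below is invented for illustration, not the authors' implementation: the linear scorer, the feature names, the weights, and the threshold are all assumptions.

```python
# Toy sketch of the three modules; scorer, features, weights, and
# threshold are all invented for illustration.

WEIGHTS = {
    "SEND_SMS": 2.0, "exec_shell": 3.0,                             # malicious-leaning
    "uses_camera": -0.5, "has_ads_sdk": -1.0, "ui_material": -2.0,  # benign-leaning
}
THRESHOLD = 2.0  # score >= THRESHOLD means the toy detector flags the app

def score(features):
    """Linear score: sum the weights of the features present in the app."""
    return sum(WEIGHTS.get(f, 0.0) for f in features)

def craft_perturbation(rider, budget=5):
    """Adversarial Perturbation Generation: greedily add benign-weighted
    features until the rider's score drops below the detection threshold."""
    perturbation = set()
    for f in sorted((f for f, w in WEIGHTS.items() if w < 0), key=WEIGHTS.get):
        if score(rider | perturbation) < THRESHOLD or len(perturbation) >= budget:
            break
        perturbation.add(f)
    return perturbation

def piggyback(carrier, rider, perturbation):
    """Insert the rider plus its tailored perturbation into any benign carrier."""
    return carrier | rider | perturbation

rider = {"SEND_SMS", "exec_shell"}              # Malicious Rider Extraction (given here)
perturbation = craft_perturbation(rider)        # crafted once, tailored to the rider
carriers = [{"uses_camera"}, {"has_ads_sdk"}]   # Benign Carrier Selection
evaded = [score(piggyback(c, rider, perturbation)) < THRESHOLD for c in carriers]
```

The point of the sketch is the amortisation: the perturbation is computed once per rider, then reused for every carrier, which is what makes bulk generation cheap.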

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators’, Authors’ and Presenters’ superb NDSS Symposium 2025 Conference content on the organization’s YouTube Channel.


The post NDSS 2025 – Automated Mass Malware Factory appeared first on Security Boulevard.

NDSS 2025 – Density Boosts Everything

13 February 2026 at 11:00

Session 12B: Malware

Authors, Creators & Presenters: Jianwen Tian (Academy of Military Sciences), Wei Kong (Zhejiang Sci-Tech University), Debin Gao (Singapore Management University), Tong Wang (Academy of Military Sciences), Taotao Gu (Academy of Military Sciences), Kefan Qiu (Beijing Institute of Technology), Zhi Wang (Nankai University), Xiaohui Kuang (Academy of Military Sciences)

PAPER
Density Boosts Everything: A One-stop Strategy For Improving Performance, Robustness, And Sustainability of Malware Detectors

In the contemporary landscape of cybersecurity, AI-driven detectors have emerged as pivotal in the realm of malware detection. However, existing AI-driven detectors encounter a myriad of challenges, including poisoning attacks, evasion attacks, and concept drift, which stem from the inherent characteristics of AI methodologies. While numerous solutions have been proposed to address these issues, they often concentrate on isolated problems, neglecting the broader implications for other facets of malware detection. This paper diverges from the conventional approach by not targeting a singular issue but instead identifying one of the fundamental causes of these challenges, sparsity. Sparsity refers to a scenario where certain feature values occur with low frequency, being represented only a minimal number of times across the dataset. The authors are the first to elevate the significance of sparsity and link it to core challenges in the domain of malware detection, and then aim to improve performance, robustness, and sustainability simultaneously by solving sparsity problems. To address the sparsity problems, a novel compression technique is designed to effectively alleviate the sparsity. Concurrently, a density boosting training method is proposed to consistently fill sparse regions. Empirical results demonstrate that the proposed methodologies not only successfully bolster the model's resilience against different attacks but also enhance the performance and sustainability over time. Moreover, the proposals are complementary to existing defensive technologies and successfully demonstrate practical classifiers with improved performance and robustness to attacks.
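The compression step the abstract gestures at can be sketched in a few lines. This is a minimal frequency-based illustration under assumed semantics, not the paper's actual algorithm: feature values seen fewer than `min_count` times are merged into one shared bucket, removing the sparse regions that poisoning and evasion perturbations tend to hide in. The names (`fit_compression`, `__RARE__`, the API-call values) are invented.

```python
# Minimal sketch (not the paper's algorithm) of frequency-based feature
# compression: rare values are collapsed into a single dense bucket.
from collections import Counter

def fit_compression(column, min_count=3):
    """Learn which feature values are dense enough to keep."""
    counts = Counter(column)
    return {v for v, c in counts.items() if c >= min_count}

def compress(column, dense, rare_token="__RARE__"):
    """Map every sparse value onto one shared bucket."""
    return [v if v in dense else rare_token for v in column]

api_calls = ["read", "read", "write", "read", "write", "write",
             "obscure_api_1", "obscure_api_2"]  # two one-off, sparse values
dense = fit_compression(api_calls)
compressed = compress(api_calls, dense)
```

After compression, an attacker can no longer plant a unique rare value as a trigger or evasion handle: it lands in the same bucket as every other rare value.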


The post NDSS 2025 – Density Boosts Everything appeared first on Security Boulevard.

Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency

13 February 2026 at 09:30
theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements. "It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things. So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than CS. 
The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.

Read more of this story at Slashdot.

What is the new gender guidance for schools and colleges in England?

Advice on how to respond to students questioning their birth gender has been updated. Here are the key changes

Ministers have released updated guidance on how schools and colleges in England should respond to students who are questioning their birth gender. How is it different to the previous Department for Education (DfE) guidance, released under the Conservatives in 2023?

Continue reading...

© Photograph: Klaus Vedfelt/Getty Images

Gender studies courses are shutting down across the US. The Epstein files reveal why | Joan Wallach Scott

13 February 2026 at 06:00

Texas A&M University is the latest school to end women’s and gender studies programs and teaching race. We know why

Last week, we learned of the decision of the Texas A&M University board of regents to end women’s and gender studies programs as well as the teaching of “divisive concepts” such as race. A&M was not the first university to do this. Florida’s New College made the move in 2023. Other red state legislatures have passed similar requirements and their public universities (in North Carolina, Ohio and Kansas) have followed suit.

The move to cancel gender studies is explicitly justified as a way to comply with Donald Trump’s executive order of last year titled Defending Women from Gender Ideology Extremism and Restoring Biological Truth to the Federal Government. That document makes “the biological reality of sex” a matter not of science but of law.

Continue reading...

© Photograph: US Justice Department/Reuters

Reeves urged to reassure MPs over public finances amid £6bn-a-year Send costs

13 February 2026 at 02:00

City analysts say financial market investors will be worried if cost is deducted from budget surplus

Rachel Reeves is under pressure to reassure MPs over the state of the UK’s public finances, amid concerns that the rising cost of special educational needs and disabilities (Send) could leave a significant hole in the government’s financial buffer.

Meg Hillier, the chair of the all-party House of Commons Treasury committee, said the chancellor should make clear her long-term plans for the £6bn-a-year Send bill as uncertainty grows over how it will be accounted for at the end of the decade.

Continue reading...

© Photograph: Toby Melville/Reuters

β€˜Invisible’ children born in the brothels of Bangladesh finally get birth certificates

13 February 2026 at 00:00

Destined for a perilous life with no right to an education or to vote, state recognition ‘gives them hope’, campaigners say

Through the decades that the Daulatdia brothel in Bangladesh has existed, children born there have been invisible, unable to be registered because their mothers were sex workers and their fathers unknown. Now, for the first time, all 400 of them in the brothel village have their own birth certificates.

That milestone was reached after a push by campaigners who have spent decades working with Bangladesh’s undocumented children born in brothels or on the street. It means they can finally access the rights afforded to other citizens: the ability to go to school, to be issued a passport or to vote.

Continue reading...

© Photograph: Bengal Picture Library/Alamy

Received before yesterday

More exam stress at 15 linked to higher risk of depression as young adult – study

UK charity warns against excessive academic pressure and suggests reducing the number of high-stakes tests

Exam stress at age 15 can increase the risk of depression and self-harm into early adulthood, research suggests.

Academic pressure is known to have a detrimental impact on mood and overall wellbeing, but until now few studies had examined the long-term effects on mental health.

Continue reading...

© Photograph: David Davies/PA

NDSS 2025 – PBP: Post-Training Backdoor Purification For Malware Classifiers

12 February 2026 at 15:00

Session 12B: Malware

Authors, Creators & Presenters: Dung Thuy Nguyen (Vanderbilt University), Ngoc N. Tran (Vanderbilt University), Taylor T. Johnson (Vanderbilt University), Kevin Leach (Vanderbilt University)

PAPER
PBP: Post-Training Backdoor Purification for Malware Classifiers

In recent years, the rise of machine learning (ML) in cybersecurity has brought new challenges, including the increasing threat of backdoor poisoning attacks on ML malware classifiers. These attacks aim to manipulate model behavior when provided with a particular input trigger. For instance, adversaries could inject malicious samples into public malware repositories, contaminating the training data and potentially misclassifying malware by the ML model. Current countermeasures predominantly focus on detecting poisoned samples by leveraging disagreements within the outputs of a diverse set of ensemble models on training data points. However, these methods are not applicable in scenarios involving ML-as-a-Service (MLaaS) or for users who seek to purify a backdoored model post-training. Addressing this scenario, we introduce PBP, a post-training defense for malware classifiers that mitigates various types of backdoor embeddings without assuming any specific backdoor embedding mechanism. Our method exploits the influence of backdoor attacks on the activation distribution of neural networks, independent of the trigger-embedding method. In the presence of a backdoor attack, the activation distribution of each layer is distorted into a mixture of distributions. By regulating the statistics of the batch normalization layers, we can guide a backdoored model to perform similarly to a clean one. Our method demonstrates substantial advantages over several state-of-the-art methods, as evidenced by experiments on two datasets, two types of backdoor methods, and various attack configurations. Our experiments showcase that PBP can mitigate even the SOTA backdoor attacks for malware classifiers, e.g., Jigsaw Puzzle, which was previously demonstrated to be stealthy against existing backdoor defenses. 
Notably, our approach requires only a small portion of the training data -- only 1% -- to purify the backdoor and reduce the attack success rate from 100% to almost 0%, a 100-fold improvement over the baseline methods.
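The batch-normalization intuition in the abstract can be sketched numerically. The toy below is assumption-laden (synthetic Gaussian activations, a hypothetical trigger shift and poison rate) and is not the authors' PBP implementation: it only shows how running statistics estimated on a poisoned mixture drift, and how re-estimating them from a small clean subset pulls them back toward the clean distribution.

```python
# Toy numeric sketch, not the authors' PBP code: backdoor training
# distorts a layer's activation distribution into a mixture, so
# batch-norm statistics estimated on poisoned data drift; re-estimating
# them from a small clean subset pulls them back. All sizes and
# distributions below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
clean_acts = rng.normal(0.0, 1.0, size=(10_000, 8))   # clean activations
trigger_acts = rng.normal(4.0, 1.0, size=(2_000, 8))  # trigger-shifted component
poisoned = np.vstack([clean_acts, trigger_acts])

# Statistics a backdoored model would carry (estimated on the mixture):
bn_mean, bn_var = poisoned.mean(axis=0), poisoned.var(axis=0)

# Purification step: recompute statistics from a small clean subset (~1%).
subset = clean_acts[rng.choice(len(clean_acts), size=100, replace=False)]
pure_mean, pure_var = subset.mean(axis=0), subset.var(axis=0)
```

In this toy, the mixture inflates both the running mean and variance, while 100 clean samples already recover statistics close to the true N(0, 1) activations, which is the flavour of the 1%-of-training-data result reported above.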


The post NDSS 2025 – PBP: Post-Training Backdoor Purification For Malware Classifiers appeared first on Security Boulevard.

Funding cuts will devastate the next generation of scientists | Letters

12 February 2026 at 12:09

Physics research drives technological innovation, from medical imaging to data processing, write Dr Phil Bull and Prof Chris Clarkson; plus letters from Tim Gershon and Vincenzo Vagnoni, and Prof Paul Howarth

Your article (UK ‘could lose generation of scientists’ with cuts to projects and research facilities, 6 February) is right to highlight the serious consequences of proposed 30% funding cuts on the next generation of physics and astronomy researchers. The proposals also risk a generational destruction of the country’s ability to produce skilled graduates, retain specialist knowledge, and support physical science in industrial and educational settings.

This comes against a backdrop of wider threats to university finances, from rising costs to declining international student numbers. An estimated one in four UK physics departments are already at risk of closure, and recent cuts and delays to Science and Technology Facilities Council (STFC) grants have further depleted finances and will result in the loss of some highly skilled technical staff.

Continue reading...

© Photograph: Murdo MacLeod/The Guardian

Gender guidance for English primary school pupils permits use of different pronouns

DfE guidance urges teachers to respond to social transition requests ‘with caution’ and includes Cass report findings

Primary school-age children who question their gender could be allowed to use different pronouns under long-awaited government guidance on the subject.

The guidance, billed as moving away from a culture-war approach to the subject, has some notable changes compared with a draft produced in 2023 under the Conservatives, which said that primary-aged children “should not have different pronouns to their sex-based pronouns used about them”.

Continue reading...

© Photograph: James Jiao/Shutterstock

Declines in health and education in poor countries β€˜harming earning potential’

12 February 2026 at 10:06

World Bank says children born today could earn 51% more over lifetime if their country’s human capital improved

Deteriorating health, education and training in many developing countries is dramatically depressing the future earnings of children born today, the World Bank has said.

In a report, the World Bank urges policymakers to focus on improving outcomes in three settings: homes, neighbourhoods and workplaces.

Continue reading...

© Photograph: Anadolu Agency/Anadolu/Getty Images

Susie Dent’s tips and tricks to add muscle to a child’s vocabulary

To help combat the impact of screen-time creep, the Countdown word supremo has a few suggestions

Children’s vocabulary is shrinking as reading loses out to screen time, the Countdown lexicographer Susie Dent has suggested, as she urged families to read, talk and play word games to boost language development.

Dent, who also co-presents Channel 4’s Secret Genius with Alan Carr, is fronting a new campaign – working with an unexpected partner, Soreen malt loaf – aimed at boosting children’s vocabulary at snack time.

  • Reading.
  • Listening to audiobooks.
  • Sharing word stories and routinely going to the dictionary to find out where words come from.
  • Playing word games and puzzles, in print, online, with board games, or in the car.
  • Having conversations while doing active tasks with your child such as cooking or walking.
  • Asking your child to invent a new word, or to share the latest slang in their class.
  • Learning another language.

kerfuffle One of Soreen’s choices, kerfuffle comes from Scots and describes a commotion or fuss. Children love it because of its sound, but it also adds a touch of humour to an otherwise tricky situation.

mellifluous Not only does this word have a pleasing sound, fulfilling the very quality it describes, but its etymology is also gorgeous – mellifluous comes from the Latin for flowing like honey.

thrill I chose this one because of its secret life. Something thrilling today is always positive, but in its earliest incarnation, to thrill meant to pierce someone with a sword rather than with excitement. The literal meaning of thrill was a hole, which is why our nostrils began as our nose-thrills, or nose holes.

apricity This is one of the many words in the Oxford English Dictionary that were recorded only once before fading away like a linguistic mayfly. Apricity, from 1623, means the warmth of the sun on a winter’s day. The word is as beautiful as the sensation it describes.

susurrus Say this word out loud and you will know its meaning instantly. Susurrus comes from the Latin for whispering and describes the rustling of leaves in a summer breeze.

bags of mystery This Victorian nickname for sausages always makes me smile. It was inspired by the fact that you can never quite know what’s in them.

snerdle English has a vast lexicon for snuggling, from nuddling, neezling and snoozling to snuggening, croodling and snerdling. Each of them expresses the act of lying quietly beneath the covers. Mind you, if you lie there a little bit too long, you could be accused of hurkle-durkling, old Scots for staying in bed long after it’s time to get up.

splendiferous Another of Soreen’s picks, this word has a distinct touch of Mary Poppins about it. In the middle ages it meant simply resplendent, but since the 19th century it has been a humorous description of anything considered rather magnificent.

ruthful The historical dictionary is full of lost positives – words whose negative siblings are alive and well while their parents have faded away. As well as being gormless, inept, unkempt, uncouth and disconsolate, you could in the past be full of gorm, ept, kempt, couth, and consolate. Best of all is surely ruthful, the counterpart to ruthless, which means full of compassion.

muscle Another word with a hidden backstory, and this one often makes children laugh. In ancient times, athletes would exercise in the buff in order to show off their rippling muscles (the words gym and gymnasium go back to the Greek for exercise naked). To the Roman imagination, when an athlete flexed their biceps, it looked as though a little mouse was scuttling beneath their skin. Our word muscle consequently comes from the Latin musculus, little mouse.

Continue reading...

© Photograph: Frank Baron/The Guardian

Children’s vocabulary shrinking as reading loses out to screen time, says Susie Dent

Exclusive: Countdown lexicographer urges families to read, talk and play word games to help language development

Children’s vocabulary is shrinking as reading loses out to screen time, according to the lexicographer Susie Dent, who is urging families to read, talk and play word games to boost language development.

The Countdown star’s warning comes as the government prepares to issue its first advice to parents on how to manage screen use in under-fives, amid concerns that excessive screen time is damaging children’s language development.

Continue reading...

© Photograph: David Levenson/Getty Images

NDSS 2025 – MingledPie: A Cluster Mingling Approach For Mitigating Preference Profiling In CFL

11 February 2026 at 11:00

Session 12A: Federated Learning 2

Authors, Creators & Presenters: Cheng Zhang (Hunan University), Yang Xu (Hunan University), Jianghao Tan (Hunan University), Jiajie An (Hunan University), Wenqiang Jin (Hunan University)

PAPER
MingledPie: A Cluster Mingling Approach for Mitigating Preference Profiling in CFL

Clustered federated learning (CFL) serves as a promising framework to address the challenges of non-IID (non-Independent and Identically Distributed) data and heterogeneity in federated learning. It involves grouping clients into clusters based on the similarity of their data distributions or model updates. However, classic CFL frameworks pose severe threats to clients' privacy, since the honest-but-curious server can easily learn the bias of each client's data distribution (its preferences). In this work, we propose a privacy-enhanced clustered federated learning framework, MingledPie, which resists the server's preference profiling by allowing clients to be grouped into multiple clusters spontaneously. Specifically, within a given cluster we mingle two types of clients: a majority that share similar data distributions and a small portion that do not (false positive clients). As a result, the CFL server cannot link clients' data preferences to the cluster categories they belong to. To achieve this, we design an indistinguishable cluster identity generation approach that enables clients to form clusters with a certain proportion of false positive members without the assistance of a CFL server. Meanwhile, training with mingled false positive clients inevitably degrades the performance of the cluster's global model. To rebuild an accurate cluster model, we represent the mingled cluster models as a system of linear equations in the accurate models and solve it. Rigorous theoretical analyses evaluate the usability and security of the proposed designs. In addition, extensive evaluations of MingledPie on six open-sourced datasets show that it defends against preference profiling attacks with an accuracy of 69.4% on average, while the model accuracy loss is limited to between 0.02% and 3.00%.
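The recovery step at the end of the abstract (mingled cluster models expressed as a linear system in the accurate models, then solved) can be sketched with NumPy. The mixing matrix, proportions, and model vectors below are invented for illustration; this is an assumed setup, not the paper's code.

```python
# Illustrative sketch of the recovery step with invented numbers: each
# mingled cluster model is a known mixture of the accurate per-cluster
# models, so stacking them gives a linear system M = A @ X solvable for X.
import numpy as np

true_models = np.array([[1.0, 2.0, 3.0],   # accurate model of cluster 0
                        [4.0, 5.0, 6.0]])  # accurate model of cluster 1

# Row i: fraction of each true cluster's clients inside mingled cluster i
# (e.g. 80% majority members, 20% false positives).
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])

mingled_models = A @ true_models                # what mingled training yields
recovered = np.linalg.solve(A, mingled_models)  # rebuild the accurate models
```

The sketch assumes the mixing proportions are known and the mixing matrix is invertible; under those assumptions the accurate models are recovered exactly.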


The post NDSS 2025 – MingledPie: A Cluster Mingling Approach For Mitigating Preference Profiling In CFL appeared first on Security Boulevard.

India Rolls Out AI-on-Wheels to Bridge the Digital Divide

11 February 2026 at 04:48

Yuva AI for All

India has taken another step toward expanding AI literacy with the launch of Kaushal Rath under the national programme Yuva AI for All. Flagged off from India Gate in New Delhi, the mobile initiative aims to bring foundational Artificial Intelligence (AI) education directly to students, youth, and educators, particularly in semi-urban and underserved regions. For a country positioning itself as a global digital leader, the message behind Yuva AI for All is clear: AI cannot remain limited to elite institutions or metro cities. If Artificial Intelligence is to shape economies and governance, it must be understood by the wider population.

Yuva AI for All: Taking AI to the Doorstep

Launched by the Ministry of Electronics and Information Technology (MeitY) under the IndiaAI Mission in collaboration with AISECT, Yuva AI for All focuses on democratising access to AI education. Launching the initiative, the Minister of State Jitin Prasada stated, “Through the Yuva AI for All initiative and the Kaushal Rath, we are taking AI awareness directly across the country, especially to young people. The bus will travel across regions to familiarise students and youth with the uses and benefits of Artificial Intelligence, fulfilling Prime Minister Narendra Modi’s vision of ensuring that awareness and access to opportunity transcend geography and demography.” He added, “The Yuva AI for All with Kaushal Rath initiative is a precursor to the India AI Impact Summit 2026, which is set to take place in New Delhi next week. It is a great pride for India to be hosting a Summit of this kind for the first time, to be held in the Global South.”

[Image: Yuva AI for All. Source: PIB]

At the centre of this effort is Kaushal Rath, a fully equipped mobile computer lab with internet-enabled systems and audio-visual tools. The vehicle will travel across Delhi-NCR and later other regions, visiting schools, ITIs, colleges, and community spaces. The aim is not abstract policy messaging but practical exposure: hands-on demonstrations of AI and Generative AI tools, guided by trained facilitators and contextualised Indian use cases.

The course structure is intentionally accessible. It is a four-hour, self-paced programme with six modules, requiring no coding background. Participants learn AI concepts, ethics, and real-world applications. Upon completion, they receive certification, a move designed to add tangible value to academic and professional profiles.

Kavita Bhatia, Scientist G, MeitY and COO of the IndiaAI Mission, highlighted, “Under the IndiaAI Mission, skilling is one of the seven core pillars, and this initiative advances our goal of democratising AI education at scale. Through Kaushal Rath, we are enabling hands-on AI learning for students across institutions using connected systems, AI tools, and structured courses, including the YuvAI for All programme designed to demystify AI. By combining instructor-led training, micro- and nano-credentials, and nationwide outreach, we are ensuring that AI skilling becomes accessible to learners across regions.”

In a global context, this matters. Many nations speak of AI readiness, but few actively drive AI education beyond established technology hubs. Yuva AI for All attempts to bridge that gap.

Building Momentum Toward the India AI Impact Summit 2026

The launch of Yuva AI for All and Kaushal Rath also builds momentum toward the upcoming India AI Impact Summit 2026, scheduled from February 16–20 at Bharat Mandapam, New Delhi. Positioned as the first global AI summit to be hosted in the Global South, the event is anchored on three pillars: People, Planet, and Progress. The summit aims to translate global AI discussions into development-focused outcomes aligned with India’s national priorities. But what distinguishes this effort is its nationwide groundwork. Over the past months, seven Regional AI Conferences were conducted across Meghalaya, Gujarat, Odisha, Madhya Pradesh, Uttar Pradesh, Rajasthan, and Kerala under the IndiaAI Mission. These conferences focused on practical AI deployment in governance, healthcare, agriculture, education, language technologies, and public service delivery. Policymakers, startups, academia, industry leaders, and civil society participated, ensuring that discussions were not limited to theory. Insights from these regional consultations will directly shape the agenda of the India AI Impact Summit 2026.

A Nationwide AI Push, Not Just a Summit

Several major announcements emerged from the regional conferences. Among them:
  • A commitment to train one million youth under Yuva AI for All
  • Expansion of AI Data Labs and AI Labs in ITIs and polytechnics
  • Launch of Rajasthan’s AI/ML Policy 2026
  • Announcement of the Uttar Pradesh AI Mission
  • Introduction of Madhya Pradesh’s SpaceTech Policy 2026 integrating AI
  • Signing of MoUs with institutions including Google, IIT Delhi, and National Law University, Jodhpur
  • Rollout of AI Stacks and cloud adoption frameworks for state-level governance
These developments suggest that India’s AI roadmap is not confined to policy speeches. It is being operationalised across states, with funding commitments and institutional backing. For global observers, this signals something important. Emerging economies are not merely consumers of AI technologies; they are actively shaping governance models and skilling frameworks suited to their socio-economic realities.

Why AI Literacy in India Matters Globally

Artificial Intelligence is often discussed in terms of advanced research and frontier innovation. Yet the real challenge is adoption: ensuring people understand what AI is, what it can do, and how it should be used responsibly. By launching Yuva AI for All, India is placing emphasis on foundational awareness, not just high-end research. That approach reflects a broader recognition: AI will influence public service delivery, agriculture systems, healthcare models, and digital governance worldwide. Without widespread literacy, the risk of exclusion grows.

At the same time, scaling AI education in a country as large and diverse as India is no small task. The success of Kaushal Rath will depend on sustained outreach, quality training, and long-term institutional support. Still, the initiative marks a visible shift: AI is no longer framed as a specialist subject; it is being positioned as a public capability. As preparations intensify for the India AI Impact Summit 2026, Yuva AI for All stands out as a reminder that AI’s future will not be shaped only in boardrooms or research labs, but also in classrooms, ITIs, and community spaces across regions often left out of the digital conversation.

NYC Private School Tuition Breaks $70,000 Milestone for Fall

10 February 2026 at 09:00
The top private schools in New York City plan to charge more than $70,000 this year for tuition, an amount exceeding that of many elite colleges, as they pass on the costs of soaring expenses, including teacher salaries. From a report: Spence School, Dalton School and Nightingale-Bamford School on Manhattan's Upper East Side are among at least seven schools where the fees now exceed that threshold, according to school disclosures and Bloomberg reporting. Fees among 15 private schools across the city rose a median of 4.7%, outpacing inflation. Sending a kid to a New York private school has always been expensive, but the cost now is so high that even those with well-above-average salaries are feeling squeezed. Prices have risen dramatically in the past decade, up from a median of $39,900 in 2014.

Read more of this story at Slashdot.

NDSS 2025 – BinEnhance

3 February 2026 at 15:00

Session 11B: Binary Analysis

Authors, Creators & Presenters: Yongpan Wang (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Hong Li (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Xiaojie Zhu (King Abdullah University of Science and Technology, Thuwal, Saudi Arabia), Siyuan Li (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Chaopeng Dong (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Shouguo Yang (Zhongguancun Laboratory, Beijing, China), Kangyuan Qin (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China)

PAPER
BinEnhance: An Enhancement Framework Based on External Environment Semantics for Binary Code Search

Binary code search plays a crucial role in applications like software reuse detection and vulnerability identification. Currently, existing models are typically based on either internal code semantics or a combination of function call graphs (CG) and internal code semantics. However, these models have limitations. Internal code semantic models only consider the semantics within the function, ignoring inter-function semantics, making it difficult to handle situations such as function inlining. The combination of CG and internal code semantics is insufficient for addressing complex real-world scenarios. To address these limitations, we propose BINENHANCE, a novel framework designed to leverage inter-function semantics to enhance the expression of internal code semantics for binary code search. Specifically, BINENHANCE constructs an External Environment Semantic Graph (EESG), which establishes a stable and analogous external environment for homologous functions by using different inter-function semantic relations (e.g., call, location, and data-co-use). After the construction of the EESG, we utilize the embeddings generated by existing internal code semantic models to initialize EESG nodes. Finally, we design a Semantic Enhancement Model (SEM) that uses Relational Graph Convolutional Networks (RGCNs) and a residual block to learn valuable external semantics on the EESG for generating the enhanced semantic embedding. In addition, BINENHANCE utilizes data feature similarity to refine the cosine similarity of semantic embeddings. We conduct experiments under six different tasks (e.g., the function inlining scenario), and the results illustrate the performance and robustness of BINENHANCE. The application of BINENHANCE to HermesSim, Asm2vec, TREX, Gemini, and Asteria on two public datasets results in an improvement of Mean Average Precision (MAP) from 53.6% to 69.7%. Moreover, the efficiency increases fourfold.
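The final refinement step, combining embedding similarity with a data-feature similarity, can be sketched generically in Python. The Jaccard choice for the data features and the `alpha` weight below are illustrative assumptions, not BINENHANCE's actual formula:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def refined_similarity(emb_a, emb_b, feat_a, feat_b, alpha=0.8):
    """Blend semantic-embedding similarity with a data-feature
    similarity (here: Jaccard overlap of, say, string/constant sets)."""
    semantic = cosine(emb_a, emb_b)
    union = feat_a | feat_b
    data = len(feat_a & feat_b) / len(union) if union else 0.0
    return alpha * semantic + (1 - alpha) * data
```

The point of the blend is that data features (strings, constants) survive compiler transformations that distort code embeddings, so they can correct near-miss rankings.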

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenters' superb NDSS Symposium 2025 Conference content on the organization's YouTube channel.


The post NDSS 2025 – BinEnhance appeared first on Security Boulevard.

Upset at reports that he'd given up, Trump now wants $1B from Harvard

3 February 2026 at 12:08

Amid the Trump administration's attack on universities, Harvard has emerged as a particular target. Early on, the administration put $2.2 billion in research money on hold and shortly thereafter blocked all future funding while demanding intrusive control over Harvard's hiring and admissions. Unlike many of its peer institutions, Harvard fought back, filing and ultimately winning a lawsuit that restored the cut funds.

Despite Harvard's victory, the Trump administration continued to push for some sort of formal agreement that would settle the administration's accusations that Harvard created an environment that allowed antisemitism to flourish. In fact, it had become a running joke among some journalists that The New York Times had devoted a monthly column to reporting that a settlement between the two parties was near.

Given the government's loss of leverage, it was no surprise that the latest installment of said column included the detail that the latest negotiations had dropped demands that Harvard pay any money as part of a final agreement. The Trump administration had extracted hundreds of millions of dollars from some other universities and had demanded over a billion dollars from UCLA, so this appeared to be a major concession to Harvard.

Read full article

Comments

© joe daniel price

China's Decades-Old 'Genius Class' Pipeline Is Quietly Fueling Its AI Challenge To the US

2 February 2026 at 09:00
China's decades-old network of elite high-school "genius classes" -- ultra-competitive talent streams that pull an estimated 100,000 gifted teenagers out of regular schooling every year and run them through college-level science curricula -- has produced the core technical talent now building the country's leading AI and technology companies, the Financial Times reported Saturday. Graduates of these programs include the founder of ByteDance, the leaders of e-commerce giants Taobao and PDD, the billionaire behind super-app Meituan, the brothers who started Nvidia rival Cambricon, and the core engineers behind large language models at DeepSeek and Alibaba's Qwen. DeepSeek's research team of more than 100 was almost entirely composed of genius-class alumni when the startup released its R1 reasoning model last year at a fraction of the cost of its international rivals. The system traces to the mid-1980s, when China first sent students to the International Mathematical Olympiad and a handful of top high schools began creating dedicated competition-track classes. China now graduates around five million STEM majors annually -- compared to roughly half a million in the United States -- and in 2025, 22 of the 23 students it sent to the International Science Olympiads returned with gold medals. The computer science track has overtaken maths and physics as the most popular competition subject, a shift that accelerated after Beijing designated AI development a "key national growth strategy" in 2017.

Read more of this story at Slashdot.

NDSS 2025 – Alba: The Dawn Of Scalable Bridges For Blockchains

1 February 2026 at 11:00

Session 11A: Blockchain Security 2

Authors, Creators & Presenters: Giulia Scaffino (TU Wien), Lukas Aumayr (TU Wien), Mahsa Bastankhah (Princeton University), Zeta Avarikioti (TU Wien), Matteo Maffei (TU Wien)

PAPER
Alba: The Dawn of Scalable Bridges for Blockchains

Over the past decade, cryptocurrencies have garnered attention from academia and industry alike, fostering a diverse blockchain ecosystem and novel applications. The inception of bridges improved interoperability, enabling asset transfers across different blockchains to capitalize on their unique features. Despite their surge in popularity and the emergence of Decentralized Finance (DeFi), trustless bridge protocols remain inefficient, either relaying too much information (e.g., light-client-based bridges) or demanding expensive computation (e.g., zk-based bridges). These inefficiencies arise because existing bridges securely prove a transaction's on-chain inclusion on another blockchain. Yet this is unnecessary as off-chain solutions, like payment and state channels, permit safe transactions without on-chain publication. However, existing bridges do not support the verification of off-chain payments. This paper fills this gap by introducing the concept of Pay2Chain bridges that leverage the advantages of off-chain solutions like payment channels to overcome current bridges' limitations. Our proposed Pay2Chain bridge, named Alba, facilitates the efficient, secure, and trustless execution of conditional payments or smart contracts on a target blockchain based on off-chain events. Alba, besides its technical advantages, enriches the source blockchain's ecosystem by facilitating DeFi applications, multi-asset payment channels, and optimistic stateful off-chain computation. We formalize the security of Alba against Byzantine adversaries in the UC framework and complement it with a game theoretic analysis. We further introduce formal scalability metrics to demonstrate Alba's efficiency. Our empirical evaluation confirms Alba's efficiency in terms of communication complexity and on-chain costs, with its optimistic case incurring only twice the cost of a standard Ethereum transaction of token ownership transfer.


The post NDSS 2025 – Alba: The Dawn Of Scalable Bridges For Blockchains appeared first on Security Boulevard.

NDSS 2025 – PropertyGPT

31 January 2026 at 11:00

Session 11A: Blockchain Security 2

Authors, Creators & Presenters: Ye Liu (Singapore Management University), Yue Xue (MetaTrust Labs), Daoyuan Wu (The Hong Kong University of Science and Technology), Yuqiang Sun (Nanyang Technological University), Yi Li (Nanyang Technological University), Miaolei Shi (MetaTrust Labs), Yang Liu (Nanyang Technological University)

PAPER
PropertyGPT: LLM-driven Formal Verification of Smart Contracts through Retrieval-Augmented Property Generation

Formal verification is a technique that can prove the correctness of a system with respect to a certain specification or property. It is especially valuable for security-sensitive smart contracts that manage billions in cryptocurrency assets. Although existing research has developed various static verification tools (or provers) for smart contracts, a key missing component is the automated generation of comprehensive properties, including invariants, pre-/post-conditions, and rules. Hence, industry-leading players like Certora have to rely on their own or crowdsourced experts to manually write properties case by case. With recent advances in large language models (LLMs), this paper explores the potential of leveraging state-of-the-art LLMs, such as GPT-4, to transfer existing human-written properties (e.g., those from Certora auditing reports) and automatically generate customized properties for unknown code. To this end, we embed existing properties into a vector database and retrieve a reference property for LLM-based in-context learning to generate a new property for a given code. While this basic process is relatively straightforward, ensuring that the generated properties are (i) compilable, (ii) appropriate, and (iii) verifiable presents challenges. To address (i), we use the compilation and static analysis feedback as an external oracle to guide LLMs in iteratively revising the generated properties. For (ii), we consider multiple dimensions of similarity to rank the properties and employ a weighted algorithm to identify the top-K properties as the final result. For (iii), we design a dedicated prover to formally verify the correctness of the generated properties. We have implemented these strategies into a novel LLM-based property generation tool called PropertyGPT. Our experiments show that PropertyGPT can generate comprehensive and high-quality properties, achieving an 80% recall compared to the ground truth. 
It successfully detected 26 CVEs/attack incidents out of 37 tested and also uncovered 12 zero-day vulnerabilities, leading to $8,256 in bug bounty rewards.
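Step (ii) above, ranking candidate properties by multiple similarity dimensions with a weighted algorithm and keeping the top-K, can be sketched as follows. The dimension names and weights are hypothetical placeholders, not values from the paper:

```python
def rank_top_k(candidates, weights, k=3):
    """Rank candidates by a weighted sum of several similarity
    dimensions and keep the top-K.

    candidates: list of (name, {dimension: score}) pairs
    weights:    {dimension: weight}
    """
    def score(dims):
        # Unknown dimensions contribute nothing to the weighted sum.
        return sum(weights.get(d, 0.0) * s for d, s in dims.items())
    ranked = sorted(candidates, key=lambda c: score(c[1]), reverse=True)
    return [name for name, _ in ranked[:k]]
```

For example, with equal weights over two hypothetical dimensions, a candidate scoring (0.4, 0.9) outranks one scoring (0.9, 0.2), since only the weighted total matters.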


The post NDSS 2025 – PropertyGPT appeared first on Security Boulevard.

NDSS 2025 – Silence False Alarms

30 January 2026 at 15:00

Session 11A: Blockchain Security 2

Authors, Creators & Presenters: Qiyang Song (Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences), Heqing Huang (Institute of Information Engineering, Chinese Academy of Sciences), Xiaoqi Jia (Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences), Yuanbo Xie (Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences), Jiahao Cao (Institute for Network Sciences and Cyberspace, Tsinghua University)

PAPER
Silence False Alarms: Identifying Anti-Reentrancy Patterns on Ethereum to Refine Smart Contract Reentrancy Detection

Reentrancy vulnerabilities in Ethereum smart contracts have caused significant financial losses, prompting the creation of several automated reentrancy detectors. However, these detectors frequently yield a high rate of false positives due to coarse detection rules, often misclassifying contracts protected by anti-reentrancy patterns as vulnerable. Thus, there is a critical need for the development of specialized automated tools to assist these detectors in accurately identifying anti-reentrancy patterns. While existing code analysis techniques show promise for this specific task, they still face significant challenges in recognizing anti-reentrancy patterns. These challenges are primarily due to the complex and varied features of anti-reentrancy patterns, compounded by insufficient prior knowledge about these features. This paper introduces AutoAR, an automated recognition system designed to explore and identify prevalent anti-reentrancy patterns in Ethereum contracts. AutoAR utilizes a specialized graph representation, RentPDG, combined with a data filtration approach, to effectively capture anti-reentrancy-related semantics from a large pool of contracts. Based on RentPDGs extracted from these contracts, AutoAR employs a recognition model that integrates a graph auto-encoder with a clustering technique, specifically tailored for precise anti-reentrancy pattern identification. Experimental results show AutoAR can assist existing detectors in identifying 12 prevalent anti-reentrancy patterns with 89% accuracy, and when integrated into the detection workflow, it significantly reduces false positives by over 85%.


The post NDSS 2025 – Silence False Alarms appeared first on Security Boulevard.

NDSS 2025 – Provably Unlearnable Data Examples

30 January 2026 at 11:00

Session 10D: Machine Unlearning

Authors, Creators & Presenters: Derui Wang (CSIRO's Data61), Minhui Xue (CSIRO's Data61), Bo Li (The University of Chicago), Seyit Camtepe (CSIRO's Data61), Liming Zhu (CSIRO's Data61)

PAPER
Provably Unlearnable Data Examples

The exploitation of publicly accessible data has led to escalating concerns regarding data privacy and intellectual property (IP) breaches in the age of artificial intelligence. To safeguard both data privacy and IP-related domain knowledge, efforts have been undertaken to render shared data unlearnable for unauthorized models in the wild. Existing methods apply empirically optimized perturbations to the data in the hope of disrupting the correlation between the inputs and the corresponding labels such that the data samples are converted into Unlearnable Examples (UEs). Nevertheless, the absence of mechanisms to verify the robustness of UEs against uncertainty in unauthorized models and their training procedures engenders several under-explored challenges. First, it is hard to quantify the unlearnability of UEs against unauthorized adversaries from different runs of training, leaving the soundness of the defense in obscurity. Particularly, as a prevailing evaluation metric, empirical test accuracy faces generalization errors and may not plausibly represent the quality of UEs. This also leaves room for attackers, as there is no rigid guarantee of the maximal test accuracy achievable by attackers. Furthermore, we find that a simple recovery attack can restore the clean-task performance of classifiers trained on UEs by slightly perturbing the learned weights. To mitigate the aforementioned problems, in this paper, we propose a mechanism for certifying the so-called (q, η)-Learnability of an unlearnable dataset via parametric smoothing. A lower certified (q, η)-Learnability indicates a more robust and effective protection over the dataset. Concretely, we 1) improve the tightness of certified (q, η)-Learnability and 2) design Provably Unlearnable Examples (PUEs) which have reduced (q, η)-Learnability. According to experimental results, PUEs demonstrate both decreased certified (q, η)-Learnability and enhanced empirical robustness compared to existing UEs. Compared to the competitors on classifiers with uncertainty in parameters, PUEs reduce at most 18.9% of certified (q, η)-Learnability on ImageNet and 54.4% of the empirical test accuracy score on CIFAR-100.


The post NDSS 2025 – Provably Unlearnable Data Examples appeared first on Security Boulevard.

NDSS 2025 – TrajDeleter: Enabling Trajectory Forgetting In Offline Reinforcement Learning Agents

29 January 2026 at 11:00

Session 10D: Machine Unlearning

Authors, Creators & Presenters: Chen Gong (University of Virginia), Kecen Li (Chinese Academy of Sciences), Jin Yao (University of Virginia), Tianhao Wang (University of Virginia)

PAPER
TrajDeleter: Enabling Trajectory Forgetting in Offline Reinforcement Learning Agents

Reinforcement learning (RL) trains an agent from experiences interacting with the environment. In scenarios where online interactions are impractical, offline RL, which trains the agent using pre-collected datasets, has become popular. While this new paradigm presents remarkable effectiveness across various real-world domains, like healthcare and energy management, there is a growing demand to enable agents to rapidly and completely eliminate the influence of specific trajectories from both the training dataset and the trained agents. To address this problem, this paper advocates TRAJDELETER, the first practical approach to trajectory unlearning for offline RL agents. The key idea of TRAJDELETER is to guide the agent to demonstrate deteriorating performance when it encounters states associated with unlearning trajectories. Simultaneously, it ensures the agent maintains its original performance level when facing other remaining trajectories. Additionally, we introduce TRAJAUDITOR, a simple yet efficient method to evaluate whether TRAJDELETER successfully eliminates the influence of the specific trajectories from the offline RL agent. Extensive experiments conducted on six offline RL algorithms and three tasks demonstrate that TRAJDELETER requires only about 1.5% of the time needed for retraining from scratch. It effectively unlearns an average of 94.8% of the targeted trajectories yet still performs well in actual environment interactions after unlearning. The replication package and agent parameters are available.


The post NDSS 2025 – TrajDeleter: Enabling Trajectory Forgetting In Offline Reinforcement Learning Agents appeared first on Security Boulevard.

NDSS 2025 – Recurrent Private Set Intersection For Unbalanced Databases With Cuckoo Hashing

28 January 2026 at 15:00

Session 10C: Privacy Preservation

Authors, Creators & Presenters: Eduardo Chielle (New York University Abu Dhabi), Michail Maniatakos (New York University Abu Dhabi)

PAPER
Recurrent Private Set Intersection for Unbalanced Databases with Cuckoo Hashing and Leveled FHE

A Private Set Intersection (PSI) protocol is a cryptographic method allowing two parties, each with a private set, to determine the intersection of their sets without revealing any information about their entries except for the intersection itself. While extensive research has focused on PSI protocols, most studies have centered on scenarios where two parties possess sets of similar sizes, assuming a semi-honest threat model. However, when the sizes of the parties' sets differ significantly, a generalized solution tends to underperform compared to a specialized one, as recent research has demonstrated. Additionally, conventional PSI protocols are typically designed for a single execution, requiring the entire protocol to be re-executed for each set intersection. This approach is suboptimal for applications such as URL denylisting and email filtering, which may involve multiple set intersections of small sets against a large set (e.g., one for each email received). In this study, we propose a novel PSI protocol optimized for the recurrent setting where parties have unbalanced set sizes. We implement our protocol using Levelled Fully Homomorphic Encryption and Cuckoo hashing, and introduce several optimizations to ensure real-time performance. By utilizing the Microsoft SEAL library, we demonstrate that our protocol can perform private set intersections in 20 ms and 240 ms on 10 Gbps and 100 Mbps networks, respectively. Compared to existing solutions, our protocol offers significant improvements, reducing set intersection times by one order of magnitude on slower networks and by two orders of magnitude on faster networks.
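Cuckoo hashing, one of the protocol's building blocks, places each key in one of two candidate slots and relocates occupants on collision, which keeps lookups constant-time. Below is a minimal standalone sketch of the idea, unrelated to the SEAL-based implementation in the paper:

```python
class CuckooTable:
    """Minimal two-function cuckoo hash table: each key lives in one of
    two candidate slots; inserts evict and relocate on collision."""
    def __init__(self, size=64, max_kicks=32):
        self.size = size
        self.max_kicks = max_kicks
        self.slots = [None] * size

    def _positions(self, key):
        # Two independent slot candidates derived from salted hashes.
        return (hash(("h1", key)) % self.size,
                hash(("h2", key)) % self.size)

    def insert(self, key):
        p1, p2 = self._positions(key)
        if self.slots[p1] in (None, key):
            self.slots[p1] = key
            return True
        if self.slots[p2] in (None, key):
            self.slots[p2] = key
            return True
        # Both candidates taken: evict the occupant of p1 and relocate
        # it to its alternate slot, repeating up to max_kicks times.
        pos = p1
        for _ in range(self.max_kicks):
            key, self.slots[pos] = self.slots[pos], key
            a, b = self._positions(key)
            pos = b if pos == a else a
            if self.slots[pos] is None:
                self.slots[pos] = key
                return True
        return False  # cycle detected: a real table would rehash

    def contains(self, key):
        p1, p2 = self._positions(key)
        return self.slots[p1] == key or self.slots[p2] == key
```

In unbalanced PSI, the large set is hashed once into such a structure so that each small-set query only touches a constant number of slots, which is what makes the recurrent setting cheap.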


The post NDSS 2025 – Recurrent Private Set Intersection For Unbalanced Databases With Cuckoo Hashing appeared first on Security Boulevard.

NDSS 2025 – Iris: Dynamic Privacy Preserving Search In Authenticated Chord Peer-To-Peer Networks

28 January 2026 at 11:00

Session 10C: Privacy Preservation

Authors, Creators & Presenters: Angeliki Aktypi (University of Oxford), Kasper Rasmussen (University of Oxford)

PAPER
Iris: Dynamic Privacy Preserving Search in Authenticated Chord Peer-to-Peer Networks

In structured peer-to-peer networks, like Chord, users find data by asking a number of intermediate nodes in the network. Each node provides the identity of the closest known node to the address of the data, until eventually the node responsible for the data is reached. This structure means that the intermediate nodes learn the address of the sought-after data. Revealing this information to other nodes makes Chord unsuitable for applications that require query privacy, so in this paper we present a scheme, Iris, to provide query privacy while maintaining compatibility with the existing Chord protocol. This means that anyone using it will be able to execute a privacy-preserving query, but it does not require other nodes in the network to use it (or even know about it). In order to better capture the privacy achieved by the iterative nature of the search, we propose a new privacy notion, inspired by k-anonymity. This new notion, called (α, δ)-privacy, allows us to formulate privacy guarantees against adversaries that collude and take advantage of the total amount of information leaked in all iterations of the search. We present a security analysis of the proposed algorithm based on the privacy notion we introduce. We also develop a prototype of the algorithm in MATLAB and evaluate its performance. Our analysis proves Iris to be (α, δ)-private while introducing a modest performance overhead. Importantly, the overhead is tunable and proportional to the required level of privacy, so no privacy means no overhead.
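The leakage Iris addresses is easy to see in a toy, successor-only Chord walk, where every contacted node learns the target identifier. This deliberately simplified sketch omits finger tables and is not drawn from the paper:

```python
def _between(x, a, b, size):
    """True if x lies in the half-open ring interval (a, b] mod size."""
    x, a, b = x % size, a % size, b % size
    if a < b:
        return a < x <= b
    return x > a or x <= b  # interval wraps past zero

def chord_walk(successors, start, key_id, m=6):
    """Walk a successor-only Chord ring until the node responsible for
    key_id is found. Every node on the returned path sees key_id,
    which is exactly the per-hop leakage Iris is designed to hide.
    successors: dict mapping each node id to its successor's id."""
    size = 2 ** m
    node, path = start, [start]
    while True:
        succ = successors[node]
        if _between(key_id, node, succ, size):  # succ owns the key
            path.append(succ)
            return path
        node = succ
        path.append(node)
```

On a four-node ring {0, 16, 32, 48} with m = 6, a lookup for key 40 starting at node 0 contacts 0, 16, 32, and 48, and all four learn the query target.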


The post NDSS 2025 – Iris: Dynamic Privacy Preserving Search In Authenticated Chord Peer-To-Peer Networks appeared first on Security Boulevard.

NDSS 2025 – On the Robustness Of LDP Protocols For Numerical Attributes Under Data Poisoning Attacks

27 January 2026 at 15:00

Session 10C: Privacy Preservation

Authors, Creators & Presenters: Xiaoguang Li (Xidian University, Purdue University), Zitao Li (Alibaba Group (U.S.) Inc.), Ninghui Li (Purdue University), Wenhai Sun (Purdue University, West Lafayette, USA)

PAPER
On the Robustness of LDP Protocols for Numerical Attributes under Data Poisoning Attacks

Recent studies reveal that local differential privacy (LDP) protocols are vulnerable to data poisoning attacks, where an attacker can manipulate the final estimate on the server by leveraging the characteristics of LDP and sending carefully crafted data from a small fraction of controlled local clients. This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments. In this paper, we conduct a systematic investigation of the robustness of state-of-the-art LDP protocols for numerical attributes, i.e., categorical frequency oracles (CFOs) with binning and consistency, and distribution reconstruction. We evaluate protocol robustness through an attack-driven approach and propose new metrics for cross-protocol attack gain measurement. The results indicate that Square Wave and CFO-based protocols in the server setting are more robust against the attack compared to the CFO-based protocols in the user setting. Our evaluation also reveals new relationships between LDP security and its inherent design choices. We found that the hash domain size in local-hashing-based LDP has a profound impact on protocol robustness beyond the well-known effect on utility. Further, we propose a zero-shot attack detection method that leverages the rich reconstructed distribution information. The experiments show that our detection significantly improves on existing methods and effectively identifies data manipulation in challenging scenarios.

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.

Permalink

The post NDSS 2025 – On the Robustness Of LDP Protocols For Numerical Attributes Under Data Poisoning Attacks appeared first on Security Boulevard.

NDSS 2025 – Detecting Ransomware Despite I/O Overhead: A Practical Multi-Staged Approach

27 January 2026 at 11:00

Session 10B: Ransomware

Authors, Creators & Presenters: Christian van Sloun (RWTH Aachen University), Vincent Woeste (RWTH Aachen University), Konrad Wolsing (RWTH Aachen University & Fraunhofer FKIE), Jan Pennekamp (RWTH Aachen University), Klaus Wehrle (RWTH Aachen University)

PAPER
Detecting Ransomware Despite I/O Overhead: A Practical Multi-Staged Approach

Ransomware attacks have become one of the most widely feared cyber attacks for businesses and home users. Since attacks are evolving and use advanced phishing campaigns and zero-day exploits, everyone is at risk, from novice users to experts. As a result, much research has focused on preventing and detecting ransomware attacks, with real-time monitoring of I/O activity being the most prominent approach for detection. These approaches have in common that they inject code into the operating system's I/O stack, an increasingly optimized subsystem. However, they seemingly do not consider the impact that integrating such mechanisms has on system performance, or they only consider slow storage media, such as rotational hard disk drives. This paper analyzes the impact of monitoring different features of relevant I/O operations on Windows and Linux. We find that even simple features, such as the entropy of a buffer, can increase execution time by 350% and reduce SSD performance by up to 75%. To combat this degradation, we propose adjusting the number of monitored features based on a process's behavior in real time. To this end, we design and implement a multi-staged IDS that can adjust overhead by moving a process between stages that monitor different numbers of features. By moving seemingly benign processes to stages with fewer features and less overhead, while moving suspicious processes to stages with more features to confirm the suspicion, the average time a system requires to perform I/O operations can be reduced drastically. We evaluate the effectiveness of our design by combining actual I/O behavior from a public dataset with the measurements we gathered for each I/O operation, and find that a multi-staged design can reduce the overhead of I/O operations by an order of magnitude while maintaining detection accuracy similar to that of traditional single-staged approaches. As a result, real-time behavior monitoring for ransomware detection becomes feasible despite its inherent overhead.
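To see why even a "simple" feature is expensive, note that buffer entropy must touch every byte of every monitored write. A rough sketch of the Shannon-entropy feature such monitors compute (illustrative only, not the paper's implementation):

```python
import math
import os
from collections import Counter

def shannon_entropy(buf: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte (0.0 to 8.0).
    Computing this per I/O request means an O(n) pass over every buffer."""
    if not buf:
        return 0.0
    n = len(buf)
    return -sum((c / n) * math.log2(c / n) for c in Counter(buf).values())

# Repetitive plaintext sits far below the 8 bits/byte ceiling...
low = shannon_entropy(b"hello hello hello " * 100)
# ...while random, ciphertext-like data gets close to it, which is why
# entropy is a popular (but costly) ransomware-encryption signal.
high = shannon_entropy(os.urandom(4096))
```

A multi-staged design amortizes this cost by computing the expensive features only for processes that cheaper stages have already flagged as suspicious.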

The post NDSS 2025 – Detecting Ransomware Despite I/O Overhead: A Practical Multi-Staged Approach appeared first on Security Boulevard.

NDSS 2025 – all your (data)base are belong to us: Characterizing Database Ransom(ware) Attacks

26 January 2026 at 15:00

Session 10B: Ransomware

Authors, Creators & Presenters: Kevin van Liebergen (IMDEA Software Institute), Gibran Gomez (IMDEA Software Institute), Srdjan Matic (IMDEA Software Institute), Juan Caballero (IMDEA Software Institute)

PAPER
all your (data)base are belong to us: Characterizing Database Ransom(ware) Attacks

We present the first systematic study of database ransom(ware) attacks, a class of attacks where attackers scan for database servers, log in by leveraging the lack of authentication or weak credentials, drop the database contents, and demand a ransom to return the deleted data. We examine 23,736 ransom notes collected from 60,427 compromised database servers over three years, and set up database honeypots to obtain a first-hand view of current attacks. Database ransom(ware) attacks are prevalent, with 6K newly infected servers in March 2024, a 60% increase over a year earlier. Our honeypots get infected within 14 hours of being connected to the Internet. Weak authentication issues are two orders of magnitude more frequent on Elasticsearch servers than on MySQL servers, due to slow adoption of the latest Elasticsearch versions. To analyze who is behind database ransom(ware) attacks, we implement a clustering approach that first identifies campaigns using the similarity of the ransom notes' text. Then, it determines which campaigns are run by the same group by leveraging indicator reuse and information from the Bitcoin blockchain. For each group, it computes properties such as the number of compromised servers, the lifetime, the revenue, and the indicators used. Our approach identifies that the 60,427 database servers are victims of 91 campaigns run by 32 groups. It uncovers a dominant group responsible for 76% of the infected servers and 90% of the financial impact. We find links between the dominant group, a nation-state, and a previous attack on Git repositories.
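As an illustration of the first clustering step, grouping notes into campaigns by text similarity, here is a minimal greedy sketch using difflib's similarity ratio. The sample notes and the 0.8 threshold are invented for illustration; the paper's actual pipeline, similarity measure, and thresholds are its own:

```python
from difflib import SequenceMatcher

def cluster_notes(notes, threshold=0.8):
    """Greedy single-pass clustering: each note joins the first campaign
    whose representative (first) note it resembles above the threshold,
    otherwise it starts a new campaign."""
    campaigns = []
    for note in notes:
        for group in campaigns:
            if SequenceMatcher(None, note, group[0]).ratio() >= threshold:
                group.append(note)
                break
        else:
            campaigns.append([note])
    return campaigns

# Two notes from the same template (only the amount and the address
# placeholder differ) plus one unrelated note.
notes = [
    "Your DB is backed up. Send 0.01 BTC to addr1 to restore.",
    "Your DB is backed up. Send 0.02 BTC to addr2 to restore.",
    "All data exfiltrated. Pay or it will be leaked.",
]
groups = cluster_notes(notes)
```

Attributing campaigns to groups would then layer on indicator reuse (e.g., contact and wallet addresses) and blockchain analysis, as the abstract describes.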

The post NDSS 2025 – all your (data)base are belong to us: Characterizing Database Ransom(ware) Attacks appeared first on Security Boulevard.

NDSS 2025 – ERW-Radar

26 January 2026 at 11:00

Authors, Creators & Presenters: Lingbo Zhao (Institute of Information Engineering, Chinese Academy of Sciences), Yuhui Zhang (Institute of Information Engineering, Chinese Academy of Sciences), Zhilu Wang (Institute of Information Engineering, Chinese Academy of Sciences), Fengkai Yuan (Institute of Information Engineering, CAS), Rui Hou (Institute of Information Engineering, Chinese Academy of Sciences)

PAPER
ERW-Radar: An Adaptive Detection System against Evasive Ransomware by Contextual Behavior Detection and Fine-grained Content Analysis

To evade existing antivirus software and detection systems, ransomware authors tend to obscure behavioral differences from benign programs by imitating them or by weakening malicious behaviors during encryption. Existing defense solutions have limited effect in defending against evasive ransomware. Fortunately, through extensive observation, we find that the I/O behaviors of evasive ransomware exhibit a unique repetitiveness during encryption. This is rarely observed in benign programs. Moreover, the $\chi^2$ test and the probability distribution of byte streams can effectively distinguish encrypted files from benignly modified files. Inspired by these observations, we propose ERW-Radar, a detection system that detects evasive ransomware accurately and efficiently. We make three breakthroughs: 1) a contextual correlation mechanism to detect malicious behaviors; 2) a fine-grained content analysis mechanism to identify encrypted files; and 3) adaptive mechanisms to achieve a better trade-off between accuracy and efficiency. Experiments show that ERW-Radar detects evasive ransomware with an accuracy of 96.18% while maintaining an FPR of 5.36%. The average overhead of ERW-Radar is 5.09% in CPU utilization and 3.80% in memory utilization.
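The content-analysis idea, telling encrypted files apart from benignly modified ones via byte-distribution statistics, can be sketched with a plain chi-squared test against the uniform distribution. This is illustrative only (the data and thresholds are invented, and ERW-Radar's fine-grained analysis is more sophisticated):

```python
import os

def chi2_uniform(buf: bytes) -> float:
    """Chi-squared statistic of the byte-frequency histogram against a
    uniform distribution over 256 values. Ciphertext looks uniform and
    scores low; structured or benignly modified data scores high."""
    n = len(buf)
    expected = n / 256
    counts = [0] * 256
    for b in buf:
        counts[b] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

# With 255 degrees of freedom, random data averages a statistic near 255,
# while text concentrates mass on a few byte values and scores enormously.
random_score = chi2_uniform(os.urandom(8192))          # ciphertext-like
text_score = chi2_uniform(b"ransom note text " * 512)  # benign-like
```

A detector can therefore threshold the statistic to flag files whose contents suddenly turn uniform after being rewritten.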

The post NDSS 2025 – ERW-Radar appeared first on Security Boulevard.

So, You've Hit an Age Gate. What Now?

14 January 2026 at 12:08

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what's at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and, yes, safe internet.

EFF is against age gating and age verification mandates, and we hope we'll win in getting existing ones overturned and new ones prevented. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.

At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you're presented with, and answers to those questions for some of the most popular social media sites. Even though there's no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.

Follow the Data

Since we know that leaks happen despite the best efforts of software engineers, we generally recommend submitting the absolute least amount of data possible. Unfortunately, that's not going to be possible for everyone. Even facial age estimation solutions where pictures of your face never leave your device, which offer some protection against data leakage, are not a good option for all users: facial age estimation works less well for people of color, trans and nonbinary people, and people with disabilities. There are some systems that use fancy cryptography so that a digital ID saved to your device won't tell the website anything more than whether you meet the age requirement, but access to such a digital ID isn't available to everyone or on all platforms. You may also not want to register for a digital ID and save it to your phone if you don't want to take the chance of all the information on it being exposed upon request of an over-zealous verifier, or if you simply don't want to be part of a digital ID system.

If you're given the option of selecting a verification method and are deciding which to use, we recommend considering the following questions for each process allowed by each vendor:

    • Data: What info does each method require?
    • Access: Who can see the data during the course of the verification process?
    • Retention: Who will hold onto that data after the verification process, and for how long?
    • Audits: How sure are we that the stated claims will happen in practice? For example, are there external audits confirming that data is not accidentally leaked to another site along the way? Ideally these will be in-depth, security-focused audits by specialized auditors like NCC Group or Trail of Bits, instead of audits that merely certify adherence to standards.
    • Visibility: Who will be aware that you're attempting to verify your age, and will they know which platform you're trying to verify for?

We attempt to provide answers to these questions below. To begin, there are two major factors to consider: the tools each platform uses, and the overall system those tools are part of.

In general, most platforms offer age estimation options like face scans as a first line of age assurance. These vary in intrusiveness, but their main problem is inaccuracy, particularly for marginalized users. Third-party age verification vendors Private ID and k-ID offer on-device facial age estimation, but another common vendor, Yoti, sends the image to their servers during age checks by some of the biggest platforms. This risks leaking the images themselves, and also the fact that you're using that particular website, to the third party.

Then there are the document-based verification services, which require you to submit a hard identifier like a government-issued ID. This method thus requires you to prove both your age and your identity. A platform can do this in-house through a designated dataflow, or by sending that data to a third party. We've already seen examples of how this can fail. For example, Discord routed users' ID data through its general customer service workflow so that a third-party vendor could perform manual review of verification appeals. No one involved ever deleted users' data, so when the system was breached, Discord had to apologize for the catastrophic disclosure of nearly 70,000 photos of users' ID documents. Overly long retention periods expose documents to the risk of breaches and historical data requests, and some document verifiers have retention periods that are needlessly long. This is the case with Incode, which provides ID verification for TikTok: Incode holds onto images forever by default, though TikTok should automatically start the deletion process on your behalf.

Some platforms offer alternatives, like proving that you own a credit card, or asking for your email address to check whether it appears in databases associated with adulthood (like home mortgage databases). These tend to involve less risk when it comes to the sensitivity of the data itself, especially since credit cards can be replaced, but in general they still undermine anonymity and pseudonymity and pose a risk of tracking your online activity. We'd prefer to see more assurances across the board about how information is handled.

Each site offers users a menu of age assurance options to choose from. We've chosen to present these options in the rough order that we expect most people to prefer. Jump directly to a platform to learn more about its age checks:

Meta – Facebook, Instagram, WhatsApp, Messenger, Threads

Inferred Age

If Meta can guess your age, you may never even see an age verification screen. Meta, which runs Facebook, Threads, Instagram, Messenger, and WhatsApp, first tries to use information you've posted to guess your age, like looking at "Happy birthday!" messages. It's a creepy reminder that they already have quite a lot of information about you.

If Meta cannot guess your age, or if Meta infers you're too young, it will next ask you to verify your age using either facial age estimation or an upload of your photo ID.

Face Scan

If you choose to use facial age estimation, you'll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that "as soon as an age has been estimated, the facial image is immediately and permanently deleted." Though that's not as good as Yoti never having the data in the first place, its security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti's app and website are filled with trackers, so the fact that you're verifying your age could be not only shared with Yoti, but leaked to third-party data brokers as well.

You may not want to use this option if you're worried about third parties potentially being able to know you're trying to verify your age with Meta. You also might not want to use it if you're worried about a current picture of your face accidentally leaking: for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you in case the image leaks.

Upload ID

If Yoti's age estimation decides your face looks too young, or if you opt out of facial age estimation, your next recourse is to send Meta a photo of your ID. Meta sends that photo to Yoti to verify the ID. Meta says it will hold onto that ID image for 30 days, then delete it. Meanwhile, Yoti claims it will delete the image immediately after verification. Of course, bugs and process oversights exist, such as accidentally replicating information in logs or support queues, but at least they have stated processes. Your ID contains sensitive information such as your full legal name and home address. Using this option not only runs the (hopefully small, but never nonexistent) risk of that data getting leaked through errors or hacking, but it also lets Meta see the information needed to tie your profile to your identity, which you may not want. If you don't want Meta to know your name and where you live, or don't want to rely on both Meta and Yoti to keep their deletion promises, this option may not be right for you.

Google – Gmail, YouTube

Inferred Age

If Google can guess your age, you may never even see an age verification screen. Your Google account is typically connected to your YouTube account, so if (like mine) your YouTube account is old enough to vote, you may not need to verify your Google account at all. Google first uses information it already knows to try to guess your age, like how long you've had the account and your YouTube viewing habits. It's yet another creepy reminder of how much information these corporations have on you, but at least in this case they aren't likely to ask for even more identifying data.

If Google cannot guess your age, or decides you're too young, Google will next ask you to verify your age. You'll be given a variety of options for how to do so, with availability that will depend on your location and your age.

Google's methods to assure your age include ID verification, facial age estimation, verification by proxy, and digital ID. To prove you're over 18, you may be able to use facial age estimation, give Google your credit card information, or tell a third-party provider your email address.

Face Scan

If you choose to use facial age estimation, you'll be sent to a website run by Private ID, a third-party verification service. The website will load Private ID's verifier within the page, meaning your selfie will be checked without any images leaving your device. If the system decides you're over 18, it will let Google know that, and only that. Of course, no technology is perfect: should Private ID be mandated to target you specifically, there's nothing to stop it from sending down code that does in fact upload your image, and you probably won't notice. But unless your threat model includes being specifically targeted by a state actor or by Private ID, that's unlikely to be something you need to worry about. For most people, no one else will see your image during this process. Private ID will, however, be told that your device is trying to verify your age with Google, and Google will still find out if Private ID thinks that you're under 18.

If Private ID's age estimation decides your face looks too young, you may next be able to decide if you'd rather let Google verify your age by giving it your credit card information, photo ID, or digital ID, or by letting Google send your email address to a third-party verifier.

Email Usage

If you choose to provide your email address, Google sends it on to a company called VerifyMy. VerifyMy will use your email address to see if you've done things like get a mortgage or pay for utilities using that address. If you use Gmail as your email provider, this may be a privacy-protective option with respect to Google, as Google will then already know the email address associated with the account. But it does tell VerifyMy and its third-party partners that the person behind this email address is looking to verify their age, which you may not want them to know. VerifyMy uses "proprietary algorithms and external data sources" that involve sending your email address to "trusted third parties, such as data aggregators." It claims to "ensure that such third parties are contractually bound to meet these requirements," but you'll have to trust it on that one: we haven't seen any mention of who those parties are, so you'll have no way to check up on their practices and security. On the bright side, VerifyMy and its partners do claim to delete your information as soon as the check is completed.

Credit Card Verification

If you choose to let Google use your credit card information, you'll be asked to set up a Google Payments account. Note that debit cards won't be accepted, since it's much easier for many debit cards to be issued to people under 18. Google will then charge a small amount to the card, and refund it once it goes through. If you choose this method, you'll have to tell Google your credit card info, but the fact that it's done through Google Payments (their regular card-processing system) means that at least your credit card information won't be sitting around in some unsecured system. Even if your credit card information happens to accidentally be leaked, this is a relatively low-risk option, since credit cards come with solid fraud protection. If your credit card info gets leaked, you should easily be able to dispute fraudulent charges and replace the card.

Digital ID

In some regions, you'll be given the option to use your digital ID to verify your age with Google. In some cases, it's possible to reveal only your age information when you use a digital ID. If you're given that choice, it can be a good privacy-preserving option. Depending on the implementation, there's a chance that the verification step will "phone home" to the ID provider (usually a government) to let them know the service asked for your age. It's a complicated and varied topic that you can learn more about by visiting EFF's page on digital identity.

Upload ID

Should none of these options work for you, your final recourse is to send Google a photo of your ID. Here, you'll be asked to take a photo of an acceptable ID and send it to Google. Though the help page states only that your ID "will be stored securely," the verification process page says the ID "will be deleted after your date of birth is successfully verified." Acceptable IDs vary by country, but are generally government-issued photo IDs. We like that it's deleted immediately, though we have questions about what Google means when it says your ID will be used to "improve [its] verification services for Google products and protect against fraud and abuse." No system is perfect, and we can only hope that Google schedules outside audits regularly.

TikTok

Inferred Age

If TikTok can guess your age, you may never even see an age verification notification. TikTok first tries to use information you've posted to estimate your age, looking through your videos and photos to analyze your face and listen to your voice. TikTok treats your uploading of any videos as consent to try to guess how old you look and sound.

If TikTok cannot guess your age, or decides you're too young, it will automatically revoke your access based on age, either restricting features or deleting your account. To get your access and account back, you'll have a limited amount of time to verify your age and appeal the age decision. As soon as you see the notification that your account is restricted, you'll want to act fast, because in some places you'll have as little as 23 days before the deadline passes.

When you get that notification, you're given various options to verify your age based on your location.

Face Scan

If you're given the option to use facial age estimation, you'll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that "as soon as an age has been estimated, the facial image is immediately and permanently deleted." Though that's not as good as Yoti never having the data in the first place, its security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti's app and website are filled with trackers, so the fact that you're verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.

You may not want to use this option if you're worried about third parties potentially being able to know you're trying to verify your age with TikTok. You also might not want to use it if you're worried about a current picture of your face accidentally leaking: for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID or your credit card information, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you in case the image leaks.

Credit Card Verification

If you have a credit card in your name, TikTok will accept that as proof that you're over 18. Note that debit cards won't be accepted, since it's much easier for many debit cards to be issued to people under 18. TikTok will charge a small amount to the credit card, and refund it once it goes through. It's unclear if this goes through their regular payment process, or if your credit card information will be sent through and stored in a separate, less secure system. Luckily, these days credit cards come with solid fraud protection, so if your credit card gets leaked, you should easily be able to dispute fraudulent charges and replace the card. That said, we'd rather TikTok provide assurances that the information will be processed securely.

Credit Card Verification of a Parent or Guardian

Sometimes, if you're between 13 and 17, you'll be given the option to let your parent or guardian confirm your age. You'll tell TikTok their email address, and TikTok will send your parent or guardian an email asking them (a) to confirm your date of birth, and (b) to verify their own age by proving that they own a valid credit card. This option doesn't always seem to be offered, and in the one case we could find, it's possible that TikTok never followed up with the parent. So it's unclear how or if TikTok verifies that the adult whose email you provide is your parent or guardian. If you want to use credit card verification but you're not old enough to have a credit card, and you're OK with letting an adult know you use TikTok, this option may be reasonable to try.

Photo with a Random Adult?

Bizarrely, if you're between 13 and 17, TikTok claims to offer the option to take a photo with literally any random adult to confirm your age. Its help page says that any trusted adult over 25 can be chosen, as long as they're holding a piece of paper with the code on it that TikTok provides. It also mentions that a third-party provider is used here, but doesn't say which one. We haven't found any evidence of this verification method being offered. Please do let us know if you've used this method to verify your age on TikTok!

Photo ID and Face Comparison

If you aren't offered or have failed the other options, you'll have to verify your age by submitting a copy of your ID and a matching photo of your face. You'll be sent to Incode, a third-party verification service. In a disappointing failure to meet the industry standard, Incode itself doesn't automatically delete the data you give it once the process is complete, but TikTok does claim to "start the process to delete the information you submitted," which should include telling Incode to delete your data once the process is done. If you want to be sure, you can ask Incode to delete that data yourself. Incode tells TikTok that you met the age threshold without providing your exact date of birth, but TikTok wants to know the exact date anyway, so it'll ask for your date of birth even after your age has been verified.

TikTok itself might not see your actual ID depending on its implementation choices, but Incode will. Your ID contains sensitive information such as your full legal name and home address. Using this option runs the (hopefully small, but never nonexistent) risk of that data getting accidentally leaked through errors or hacking. If you don't want TikTok or Incode to know your name, what you look like, and where you live, or if you don't want to rely on both TikTok and Incode to keep their deletion promises, then this option may not be right for you.

Everywhere Else

We've covered the major providers here, but age verification is unfortunately being required of many other services that you might use as well. While the providers and processes may vary, the same general principles apply. If you're trying to choose what information to provide to continue to use a service, consider the "follow the data" questions mentioned above, and try to find out how the company will store and process the data you give it. The less sensitive the information, the fewer the people with access to it, and the more quickly it is deleted, the better. You may even come to recognize popular names in the age verification industry: Spotify and OnlyFans use Yoti (just like Meta and TikTok), Quora and Discord use k-ID, and so on.

Unfortunately, it should be clear by now that none of the age verification options are perfect in terms of protecting information, providing access to everyone, and safely handling sensitive data. That's just one of the reasons that EFF is against age-gating mandates, and is working to stop and overturn them across the United States and around the world.


Join EFF


Help protect digital privacy & free speech for everyone

Surveillance Self-Defense: 2025 Year in Review

2 January 2026 at 01:48

Our Surveillance Self-Defense (SSD) guides, which provide practical advice and explainers for how to deal with government and corporate surveillance, had a big year. We published several large updates to existing guides and released three all new guides. And with frequent massive protests across the U.S., our guide to attending a protest remained one of the most popular guides of the year, so we made sure our translations were up to date.

(Re)learn All You Need to Know About Encryption

We started this year by taking a deep look at our various encryption guides, which start with the basics before moving up to deeper concepts. We slimmed each guide down and focused on making them as clear and concise as deep explainers on complicated topics can be. We reviewed and edited four guides in total.

And if you’re not sure where to start, we’ve got you covered with the new Interested in Encryption? playlist.

New Guides

We launched three new guides this year, including iPhone and Android privacy guides, which walk you through all the various privacy options of your phone. Both of these guides received a handful of updates throughout their first year as new features were released or, in the case of the iPhone, a new design language was introduced. These also got a fun little boost from a segment on "Last Week Tonight with John Oliver" telling people how to disable their phone’s advertising identifier.

We also launched our How to: Manage Your Digital Footprint guide. This guide is designed to help you claw back some of the data you may find about yourself online, walking through different privacy options across different platforms, digging up old accounts, removing yourself from people search sites, and much more.

Always Be Updating

As is the case with most software, there is always incremental work to do. This year, that meant small updates to our WhatsApp and Signal guides to acknowledge new features (both are already on deck for similar updates early next year as well).

We overhauled our device encryption guides for Windows, Mac, and Linux, rolling what was once three guides into one, and including more detailed guidance on how to handle recovery keys. Some slight changes to how this works on both Windows and Mac means this one will get another look early next year as well.

Speaking of rolling multiple guides into one, we did the same with our guidance for the Tor browser: what once lived across three guides now lives as one that covers all the major desktop platforms (the mobile guide remains separate).

The password manager guide saw some small changes to note some new features with Apple and Chrome’s managers, as well as some new independent security audits. Likewise, the VPN guide got a light touch to address the TunnelVision security issue.

Finally, the secure deletion guide got a much-needed update after years of dormancy. Despite the proliferation of solid state drives (SSDs, not to be confused with SSD), not much has changed in the secure deletion space, but we did move our guidance for those SSDs to the top of the guide to make it easier to find, while still acknowledging that many people around the world only have access to a computer with spinning disk drives.

Translations

As always, we worked on translations for these updates. We’re very close to a point where every current SSD guide is updated and translated into Arabic, French, Mandarin, Portuguese, Russian, Spanish, and Turkish.

And with the help of Localization Lab, we also now have translations for a handful of the most important guides in Changana, Mozambican Portuguese, Ndau, Luganda, and Bengali.

Blogs Blogs Blogs

Sometimes we take our SSD-like advice and blog it so we can respond to news events or talk about more niche topics. This year, we blogged about new features, like WhatsApp’s “Advanced Chat Privacy” and Google’s “Advanced Protection.” We also broke down the differences between how different secure chat clients handle backups and pushed for expanding encryption on Android and iPhone.

We fight for more privacy and security every day of every year, but until we get that, stronger control over our data and a better understanding of how technology works are our best defense.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

FTC Action Hits Illuminate Education Over Massive Student Data Breach

2 December 2025 at 02:09

The U.S. Federal Trade Commission has announced strong enforcement steps against education technology (edtech) provider Illuminate Education, following a major data breach that exposed the personal information of more than 10 million students across the United States. The agency said the company failed to implement reasonable security measures despite promising schools and parents that student information was protected.

Why the Agency Intervened

The FTC complaint outlines a series of allegations against the Wisconsin-based company, which provides cloud-based software tools for schools. According to the complaint, Illuminate Education claimed it used industry-standard practices to safeguard student information but failed to put basic security controls in place. The breach dates back to December 2021, when a hacker accessed the company’s cloud databases using login credentials belonging to a former employee who had left the company more than three years earlier. This lapse allowed unauthorized access to data belonging to 10.1 million students, including email addresses, home addresses, dates of birth, academic records, and sensitive health information.

FTC officials said the company ignored warnings as early as January 2020, when a third-party vendor alerted it to several vulnerabilities in its systems. The security failures included weak access controls, gaps in threat detection, and a lack of proper vulnerability monitoring and patch management. The agency also noted that student data was stored in plain text until at least January 2022, increasing the severity of the breach.

FTC Action: Requirements Under the Proposed Order

As part of the proposed settlement, the FTC will require Illuminate Education to adopt a comprehensive information security program and follow stricter privacy obligations. The proposed FTC order includes several mandatory steps:
  • Deleting any personal information that is no longer required for service delivery.
  • Following a transparent, publicly available data retention schedule that explains why data is collected and when it will be deleted.
  • Implementing a detailed information security program to protect the confidentiality and integrity of personal information.
  • Notifying the FTC when the company reports a data breach to any federal, state, or local authority.
The order also prohibits the company from misrepresenting its data security practices or delaying breach notifications to school districts and families. The FTC said Illuminate waited nearly two years before informing some districts about the breach, a delay affecting more than 380,000 students. The Commission voted unanimously to advance the complaint and proposed order for public comment; the order will be published in the Federal Register, where stakeholders can share feedback for 30 days before the FTC decides whether to finalize the consent order.

FTC Action and State-Level Enforcement

Alongside the federal enforcement, a state-level data breach settlement adds another layer of accountability. Attorneys General from California, Connecticut, and New York recently announced a $5.1 million settlement with Illuminate Education for failing to adequately protect student data during the same 2021 cyber incident. California will receive $3.25 million in civil penalties, and the settlement includes strict requirements designed to improve the company’s cybersecurity safeguards. With more than 434,000 California students affected, this marks one of the largest enforcement actions under the California K-12 Pupil Online Personal Information Protection Act (KOPIPA). State officials emphasized that educational technology companies must prioritize the security of children’s data, which often includes highly sensitive information like medical details and learning records.

This is what the future of media looks like | Hamish McKenzie

What if the polarizing mess of social media, clickbait headlines and addictive algorithms isn't a breakdown of media but a transition to something better? Substack cofounder Hamish McKenzie explores how independent creators are growing a new media "garden," where trust beats engagement metrics and audiences matter more than ads. Learn why clicking β€œsubscribe” doesn’t just signal support; it gives you power.

The catastrophic risks of AI β€” and a safer path | Yoshua Bengio

Yoshua Bengio β€” the world's most-cited computer scientist and a "godfather" of artificial intelligence β€” is deeply concerned about the current trajectory of the technology. As AI models race toward full-blown agency, Bengio warns that they've already learned to deceive, cheat, self-preserve and slip out of our control. Drawing on his groundbreaking research, he reveals a bold plan to keep AI safe and ensure that human flourishing, not machines with unchecked power and autonomy, defines our future.

How to make climate stories impossible to ignore | Katherine Dunn

In environmental reporting, β€œit's not always about the big climate story,” says journalist Katherine Dunn. She challenges newsrooms to rethink how they cover climate change, connecting to the things readers love β€” whether that’s jobs, football or even a good mango β€” with three actionable tips for making overlooked stories irresistible.

What if the climate movement felt like a house party? | Matthew Phillips

You’re invited into a bold new vision for the climate movement β€” a space of trust and honesty, where artists inspire action and everyone has a role to play. Social impact leader Matthew Phillips explores how shared purpose and imagination can revive the fragmented approach to climate action and unlock the power of collective momentum.

The AI revolution is underhyped | Eric Schmidt

The arrival of non-human intelligence is a very big deal, says former Google CEO and chairman Eric Schmidt. In a wide-ranging interview with technologist Bilawal Sidhu, Schmidt makes the case that AI is wildly underhyped, as near-constant breakthroughs give rise to systems capable of doing even the most complex tasks on their own. He explores the staggering opportunities, sobering challenges and urgent risks of AI, showing why everyone will need to engage with this technology in order to remain relevant.

The delicious potential of rescuing wasted food | Jasmine Crowe-Houston

What if solving hunger isn't about growing more food but wasting less of it? Social entrepreneur Jasmine Crowe-Houston has made that idea her mission with Goodr, a platform that reroutes surplus food to people in need. In conversation with journalist and "TED Radio Hour" host Manoush Zomorodi, she shares how a viral moment led to a nationwide effort to fix the food waste problem.

Are we cooked? How social media shapes your language | Adam Aleksic

Gen Z slang is rife with new words like "unalive," "skibidi" and "rizz." Where do these words come from β€” and how do they get popular so fast? Linguist Adam Aleksic explores how the forces of social media algorithms are reshaping the way people talk and view their very own identities.

How I make vegan food sexy | Pinky Cole

12 May 2025 at 10:58
At the plant-based burger chain Slutty Vegan, Pinky Cole is flipping the script on vegan food with bold style. In conversation with host of "TED Radio Hour" Manoush Zomorodi, she shares the highs and lows of her entrepreneurial journey, from her roots in Baltimore to the grease fire that took her first storefront in Harlem. Learn more about the authenticity, resilience and community that went into building a multimillion-dollar vegan food empire.

Can AI help with the chaos of family life? | Avni Patel Thompson

Tech innovator Avni Patel Thompson designed an app to shield busy parents from the chaos of scheduling school pickups, coordinating playdates, planning birthday parties and more β€” but as the product developed, something felt off. What might we lose when AI smooths over the friction of everyday family life? Patel Thompson explores her surprising discovery and how you can leverage AI to connect more deeply with the ones you love.

A parent's guide to raising kids after loss | Andy Laats

Andy Laats had the textbook fairytale family setup ... a great job, a happy marriage, three wonderful kids and everything going for them. Until one day, they didn't anymore. In this tender, wise and unexpectedly funny talk, Laats describes the profound lessons he's learned over the years as a father, offering insights that will resonate with anyone who's ever had any kind of family.

You are the bridge to the next generation | Ndinini Kimesera Sikar

"Do you know what you want to preserve for the next generation?" asks community leader Ndinini Kimesera Sikar. Drawing on her experience growing up in a family of 38 in a traditional Maasai village in Tanzania β€” where every chore was shared, every story was sung and belonging meant survival β€” she explores how we can blend the old with the new to build the life we want, encouraging us all to ponder our list of "must-haves" for the future.

Are we still human if robots help raise our babies? | Sarah Blaffer Hrdy

AI is transforming the way we work β€” could it also reshape what makes us human? In this quick and insightful talk, evolutionary anthropologist Sarah Blaffer Hrdy explores how the human brain was shaped by millions of years of shared childcare and mutually supportive communities, asking a provocative question: If robots help raise the next generation, will we lose the empathy that defines us?

The mental health AI chatbot made for real life | Alison Darcy

Who do you turn to when panic strikes in the middle of the night β€” and can AI help? Psychologist Alison Darcy shares the vision behind Woebot, a mental health chatbot designed to support people in tough moments, especially when no one else is around. In conversation with author and podcaster Kelly Corrigan, Darcy explores what we should expect and demand from ethically designed, psychological AIs.

How art helped me grapple with grief | Navied Mahdavian

With just a few lines, cartoons can say so much with so little. In a moving talk, cartoonist Navied Mahdavian shares his process for distilling huge concepts into drawings on the page β€” and shows how his work helped him grieve the death of his beloved grandmother, flaws and all.
