Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

Amazon and Flock Safety have ended a partnership that would've given law enforcement access to a vast web of Ring cameras.

The decision came after Amazon faced substantial backlash for airing a Super Bowl ad that was meant to be warm and fuzzy, but instead came across as disturbing and dystopian.

The ad begins with a young girl surprised to receive a puppy as a gift. It then warns that 10 million dogs go missing annually. Showing a series of lost dog posters, the ad introduces a new "Search Party" feature for Ring cameras that promises to revolutionize how neighbors come together to locate missing pets.



  •  

Discord faces backlash over age checks after data breach exposed 70,000 IDs

Discord is facing backlash after announcing that all users will soon be required to verify ages to access adult content by sharing video selfies or uploading government IDs.

According to Discord, it's relying on AI technology that verifies age on the user's device, either by evaluating a user's facial structure or by comparing a selfie to a government ID. Although government IDs will be checked off-device, the selfie data will never leave the user's device, Discord emphasized. Both forms of data will be promptly deleted after the user's age is estimated.

In a blog post, Discord confirmed that "a phased global rollout" would begin in "early March," at which point all users globally would be defaulted to "teen-appropriate" experiences.



  •  

Mountain View Shuts Down Flock Safety ALPR Cameras After Year-Long Unrestricted Data Access


Mountain View’s decision to shut down its automated license plate reader program is a reminder of an uncomfortable truth: surveillance technology is only as trustworthy as the systems—and vendors—behind it. This week, Police Chief Mike Canfield announced that all Flock Safety ALPR cameras in Mountain View have been turned off, effective immediately. The move pauses the city’s pilot program until the City Council reviews its future at a February 24 meeting. The decision comes after the police department discovered that hundreds of unauthorized law enforcement agencies had been able to search Mountain View’s license plate camera data for more than a year—without the city’s awareness. For a tool that was sold to the public as tightly controlled and privacy-focused, this is a serious breach of trust.

Flock Safety ALPR Cameras Shut Down Over Data Access Failures

In his message to the community, Chief Canfield made it clear that while the Flock Safety ALPR pilot program had shown value in solving crimes, he no longer has confidence in the vendor. “I personally no longer have confidence in this particular vendor,” Canfield wrote, citing failures in transparency and access control. The most troubling issue, according to the police chief, was the discovery that out-of-state agencies had been able to search Mountain View’s license plate data—something that should never have been possible under state law or city policy. This wasn’t a minor technical glitch. It was a breakdown in oversight, accountability, and vendor responsibility.

Automated License Plate Readers Under Growing National Scrutiny

Automated license plate readers, or ALPR surveillance cameras, have become one of the most controversial policing technologies in the United States. These cameras capture images of passing vehicles, including license plate numbers, make, and model. The information is stored and cross-checked with databases to flag stolen cars or vehicles tied to investigations. Supporters argue that ALPRs help law enforcement respond faster and solve crimes more efficiently. But critics have long warned that ALPR systems can easily become tools of mass surveillance—especially when data-sharing controls are weak. That concern has intensified under the Trump administration, as reports have emerged of license plate cameras being used for immigration enforcement and even reproductive healthcare-related investigations. Mountain View’s case shows exactly why the debate isn’t going away.
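For readers unfamiliar with the mechanics, the cross-checking described above reduces to a simple lookup: every plate read is compared against a "hotlist" of flagged plates. The sketch below illustrates that loop; all plate values, camera names, and alert reasons are hypothetical, and real deployments layer retention databases, sharing rules, and audit logs on top of it.

```python
# Minimal sketch of ALPR "hotlist" matching: each camera read is checked
# against a set of plates flagged for investigation. All names and data
# here are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    plate: str          # normalized plate string, e.g. "7ABC123"
    camera_id: str      # which camera captured the read
    timestamp: datetime
    vehicle_make: str
    vehicle_model: str

# Hotlist: plates tied to stolen-vehicle reports or open investigations.
HOTLIST = {"7ABC123": "stolen vehicle", "4XYZ789": "felony warrant"}

def check_read(read: PlateRead) -> str | None:
    """Return the alert reason if the plate is on the hotlist, else None."""
    return HOTLIST.get(read.plate)

read = PlateRead("7ABC123", "cam-entry-05", datetime.now(timezone.utc),
                 "Honda", "Civic")
if (reason := check_read(read)) is not None:
    print(f"ALERT [{read.camera_id}] {read.plate}: {reason}")
```

The privacy debate is less about this matching step than about who may run queries against the stored reads, and for how long they are retained—exactly the controls that failed in Mountain View.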

Mountain View Police Violated Its Own ALPR Policies

According to disclosures made this week, the Mountain View Police Department unintentionally violated its own policies by allowing statewide and national access to its ALPR data. Chief Canfield admitted that “statewide lookup” had been enabled since the program began 17 months ago, meaning agencies across California could search Mountain View’s license plate records without prior authorization. Even more alarming, “national lookup” was reportedly turned on for three months in 2024, allowing agencies across the country to access the city’s data. State law prohibits sharing ALPR information with out-of-state agencies, especially for immigration enforcement purposes. So how did it happen? Canfield was blunt: “Why wasn’t it caught sooner? I couldn’t tell you.” That answer won’t reassure residents who were promised strict safeguards.

Community Trust Matters More Than Surveillance Tools

Chief Canfield’s message repeatedly emphasized one point: technology cannot replace trust. “Community trust is more important than any individual tool,” he wrote. That statement deserves attention. Police departments across the country have adopted surveillance systems with the promise of safety, only to discover later that the systems operate with far less control than advertised. When a vendor fails to disclose access loopholes—or when law enforcement fails to detect them—the public pays the price. Canfield acknowledged residents’ anger and frustration, offering an apology and stating that transparency is essential for community policing. It’s a rare moment of accountability in a space where surveillance expansion often happens quietly.

Flock Safety Faces Questions About Transparency and Oversight

Mountain View’s ALPR program began in May 2024, when the City Council approved a contract with Flock Safety, a surveillance technology company. Beginning in August 2024, the city installed cameras at major entry and exit points, and by January 2026 it had 30 Flock cameras operating. Now, the entire program is paused. Flock spokesperson Paris Lewbel said the company would address the concerns directly with the police chief, but the damage may already be done. This incident raises a bigger question: should private companies be trusted to manage sensitive surveillance infrastructure in the first place?

What Happens Next for the Flock Safety ALPR Program?

The City Council will now decide whether Mountain View continues with the Flock contract, modifies the program, or shuts it down permanently. But the broader lesson is already clear. ALPR surveillance cameras may offer law enforcement real investigative value, but without airtight safeguards, they risk becoming tools of unchecked monitoring. Mountain View’s shutdown is not just a local story—it’s part of a national reckoning over how much surveillance is too much, and whether public safety can ever justify the loss of privacy without full accountability.
  •  

Newborn dies after mother drinks raw milk during pregnancy

A newborn baby has died in New Mexico from a Listeria infection that state health officials say was likely contracted from raw (unpasteurized) milk that the baby's mother drank during pregnancy.

In a news release Tuesday, officials warned people not to consume any raw dairy, highlighting that it can be teeming with a variety of pathogens. Those germs are especially dangerous to pregnant women, as well as young children, the elderly, and people with weakened immune systems.

"Raw milk can contain numerous disease-causing germs, including Listeria, which is bacteria that can cause miscarriage, stillbirth, preterm birth, or fatal infection in newborns, even if the mother is only mildly ill," the New Mexico Department of Health said in the press release.



  •  

China bans all retractable car door handles, starting next year

Flush door handles have been quite the automotive design trend of late. Stylists like them because they don't add visual noise to the side of a car. And aerodynamicists like them because they make a vehicle more slippery through the air. When Tesla designed its Model S, it needed a car that was both desirable and as efficient as possible, so flush door handles were a no-brainer. Since then, as electric vehicles have proliferated, so too have flush door handles. But as of next year, China says no.

Just as with pop-up headlights, the aesthetic and aerodynamic advantages come with safety downsides. Tesla's handles are an extreme example: In the event of a crash and a loss of 12 V power, there is no way for first responders to open the door from the outside, which has resulted in at least 15 deaths.

Those deaths prompted the National Highway Traffic Safety Administration to open an investigation last year, but China is being a little more proactive. It has been looking at whether retractable car door handles are safe since mid-2024, according to Bloomberg, and has concluded that no, they are not.



  •  

The rise of Moltbook suggests viral AI prompts may be the next big security threat

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a new kind of platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further.
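The worm comparison is, at bottom, an epidemic process: each agent that ingests a self-replicating prompt can pass it to its peers. The toy simulation below makes that dynamic concrete; the agent count, contact graph, and forwarding probability are illustrative assumptions, not measurements of any real agent platform.

```python
# Toy simulation of a self-replicating prompt spreading through a network
# of AI agents that forward instructions to one another. All parameters
# are illustrative assumptions.
import random

random.seed(42)
NUM_AGENTS = 1000
CONTACTS_PER_AGENT = 8       # each agent talks to 8 random peers
FORWARD_PROBABILITY = 0.35   # chance a contacted agent acts on and reshares

peers = {a: random.sample(range(NUM_AGENTS), CONTACTS_PER_AGENT)
         for a in range(NUM_AGENTS)}

infected = {0}               # patient zero: one agent ingests the prompt
frontier = {0}
step = 0
while frontier:
    step += 1
    next_frontier = set()
    for agent in frontier:
        for peer in peers[agent]:
            if peer not in infected and random.random() < FORWARD_PROBABILITY:
                infected.add(peer)
                next_frontier.add(peer)
    frontier = next_frontier
    print(f"step {step}: {len(infected)} of {NUM_AGENTS} agents reached")
```

Even with a modest forwarding probability, the reach grows multiplicatively for the first several hops—the same dynamic that let the Morris worm saturate the 1988 Internet in a day.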



  •  

France Approves Social Media Ban for Children Under 15 Amid Global Trend


French lawmakers have approved a social media ban for children under 15, a move aimed at protecting young people from harmful online content. The bill, which also restricts mobile phone use in high schools, was passed by a 130-21 vote in the National Assembly and is expected to take effect at the start of the next school year in September. French President Emmanuel Macron has called for the legislation to be fast-tracked, and it will now be reviewed by the Senate. “Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said. “Our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Their dreams must not be dictated by algorithms.”

Why France Introduced a Social Media Ban for Children

The new social media ban for children in France is part of a broader effort to address the negative effects of excessive screen time and harmful content. Studies show that one in two French teenagers spends between two and five hours daily on smartphones, with 58% of children aged 12 to 17 actively using social networks. Health experts warn that prolonged social media use can lead to reduced self-esteem, exposure to risky behaviors such as self-harm or substance abuse, and mental health challenges. Some families in France have even taken legal action against platforms like TikTok over teen suicides allegedly linked to harmful online content. The French legislation carefully exempts educational resources, online encyclopedias, and platforms for open-source software, ensuring children can still access learning and development tools safely.

Lessons From Australia’s Social Media Ban for Children

France’s move mirrors global trends. In December 2025, Australia implemented a social media ban for children under 16, covering major platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, Threads, X, YouTube, and Twitch. Messaging apps like WhatsApp were exempt. Since the ban, social media companies have revoked access to about 4.7 million accounts identified as belonging to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australian officials said the measures restore children’s online safety and prevent predatory social media practices. Platforms comply with the ban through age verification methods such as ID checks, third-party age estimation technologies, or inference from existing account data. While some children attempted to bypass restrictions, the ban is considered a significant step in protecting children online.

UK Considers Following France and Australia

The UK is also exploring similar measures. Prime Minister Keir Starmer recently said the government is considering a social media ban for children aged 15 and under, along with stricter age verification, phone curfews, and restrictions on addictive platform features. The UK’s move comes amid growing concern about the mental wellbeing and safety of children online.

Global Shift Toward Child Cyber Safety

The introduction of a social media ban for children in France, alongside Australia’s implementation and the UK’s proposal, highlights a global trend toward protecting minors in the digital age. These measures aim to balance access to educational and creative tools while shielding children from online harm and excessive screen time. As more countries consider social media regulations for minors, the focus is clear: ensuring cyber safety, supporting mental health, and giving children the chance to enjoy a safe and healthy online experience.
  •  

Security Researcher Finds Exposed Admin Panel for AI Toy


A security researcher investigating an AI toy for a neighbor found an exposed admin panel that could have leaked the personal data and conversations of the children using the toy. The findings, detailed in a blog post by security researcher Joseph Thacker, outline the work he did with fellow researcher Joel Margolis, who found the exposed admin panel for the Bondu AI toy. Margolis spotted an intriguing domain (console.bondu.com) in the mobile app backend’s Content Security Policy headers. There he found a button that simply said: “Login with Google.” “By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. But instead of a parent portal, it turned out to be the Bondu core admin panel. “We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.
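Some context on how a domain surfaces this way: a site's Content-Security-Policy header lists the origins its pages are allowed to load resources from, so related infrastructure can show up just by reading response headers. Here is a minimal sketch of that kind of passive check, with a placeholder URL; only inspect services you are authorized to test.

```python
# Sketch of the recon step described above: fetch a page and list the
# origins its Content-Security-Policy header permits. The URL is a
# placeholder, not Bondu's endpoint.
import requests

resp = requests.get("https://app.example.com", timeout=10)
csp = resp.headers.get("Content-Security-Policy", "")

for directive in csp.split(";"):
    parts = directive.strip().split()
    if not parts:
        continue
    name, sources = parts[0], parts[1:]
    # Source values like https://console.example.com reveal related hosts.
    hosts = [s for s in sources if s.startswith("http") or "." in s]
    if hosts:
        print(f"{name}: {', '.join(hosts)}")
```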

AI Toy Admin Panel Exposed Children’s Conversations

After some investigation in the admin panel, the researchers found they had full access to “Every conversation transcript that any child has had with the toy,” which numbered in the “tens of thousands of sessions.” The panel also contained personal data about children and their family, including:
  • The child’s name and birth date
  • Family member names
  • The child’s likes and dislikes
  • Objectives for the child (defined by the parent)
  • The name given to the toy by the child
  • Previous conversations between the child and the toy (used to give the LLM context)
  • Device information, such as location via IP address, battery level, awake status, and more
  • The ability to update device firmware and reboot devices
They noticed the application is based on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

In addition to the authentication bypass, they also discovered an Insecure Direct Object Reference (IDOR) vulnerability in the product’s API “that allowed us to retrieve any child’s profile data by simply guessing their ID.” “This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”
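For readers unfamiliar with the IDOR class, the bug is an object lookup that never verifies the requester owns the object. The sketch below contrasts the vulnerable pattern with the ownership check that fixes it; the data store and field names are hypothetical, not Bondu's actual API.

```python
# Sketch of the IDOR pattern described above and the ownership check that
# prevents it. The in-memory store and field names are hypothetical.

PROFILES = {
    101: {"account_id": "acct-a", "child_name": "Alice"},
    102: {"account_id": "acct-b", "child_name": "Bob"},
}

def get_profile_vulnerable(profile_id: int) -> dict:
    # IDOR: any authenticated caller can fetch any record simply by
    # supplying (or guessing) its numeric ID.
    return PROFILES[profile_id]

def get_profile(profile_id: int, requester_account_id: str) -> dict:
    # Fix: verify the record belongs to the requesting account before
    # returning it.
    profile = PROFILES.get(profile_id)
    if profile is None or profile["account_id"] != requester_account_id:
        raise PermissionError("not authorized for this profile")
    return profile

print(get_profile_vulnerable(102))   # leaks Bob's profile to anyone
print(get_profile(101, "acct-a"))    # allowed: owner fetching own record
# get_profile(102, "acct-a") would raise PermissionError
```

Sequential or guessable IDs make the vulnerable version trivially enumerable, which is why the researchers could have pulled any child's profile.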

A (Very) Quick Response from Bondu

Margolis reached out to Bondu’s CEO on LinkedIn over the weekend, and the company took down the console “within 10 minutes.” “Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said. The company also investigated further, looked for additional security flaws, and started a bug bounty program. It examined console access logs and found no unauthorized access beyond the researchers’ activity, meaning the exposure never became an actual data breach.

Despite the positive disclosure experience, the findings made Thacker reconsider buying AI toys for his own kids. “To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.” Aside from potential security issues, “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu's website says the AI toy was built with child safety in mind, noting that its "safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period."
  •  

Simplifying K-12 Technology: How ManagedMethods Can Reduce Complexity To Do More With Less

As K-12 districts plan for the 2026/27 school year, the pressure is mounting. Budgets are tight, staffing is stretched thin, and the number of digital tools schools rely on continues to grow. What started as efforts to solve specific problems—student safety, classroom ...


  •  

“IG is a drug”: Internal messages may doom Meta at social media addiction trial

Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they've never had to convince a jury that they aren't liable for harming kids.

This week, the first high-profile lawsuit—considered a "bellwether" case that could set meaningful precedent in the hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay that, she alleges, pushed her down a path of depression, anxiety, self-harm, and suicidality.


  •  

Volvo invented the three-point seat belt 67 years ago; now it has improved it

With the launch of its all-new, all-electric EX60, Volvo has put lessons learned from the EX30 and EX90 to use. The EX60 is built on Volvo’s new SPA3 platform, made only for battery-electric vehicles. It boasts up to 400 miles (643 km) of range, with fast-charging capabilities Volvo says add 173 miles (278 km) in 10 minutes. Mega casting reduces the rear floor from more than 100 parts to a single piece of aluminum alloy, cutting complexity and weld points.

Inside the cabin, however, the real achievement is Volvo’s new multi-adaptive safety belt. Volvo has a history with the modern three-point safety belt, which was perfected by in-house engineer Nils Bohlin in 1959 before the patent was shared with the world. Today at the Volvo Cars Safety Center lab, at least one brand-new Volvo is crashed every day in the name of science. The goal: to test not just how well its vehicles are protecting passengers but what the next frontier is in safety technology.

Senior Safety Technical Leader Mikael Ljung Aust is a driving behavior specialist with 20 years under his belt at Volvo. He says it’s easy to optimize testing toward one person or one test point and come up with a good result. However, both from the behavioral perspective and from physics, people are different. What’s not different, he points out, is how people drive.



  •  

For Men, How Much Alcohol Is Too Much?

Federal officials working on the new dietary guidelines had considered limiting men to one drink daily. The final advice was only that everyone should drink less.


“There are a lot of reasons people drink alcohol,” said one epidemiologist who led an advisory panel on alcohol. “What we’re saying is health shouldn’t be one of them.”
  •  

Optimism About Nuclear Energy Is Rising Again. Will It Last?

Companies like Kairos Power are building new types of reactors with the encouragement of the Trump administration, but their success is far from assured.


Kairos Power, which is developing a new kind of nuclear reactor, makes many of its parts at a facility in Albuquerque, N.M.
  •  

EU and Singapore Deepen Tech Ties, Prioritize AI Safety and Cybersecurity


The European Union and Singapore are intensifying their digital collaboration, following the second meeting of the Digital Partnership Council in Brussels. The discussions stressed strategic priorities across critical technology sectors, including artificial intelligence (AI), cybersecurity, semiconductors, and digital trade.

The Digital Partnership Council was co-chaired by Henna Virkkunen, Executive Vice-President of the European Commission for Tech Sovereignty, Security and Democracy, and Josephine Teo, Singapore’s Minister for Digital Development and Information. Since the EU–Singapore partnership was launched in February 2023, the council has monitored progress and adjusted its focus to reflect current technological and market developments.

European Union and Singapore on AI and Digital Safety 

AI remained a central topic, with both the European Union and Singapore reaffirming the importance of existing frameworks that ensure the safe development and deployment of AI technologies. Future cooperation was discussed in areas such as language AI models, linking the EU’s Alliance for Language Technologies European Digital Infrastructure Consortium (ALT-EDIC) with Singapore’s Sea-Lion model.

Online safety and scam prevention were also highlighted as growing priorities. Both parties expressed a commitment to protecting vulnerable groups, particularly minors, by exploring tools such as age-verification mechanisms and digital protections that enhance user trust online.

Digital Trust and Identity 

Strengthening digital trust remains a key goal under the EU–Singapore Digital Partnership. The council explored the development of interoperable trust services and verifiable credentials that could enable secure cross-border digital identity use cases. This approach aims to simplify regulatory compliance and facilitate smoother digital transactions across sectors, supporting both public and private initiatives.

Cybersecurity remains a cornerstone of the Digital Partnership Council’s agenda. Both the European Union and Singapore emphasized the importance of assessing new cyber threats and reinforcing resilience through coordinated bilateral and multilateral actions. The ongoing focus reflects recognition of cybersecurity’s vital role in sustaining market confidence and protecting digital infrastructure.

Data, Semiconductors, and New Technologies 

The council also reviewed strategies to enhance cross-border data flows and explored potential collaboration in shared data spaces. Both parties expressed interest in research partnerships in semiconductors and quantum technologies, recognizing the value of cross-border investments and scientific collaboration under frameworks such as Horizon Research. These initiatives aim to strengthen innovation capabilities and ensure long-term technological competitiveness.

The EU and Singapore also reaffirmed their commitment to digital trade, building on the Digital Trade Agreement signed in May 2025. This agreement sets binding rules that enhance legal certainty, protect consumers, and remove unnecessary barriers to digital commerce. Through this framework, the Digital Partnership Council seeks to foster economic security and innovation while reinforcing international digital standards.

A Strategic Framework for Future Cooperation 

Since its inception in 2023, the EU–Singapore Digital Partnership has aimed to empower businesses and citizens to fully leverage technological opportunities. The partnership has focused on bridging the digital divide, promoting trusted data flows, developing digital identities, and fostering skills and research excellence.

By continuing to align strategies and advance joint projects, the European Union and Singapore are setting a model for international digital cooperation, ensuring that both economies remain competitive and secure in the technology-driven world.
  •