Discord will limit profiles to teen-appropriate mode until you verify your age

10 February 2026 at 10:29

Discord announced it will put all existing and new profiles in teen-appropriate mode by default in early March.

The teen-appropriate profile mode will remain in place until users prove they are adults. Changing a profile to “full access” will require verification through Discord’s age inference model, a new system that runs in the background to help determine whether an account belongs to an adult without always requiring users to verify their age explicitly.

Savannah Badalich, Head of Product Policy at Discord, explained the reasoning:

“Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility. We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long term wellbeing for teens on the platform.”

Platforms have been facing growing regulatory pressure—particularly in the UK, EU, and parts of the US—to introduce stronger age-verification measures. The announcement also comes as concerns about children’s safety on social media continue to surface. In research we published today, parents highlighted issues such as exposure to inappropriate content, unwanted contact, and safeguards that are easy to bypass. Discord was one of the platforms we researched.

The problem in Discord’s case lies in the age-verification methods it’s made available, which require either a facial scan or a government-issued ID. Discord says that video selfies used for facial age estimation never leave a user’s device, but this method is known not to work reliably for everyone.

Identity documents submitted to Discord’s vendor partners are also deleted quickly—often immediately after age confirmation, according to Discord. But, as we all know, computers are very bad at “forgetting” things and criminals are very good at finding things that were supposed to be gone.

Besides all that, the effectiveness of this kind of measure remains an issue. Minors often find ways around systems—using borrowed IDs, VPNs, or false information—so strict verification can create a sense of safety without fully eliminating risk. In some cases, it may even push activity into less regulated or more opaque spaces.

As someone who isn’t an avid Discord user, I can’t help but wonder why keeping my profile teen-appropriate would be a bad thing. Let us know in the comments what your objections to this scenario would be.

I wouldn’t have to provide identification and what I’d “miss” doesn’t sound terrible at all:

  • Mature and graphic images would be permanently blocked.
  • Age-restricted channels and servers would be inaccessible.
  • DMs from unknown users would be rerouted to a separate inbox.
  • Friend requests from unknown users would always trigger a warning pop-up.
  • Speaking on server stages would be disabled.

Given the amount of backlash this news received, I’m probably missing something—and I don’t mind being corrected. So let’s hear it.

Note: All comments are moderated. Those including links and inappropriate language will be deleted. The rest must be approved by a moderator.


Discord Introduces Stronger Teen Safety Controls Worldwide

10 February 2026 at 04:19


Discord teen-by-default settings are now rolling out globally, marking a major shift in how the popular communication platform handles safety for users aged 13 to 17. The move signals a clear message from Discord: protecting teens online is no longer optional but expected. The Discord update applies to all new and existing users worldwide and introduces age-appropriate defaults, restricted access to sensitive content, and stronger safeguards around messaging and interactions. While Discord positions this as a safety-first upgrade, the announcement also arrives at a time when gaming and social platforms are under intense regulatory and public scrutiny.

What Discord Teen-by-Default Settings Actually Change

Discord, headquartered in San Francisco and used by more than 200 million monthly active users, says the new Discord teen-by-default settings are designed to create safer experiences without breaking the sense of community that defines the platform. Under the new system, teen users automatically receive stricter communication settings. Sensitive content remains blurred, access to age-restricted servers is blocked, and direct messages from unknown users are routed to a separate inbox. Only age-verified adults can change these defaults. The company says these measures are meant to protect teens while still allowing them to connect around shared interests like gaming, music, and online communities.

Age Verification, But With Privacy Guardrails

Age assurance sits at the core of the Discord teen-by-default settings rollout. Starting in early March, users may be asked to verify their age if they want to access certain content or change safety settings. Discord is offering multiple options: facial age estimation processed directly on a user’s device, or submission of government-issued ID through approved vendors. The company has also introduced an age inference model that runs quietly in the background to help classify accounts without always forcing verification. Discord stresses that privacy remains central. Video selfies never leave the device, identity documents are deleted quickly, and a user’s age status is never visible to others. In most cases, verification is a one-time process.

Why It Matters Now More Than Ever

The timing of the Discord teen-by-default settings rollout is no coincidence. In October 2025, Discord disclosed a data breach involving a third-party vendor that handled customer support and age verification. While Discord’s own systems were not breached, attackers accessed government ID photos submitted for age verification, limited billing data, and private support conversations. The incident reignited concerns about whether platforms can safely handle sensitive identity data—especially when minors are involved. For many users, that trust has not fully recovered. At the same time, regulators are tightening the screws. The U.S. Federal Trade Commission has publicly urged companies to adopt age verification tools faster. Platforms like Roblox are rolling out facial AI and ID-based age estimation, while Australia has gone further by banning social media use for children under 16. Similar discussions are underway across Europe.

Teen Safety Meets Public Skepticism

Not everyone is convinced. Online reaction, particularly on Reddit, has been harsh. Some users accuse Discord of hypocrisy, pointing to past breaches and questioning the wisdom of asking users to upload IDs to third-party vendors. Others see the changes as the beginning of the end for Discord’s open community model. There is also concern among game studios and online communities that rely heavily on Discord. If access becomes more restricted, some fear engagement could drop—or migrate elsewhere.

Giving Teens a Voice, Not Just Rules

To balance control with understanding, Discord is launching its first Teen Council, a group of 10–12 teens aged 13 to 17 who will advise the company on safety, product design, and policy decisions. The goal is to avoid guessing what teens need and instead hear it directly from them. This approach acknowledges a hard truth: safety tools only work if teens understand them and trust the platform using them.

A Necessary Shift, Even If It’s Uncomfortable

The Discord teen-by-default settings rollout reflects a broader industry reality. Platforms built for connection can no longer rely on self-reported ages and loose moderation. Governments, parents, and regulators are demanding stronger protections—and they are willing to step in if companies do not act. Discord’s approach won’t please everyone. But in today’s climate, doing nothing would be far riskier. Whether this move strengthens trust or fuels backlash will depend on how well Discord protects user data—and how honestly it continues to engage with its community.

Discord faces backlash over age checks after data breach exposed 70,000 IDs

9 February 2026 at 14:39

Discord is facing backlash after announcing that all users will soon be required to verify ages to access adult content by sharing video selfies or uploading government IDs.

According to Discord, it's relying on AI technology that verifies age on the user's device, either by evaluating a user's facial structure or by comparing a selfie to a government ID. Although government IDs will be checked off-device, the selfie data will never leave the user's device, Discord emphasized. Both forms of data will be promptly deleted after the user's age is estimated.

In a blog, Discord confirmed that "a phased global rollout" would begin in "early March," at which point all users globally would be defaulted to "teen-appropriate" experiences.


Email Bombs Exploit Lax Authentication in Zendesk

17 October 2025 at 07:26

Cybercriminals are abusing a widespread lack of authentication in the customer service platform Zendesk to flood targeted email inboxes with menacing messages that come from hundreds of Zendesk corporate customers simultaneously.

Zendesk is an automated help desk service designed to make it simple for people to contact companies for customer support issues. Earlier this week, KrebsOnSecurity started receiving thousands of ticket creation notification messages through Zendesk in rapid succession, each bearing the name of different Zendesk customers, such as CapCom, CompTIA, Discord, GMAC, NordVPN, The Washington Post, and Tinder.

The abusive missives sent via Zendesk’s platform can include any subject line chosen by the abusers. In my case, the messages variously warned about a supposed law enforcement investigation involving KrebsOnSecurity.com, or else contained personal insults.

Moreover, the automated messages that are sent out from this type of abuse all come from customer domain names — not from Zendesk. In the example below, replying to any of the junk customer support responses from The Washington Post’s Zendesk installation shows the reply-to address is help@washpost.com.

One of dozens of messages sent to me this week by The Washington Post.

Notified about the mass abuse of their platform, Zendesk said the emails were ticket creation notifications from customer accounts that configured their Zendesk instance to allow anyone to submit support requests — including anonymous users.

“These types of support tickets can be part of a customer’s workflow, where a prior verification is not required to allow them to engage and make use of the Support capabilities,” said Carolyn Camoens, communications director at Zendesk. “Although we recommend our customers to permit only verified users to submit tickets, some Zendesk customers prefer to use an anonymous environment to allow for tickets to be created due to various business reasons.”

Camoens said requests that can be submitted in an anonymous manner can also make use of an email address of the submitter’s choice.

“However, this method can also be used for spam requests to be created on behalf of third party email addresses,” Camoens said. “If an account has enabled the auto-responder trigger based on ticket creation, then this allows for the ticket notification email to be sent from our customer’s accounts to these third parties. The notification will also include the Subject added by the creator of these tickets.”
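The abuse path Camoens describes can be pictured as a single request body. This is an illustrative sketch only: the requester address, name, and subject below are placeholders, and the field layout follows Zendesk's public Requests API, which accepts anonymous ticket creation when a customer's instance is configured to allow it.

```python
import json

def build_anonymous_ticket(requester_email: str, subject: str, body: str) -> dict:
    """Payload an abuser could POST to a permissive Zendesk instance's
    /api/v2/requests.json endpoint, placing a third party's address in
    the requester field so the auto-responder emails them the attacker's
    chosen subject line from the customer's own domain."""
    return {
        "request": {
            "requester": {"name": "Anonymous", "email": requester_email},
            "subject": subject,           # echoed into the notification email
            "comment": {"body": body},
        }
    }

# Hypothetical example: the victim never contacted this company at all.
payload = build_anonymous_ticket(
    "victim@example.com",
    "Supposed law enforcement investigation",
    "placeholder body text",
)
print(json.dumps(payload, indent=2))
```

Because nothing in the flow verifies that the requester controls the submitted address, hundreds of such instances can be pointed at one inbox at once.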

Zendesk claims it uses rate limits to prevent a high volume of requests from being created at once, but those limits did not stop Zendesk customers from flooding my inbox with thousands of messages in just a few hours.

“We recognize that our systems were leveraged against you in a distributed, many-against-one manner,” Camoens said. “We are actively investigating additional preventive measures. We are also advising customers experiencing this type of activity to follow our general security best practices and configure an authenticated ticket creation workflow.”

In all of the cases above, the messaging abuse would not have been possible if Zendesk customers validated support request email addresses prior to sending responses. Failing to do so may make it easier for Zendesk clients to handle customer support requests, but it also allows ne’er-do-wells to sully the sender’s brand in service of disruptive and malicious email floods.

Scammers Unleash Flood of Slick Online Gaming Sites

30 July 2025 at 14:46

Fraudsters are flooding Discord and other social media platforms with ads for hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. Here’s a closer look at the social engineering tactics and remarkable traits of this sprawling network of more than 1,200 scam sites.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular social media personalities, such as Mr. Beast, who recently launched a gaming business called Beast Games. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.

An ad posted to a Discord channel for a scam gambling website that the proprietors falsely claim was operating in collaboration with the Internet personality Mr. Beast. Image: Reddit.com.

The gaming sites all require users to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. At the scam website gamblerbeast[.]com, for example, visitors can pick from dozens of games like B-Ball Blitz, in which you play a basketball pro who is taking shots from the free throw line against a single opponent, and you bet on your ability to sink each shot.

The financial part of this scam begins when users try to cash out any “winnings.” At that point, the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed. Those who deposit cryptocurrency funds are soon asked for additional payments.

However, any “winnings” displayed by these gaming sites are a complete fantasy, and players who deposit cryptocurrency funds will never see that money again. Compounding the problem, victims likely will soon be peppered with come-ons from “recovery experts” who peddle dubious claims on social media networks about being able to retrieve funds lost to such scams.

KrebsOnSecurity first learned about this network of phony betting sites from a Discord user who asked to be identified only by their screen name, “Thereallo.” The 17-year-old developer operates multiple Discord servers and said they began digging deeper after users started complaining of being inundated with misleading spam messages promoting the sites.

“We were being spammed relentlessly by these scam posts from compromised or purchased [Discord] accounts,” Thereallo said. “I got frustrated with just banning and deleting, so I started to investigate the infrastructure behind the scam messages. This is not a one-off site, it’s a scalable criminal enterprise with a clear playbook, technical fingerprints, and financial infrastructure.”

After comparing the code on the gaming sites promoted via spam messages, Thereallo found they all invoked the same API key for an online chatbot that appears to be in limited use or else is custom-made. Indeed, a scan for that API key at the threat hunting platform Silent Push reveals at least 1,270 recently-registered and active domains whose names all invoke some type of gaming or wagering theme.
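The pivot Thereallo used amounts to a simple clustering step: extract whatever chatbot key each site embeds in its pages, then group domains that share one. The `chatApiKey` pattern, domain names, and page snippets below are invented for illustration; a real hunt would fetch live pages and match the actual key format the chatbot vendor uses.

```python
import re
from collections import defaultdict

# Hypothetical key pattern; the real sites' embedding format would differ.
KEY_RE = re.compile(r'chatApiKey\s*[:=]\s*["\']([A-Za-z0-9_-]{16,})["\']')

def cluster_by_api_key(pages: dict[str, str]) -> dict[str, list[str]]:
    """Map each extracted API key to the domains whose HTML embeds it,
    revealing which seemingly independent sites share one backend."""
    clusters = defaultdict(list)
    for domain, html in pages.items():
        match = KEY_RE.search(html)
        if match:
            clusters[match.group(1)].append(domain)
    return dict(clusters)

# Invented sample pages standing in for fetched HTML.
pages = {
    "gamble-a.example": 'initChat({chatApiKey: "AAAA1111BBBB2222CCCC"})',
    "gamble-b.example": 'initChat({chatApiKey: "AAAA1111BBBB2222CCCC"})',
    "unrelated.example": 'initChat({chatApiKey: "ZZZZ9999YYYY8888XXXX"})',
}
print(cluster_by_api_key(pages))
```

A shared, unusual key is a strong linking signal precisely because legitimate unrelated operators would each have their own.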

The “verification deposit” stage of the scam requires the user to deposit cryptocurrency in order to withdraw their “winnings.”

Thereallo said the operators of this scam empire appear to generate a unique Bitcoin wallet for each gaming domain they deploy.

“This is a decoy wallet,” Thereallo explained. “Once the victim deposits funds, they are never able to withdraw any money. Any attempts to contact the ‘Live Support’ are handled by a combination of AI and human operators who eventually block the user. The chat system is self-hosted, making it difficult to report to third-party service providers.”

Thereallo discovered another feature common to all of these scam gambling sites [hereafter referred to simply as “scambling” sites]: If you register at one of them and then very quickly try to register at a sister property of theirs from the same Internet address and device, the registration request is denied at the second site.

“I registered on one site, then hopped to another to register again,” Thereallo said. The second site instead returned an error stating that a new account couldn’t be created for another 10 minutes.

The scam gaming site spinora dot cc shares the same chatbot API as more than 1,200 similar fake gaming sites.

“They’re tracking my VPN IP across their entire network,” Thereallo explained. “My password manager also proved it. It tried to use my dummy email on a site I had never visited, and the site told me the account already existed. So it’s definitely one entity running a single platform with 1,200+ different domain names as front-ends. This explains how their support works, a central pool of agents handling all the sites. It also explains why they’re so strict about not giving out wallet addresses; it’s a network-wide policy.”
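The behavior Thereallo describes is consistent with a single backend consulted by every front-end domain. The toy model below assumes that architecture; the window length, field names, and error strings are guesses for illustration, not the scammers' actual code.

```python
REGISTRATION_WINDOW = 600  # seconds; matches the observed "10 minutes" lockout

class SharedBackend:
    """One central store behind 1,200+ domain front-ends, keyed on IP
    and email rather than on the individual domain."""

    def __init__(self) -> None:
        self.last_seen_ip: dict[str, float] = {}
        self.known_emails: set[str] = set()

    def register(self, domain: str, ip: str, email: str, now: float) -> str:
        # An email seen anywhere in the network is rejected everywhere,
        # even on a domain the visitor has never touched.
        if email in self.known_emails:
            return "error: account already exists"
        last = self.last_seen_ip.get(ip)
        if last is not None and now - last < REGISTRATION_WINDOW:
            return "error: retry in 10 minutes"
        self.last_seen_ip[ip] = now
        self.known_emails.add(email)
        return f"registered on {domain}"

backend = SharedBackend()
print(backend.register("site-a.example", "203.0.113.7", "dummy@example.com", 0.0))
print(backend.register("site-b.example", "203.0.113.7", "other@example.com", 60.0))
```

Either rejection path, an IP throttle or a network-wide email check, only makes sense if all the domains talk to the same registry, which is exactly what the duplicate-account error on a never-visited site demonstrated.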

In many ways, these scambling sites borrow from the playbook of “pig butchering” schemes, a rampant and far more elaborate crime in which people are gradually lured by flirtatious strangers online into investing in fraudulent cryptocurrency trading platforms.

Pig butchering scams are typically powered by people in Asia who have been kidnapped and threatened with physical harm or worse unless they sit in a cubicle and scam Westerners on the Internet all day. In contrast, these scambling sites tend to steal far less money from individual victims, but their cookie-cutter nature and automated support components may enable their operators to extract payments from a large number of people in far less time, and with considerably less risk and up-front investment.

Silent Push’s Zach Edwards said the proprietors of this scambling empire are spending big money to make the sites look and feel like some fancy new type of casino.

“That’s a very odd type of pig butchering network and not like what we typically see, with much lower investments in the sites and lures,” Edwards said.

Here is a list of all domains that Silent Push found were using the scambling network’s chat API.
