NSFW ChatGPT? OpenAI plans “grown-up mode” for verified adults

28 October 2025 at 07:39

If you’ve had your fill of philosophical discussions with ChatGPT, CEO Sam Altman has news for you: the service will soon be able to engage in far less highbrow conversations of the sexual kind. That’s right—sexting is coming to ChatGPT. Are we really surprised?

It marks a change in sentiment for the company, which originally banned NSFW content. In an October 14 post on X, Altman said the company had kept ChatGPT “pretty restrictive” to avoid creating mental health issues for vulnerable users. But now, he says, the company has learned from that experience and feels ready to “experiment more.”

“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”

He added that by December, as age-gating expands, ChatGPT will “allow even more, like erotica for verified adults.”

This isn’t a sudden pivot. Things started to change at least as far back as May last year, when the company said in its Model Spec document that it was considering allowing ChatGPT to get a little naughty under the right circumstances.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.”

It followed up on that with another statement in a February 2025 update to the document, in which it started mulling a “grown-up mode” while drawing hard boundaries around minors, sexual deepfakes, and revenge porn.

A massive market

There’s no denying the money behind this move. Analysts believe people paid $2.7 billion worldwide for a little AI companionship last year, with the market expected to balloon to $24.5 billion by 2034—a staggering 24% annual growth rate.

AI “girlfriends” and “boyfriends” already span everything from video-based virtual partners to augmented reality companions that can call you. Even big tech companies have been getting in on it, with Elon Musk’s xAI launching a sexualized virtual companion called Ani that will apparently strip for you if you pester it enough.

People have been getting down and dirty with technology for decades, of course (phone sex lines began in the early 1980s, and cam sites have been a thing for years). But AI changes the scale entirely. There’s no limit to automation, no need for human operators, and no guarantee that the users on the other side know where the boundaries are.

We’re not judging, but the normal rules apply. This stuff is supposed to be for adults, which makes it more important than ever that parents monitor what their kids access online.

Privacy risk

Earlier this month, we covered how two AI companion apps exposed millions of private chat logs, including sexual conversations, after a database misconfiguration—a sobering reminder of how much intimate data these services collect.

It wasn’t the first time, either. Back in 2024, another AI girlfriend platform was breached, leaking users’ fantasies, chat histories, and profile data. That story showed just how vulnerable these apps can be when they mix emotional intimacy with poor security hygiene.

As AI companionship becomes mainstream, breaches like these raise tough questions about how safely this kind of data can ever really be stored.

For adults wanting a little alone time with an AI, remember to take regular breaks and the occasional sanity check. While Altman might think that OpenAI has “been able to mitigate the serious mental health issues,” experts still warn that relationships with increasingly lifelike AIs can create very real emotional risks.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Millions of (very) private chats exposed by two AI companion apps

10 October 2025 at 07:32

Researchers at Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations belonging to more than 400,000 users.

This is not the first time we’ve had to write about AI “girlfriends” exposing their secrets, and it probably won’t be the last. This latest incident is a reminder that not every developer takes user privacy seriously.

This was not a sophisticated hack that required any particular skill. All it took was knowing how to look for unprotected services. Researchers found a publicly exposed and unprotected real-time data streaming system: an Apache Kafka broker instance.

Think of it like a post office that stores and delivers confidential mail. Now, imagine the manager leaves the front doors wide open, with no locks, guards, or ID checks. Anyone can walk in, look through private letters and photos, and grab whatever catches their eye.

That’s what happened with the two AI apps. The “post office” (the Kafka broker) was left open on the internet without locks (no authentication or access controls). Anyone who knew its address could walk in and see every private message, photo, and purchase record.

The Kafka broker instance was handling real-time data streams for two apps, which are available on Android and iOS: Chattee Chat – AI Companion and GiMe Chat – AI Companion.
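
To get a sense of how low the bar is, here is a minimal sketch of what reading from an unauthenticated Kafka broker looks like, using the open-source kafka-python library and an entirely hypothetical broker address. No exploit is involved; anyone who can reach the port can do this:

```python
# Minimal sketch (hypothetical broker address) of how little effort an exposed,
# unauthenticated Kafka broker demands. Requires the kafka-python library.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    bootstrap_servers="203.0.113.10:9092",  # hypothetical exposed broker
    auto_offset_reset="earliest",           # start from the oldest retained message
    consumer_timeout_ms=5000,               # stop iterating once no new messages arrive
)

# List every topic the broker exposes, then read whatever flows through them.
topics = consumer.topics()
print(topics)

consumer.subscribe(topics=list(topics))
for message in consumer:
    print(message.topic, message.value[:80])  # message.value is raw bytes
```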

The exposed data belonged to over 400,000 people and included 43 million messages and over 600,000 images and videos. The content shared with and created by the AI models was not suitable for a work environment (NSFW), the researchers found.

Both apps were developed by Imagime Interactive Limited, a Hong Kong-based developer, but only Chattee gained significant popularity, with over 300,000 downloads, mostly in the US.

While the apps didn’t reveal names or email addresses, they did expose IP addresses and unique device identifiers, which attackers could combine with data from previous breaches to identify users.

The researchers concluded:

“Users should be aware that conversations with AI companions may not be as private as claimed. Companies hosting such apps may not properly secure their systems. This leaves intimate messages and any other shared data vulnerable to malicious actors, who leverage any viable opportunities for financial gain.”

It wouldn’t take a genius cybercriminal to combine this information with data from other breaches and turn it into material for sextortion.

The exposed information also shows that the developer’s revenue from the apps exceeded $1 million. If only a few of those dollars had been spent on security: securing a Kafka broker instance is neither technically difficult nor especially costly, since proper protection mostly requires configuration changes, not major purchases.
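
As an illustration of how small that configuration job is, here is a hedged sketch of the relevant settings in a broker’s server.properties file. None of these values come from the incident; hostnames, paths, and passwords are placeholders, and the exact settings depend on the Kafka version and deployment:

```properties
# Illustrative server.properties hardening for an Apache Kafka broker.
# Hostnames, paths, and passwords below are placeholders.

# Accept only TLS connections with SASL/SCRAM credentials (no anonymous plaintext)
listeners=SASL_SSL://0.0.0.0:9093
advertised.listeners=SASL_SSL://broker.example.com:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

# TLS keystore for the listener
ssl.keystore.location=/etc/kafka/ssl/broker.keystore.jks
ssl.keystore.password=change-this-placeholder

# Deny every request that is not explicitly allowed by an ACL
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```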

Leaks like this one can lead to harassment, reputational damage, financial fraud, and targeted attacks on users whose trust was abused—which does not make for happy customers.

Protecting yourself after a data breach

The leak has been closed after responsible disclosure by the researchers, but there is no guarantee they were the first to find out about the exposure. If you think you have been the victim of a data breach, here are steps you can take to protect yourself:

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover afterwards.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.
