They actually did it. OpenAI officially deprecated GPT-4o on Friday, despite the model's particularly passionate fan base. This news shouldn't have been such a surprise. In fact, the company announced that Feb. 13 would mark the end of GPT-4o—as well as models like GPT-4.1, GPT-4.1 mini, and o4-mini—just over two weeks ago. However, whether you're one of the many who are attached to this model, or you simply know how dedicated 4o's user base is, you might be surprised OpenAI actually killed its most agreeable AI.
This isn't the first time the company depreciated the model, either. OpenAI previously shut down GPT-4o back in August, to coincide with the release of GPT-5. Users quickly revolted against the company, some because they felt GPT-5 was a poor upgrade compared to 4o, while others legitimately mourned connections they had developed with the model. The backlash was so strong that OpenAI relented, and rereleased the models it had deprecated, including 4o.
If you're a casual ChatGPT user, you might just use the app as-is, and assume the newest version tends to be the best, and wonder what all the hullabaloo surrounding these models is all about. After all, whether it's GPT-4o, or GPT-5.2, the model spits out generations that read like AI, complete with flowery word choices, awkward similes, and constant affirmations. 4o, however, does tend to lean even more into affirmations than other models, which is what some users love about it. But critics accuse it of being too agreeable: 4o is at the center of lawsuits accusing ChatGPT of enabling delusional thinking, and, in some cases, helping users take their own lives. As TechCrunch highlights, 4o is OpenAI's highest-scoring model for sycophancy.
I'm not sure where 4o's most devoted fans go from here, nor do I know how OpenAI is prepared to deal with the presumed backlash to this deprecation. But I know it's not a good sign that so many people feel this attached to an AI model.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Ring isn't having the week it probably thought it would have. The Amazon-owned company aired an ad on Super Bowl Sunday for "Search Party," its new feature that turns a neighborhood's collective Ring cameras into one network, with the goal of locating lost dogs. Viewers, however, saw this as a major privacy violation—it doesn't take much to imagine using this type of surveillance tech to locate people, not pets.
The backlash wasn't just isolated to the ad, however. The controversy reignited criticisms of the company's partnership with Flock Safety, a security company that sells security cameras that track vehicles, notably for license plate recognition. But the partnership with Ring wasn't about tracking vehicles: Instead, Flock Safety's role was to make it easier for law enforcement agencies that use Flock Safety software to request Ring camera footage from users. Agencies could put in a request to an area where a crime supposedly took place, and Ring users would be notified about the request. They didn't have to agree to share footage, however.
Law enforcement could already request footage from Ring users, through the platform's existing "Community Requests" feature. But this partnership would let agencies make these requests directly through Flock Safety's software. If a user submitted footage following a request, Ring said that data would be "securely packaged" by Flock Safety and share to the agency through FlockOS or Flock Nova.
Ring cancels its partnership with Flock Safety
That partnership is officially over. On Friday, Ring published a blog post announcing the end of its relationship with Flock Safety. The company said, after a review, the integration "would require significantly more time and resources than anticipated." As such, both parties have agree to cancel the partnership.
Importantly, Ring says that since the integration never actually launched, no user footage was ever sent to Flock Safety—despite the company announcing the partnership four months ago. Social media influencers had spread the false claim that Flock Safety was seeding Ring footage directly to law enforcement agencies, such as ICE. While those claims are inaccurate, they were likely fueled by reporting from 404 Media that ICE has been able to access Flock Safety's data in its investigations. Had Ring's partnership with Flock Safety gone ahead, there would be legitimate cause to believe that agencies like ICE could tap into the footage Ring users had shared—even if those users were under the impression they were only sharing footage with local agencies to solve specific cases.
While privacy advocates will likely celebrate this news, the cancelled partnership has no effect on Community Requests. Law enforcement agencies will still be able to request footage from Ring users, and those users will still have a say in whether or not they send that footage. Ring sees the feature as an objective good, allowing users to voluntarily share footage that could help law enforcement solve important cases. In its announcement on Friday, Ring cited the December 2025 Brown University shooting, in which seven users shared 168 video clips with law enforcement. According to Ring, one of those videos assisted police in identifying the suspect's car, which, in turn, solved the case.
When it's time to buy a new car, you don't necessarily need to stick with the one you had before. You don't lose your cloud-based photos by switching from Toyota to Subaru, nor will your friends yell at you for ruining the group chat by buying a Kia. That's not the case with smartphones: When you buy an iPhone, it's tough to switch away from it. The same goes for Android: While it's easy enough to switch within the Android ecosystem, such as between Pixel or Galaxy, moving from Android to iPhone can also be a pain. Tech companies tend to make it tempting to stick with their platform, and introduce friction when you try to leave.
That, of course, is entirely business-based. Apple hasn't traditionally made it easy to move to Android, because, well, you might actually do it. It doesn't have to be this way, either. There's nothing inherent to smartphones that should make it so challenging to break out of any particular ecosystem. All it takes is some intentional design: If smartphones were made to be traded, you could migrate from one to another, without worrying about losing pictures, messages, or any other important data or processes.
It's now easier than ever to switch between iPhone and Android
As it happens, that intentional design is here. Apple and Google actually worked together to make it easier to transfer data between iPhone and Androids, which makes switching between the two platforms more seamless.
News first broke about this partnership back in December, and, at that time, Google released some of this progress as part of the latest Android Canary, the company's earliest pre-release software. Shortly after, Apple released the first beta for iOS 26.3, which featured the transfer tool. Now, iOS 26.3 is here, and with it, an easier way to switch from iPhone to a device made by Google, Samsung, or any other Android OEM.
How to use the new iPhone-to-Android option in iOS 26.3
The feature seems easy enough to use. Once you update your iPhone to iOS 26.3, you can head to Settings > General, then scroll down to "Transfer or Reset iPhone." Tap this option, then choose "Transfer to Android." Here, iOS will present a pop-up, telling you to place your iPhone next to your new Android device, where you can transfer photos, messages, and apps, among other data points. (That said, health data, devices paired with Bluetooth, and "protected items" cannot be transferred.)
You'll need to make sure both devices are running the latest updates, are connected to wifi, and have Bluetooth enabled. However, Apple also says your Android device should be in the "setup process," which means you likely won't be able to use this feature if your Android phone is already set up. From here, your iPhone will ask you to scan a QR code that should appear on your Android device. Alternatively, you'll be able to tap "Other Options" on your iPhone to enter the Session ID and Pairing Code that should appear on your Android.
Now, you can choose the data you want to transfer, including photos, contacts, calendars, call history, and messages. Tap "Continue" once complete, then choose to transfer your eSIM, if applicable. (You'll need to double-click the side of your iPhone when prompted to complete the eSIM transfer.) This works in the other direction too, though Apple says you do still need to use the Move to iOS app on Android—at least until Google sets up a similar protocol on its end.
More flexibility from Apple and Google is better for everyone
Apple and Google might not be motivated by charity, of course, as the EU has been cracking down on restrictive practices by tech companies in recent years. But while both companies may see this as a way to lose customers, it's also a way to gain them: Sure, some iPhone users may switch to Android if it's easier to do so, but some Android users may do the reverse for the same reasons.
More choice is good for everyone—even if it doesn't guarantee exponential growth to shareholders.
If you're an Apple fan who closely follows tech news, you might have been looking forward to Siri's big AI overhaul for some time now—specifically, since the company initially announced it at WWDC 2024. But despite delay after delay, rumors have strongly suggested that the next generation of Siri is set to launch with iOS 26.4. And seeing as Apple just released iOS 26.3 this week, AI Siri is closer than ever, right? Wrong.
As reported by Bloomberg's Mark Gurman, Apple has once again kicked Siri's big updates down the road. According to Gurman, the company really did intend to release AI Siri with iOS 26.4, which is reportedly planned to release sometime in March. However, due to testing "snags," the company is instead planning to break up Siri's major updates and distribute them across several iOS updates. Gurman notes that likely means iOS 26.5, which could launch in May, and iOS 27, which will likely release in September, if it follows Apple's usual release dates. But looking at Apple's track record here, don't hold your breath.
AI Siri's upcoming features are a struggle
According to Gurman's sources, Apple is struggling to get Siri to "properly process queries," or to actually respond fast enough, both of which would defeat the purpose of using a smart assistant. Apple is reportedly pushing engineers to use iOS 26.5 to test these features, particularly the ability for Siri to use your personal data to answer questions. Users may be able to flip a switch in Settings to "preview" these features, and may treat the rollout as a beta.
Engineers are also struggling to get Siri's app intents to work, or the feature that lets Siri take actions on your behalf. You could ask Siri to open an image, edit it, then share it with a friend, but only if the feature itself actually works. This, too, may roll out with iOS 26.5, but it's unclear due to reliability issues. Siri is also cutting off user prompts too soon, and sometimes taps into ChatGPT instead of using Apple's underlying tech—which would look pretty bad for the company.
Apple is also testing new AI features for iOS 26.5 that we haven't heard of yet. One is a new web search tool that functions like other AI search features from companies like Perplexity and Google. You ask a question to search on the web, and it returns a report with summaries and links. The other new feature is a custom image generation tool, that builds on Image Playground, but that too is hitting development hurdles.
Looking even further ahead, Apple is planning more Siri advancements—namely, giving the assistant chatbot features, à la ChatGPT. (That said, it will reportedly use Gemini to power these features.) This version of Siri may even have its own app.
What's going on with AI Siri?
It seems Siri really is Apple's albatross. Despite arguably popularizing smart assistants for the general population, Siri quickly fell behind compared to the likes of Alexa and Gemini (née Google Assistant). Now, the latter have fully embraced modern generative AI, offering features like contextual awareness and natural language commands. While Amazon and Google users can ask their assistants increasingly complicated questions, Siri still feels designed mostly to handle setting alarms and checking the weather.
That was going to change with iOS 18, alongside Apple Intelligence as a whole. Apple's initial pitch for AI Siri was an assistant that could see what's on your phone to better understand questions you ask, and take actions on your behalf—i.e., app intents. You could ask Siri to edit an image you have pulled up on your Photos app, and because the assistant is contextually aware, it would know what image you mean, and apply the edits you ask for. Or, you could ask when your friend was set to arrive, and the assistant would be able to scan messages and emails to know that, one, your friend is visiting town this weekend, and two, that they sent you their flight itinerary that gets them into the airport at 3:55 p.m.
This Siri has never launched, however. While the company has rolled out iterative updates to Siri with some AI-powered features, its overhaul with these ambitious features have been a trial for Apple's AI team. It all stems from Apple's issues with AI in general: The company was caught off guard by the generative AI wave kicked off in late 2022 by OpenAI's ChatGPT, and following some resistance from corporate leadership, have been scrambling to keep up ever since. Apple Intelligence launched half-baked with issues of its own, but rather than launch a half-baked AI Siri, the company has been struggling to build up the assistant internally.
Part of the problem is privacy-related: Unlike other tech companies, who have no problem hoovering up user data to train their models with, Apple still wants to preserve privacy while rolling out AI features. As such, that complicates their situation, as they need to ensure both the hardware and software involved meet those standards. You can't have Siri pull user data into the cloud without strict security measures if you want to ensure your users' data remains private. The company is also focused on building its own hardware for cloud-based AI processing, rather than focus on simply buying up GPUs as many other companies have.
Apple is the second most valuable tech company in the world, but a host of factors—including with software, hardware, and leadership—have made it so even Apple can't magically produce an AI assistant. Though, I'm not sold that an AI Siri will move units for Apple in the first place. I can't imagine Gemini moves people to Android, and you can download ChatGPT on any device you own. It's even now built into your iPhone.
If your chat app of choice is Telegram, you have some changes to look forward to. The company announced a number of updates this week, chief among them a new look for its Android and iOS apps—the former is getting a total overhaul, but iPhone users will still note some new UI elements when chatting with friends on Telegram. There are also some new features, including one I find a little odd.
There's a new look for Telegram on mobile
Credit: Telegram
Per the announcement, the biggest changes come via UI updates, particularly on Android. Telegram says that its Android app has a "fully redesigned interface" intended to make navigating the app quicker and more intuitive—Telegram notes the interface code itself was entirely rebuilt to meet these goals. Changes include a new iOS-like bottom bar that lets you switch between your chats, settings, and profile, among other functionality. If you find the new interface effects to be too much, or too big a draw on your battery, Telegram says you can adjust them from Settings > Power Saving.
The company also updated its iOS app, though not quite in the same way. Telegram says it added "even more Liquid Glass" to its iOS app, Apple's new design language for iOS 26, including a redesigned media viewer, sticker and emoji pack preview panels, and new context menus in profiles when choosing messages.
You can now transfer a Telegram group chat to a new admin
Credit: Telegram
As of this latest update, if the owner of a Telegram group chat leaves that conversation, the ownership of the group will transfer automatically to one of the group's admins after a week. However, the departing group owner can also choose their own admin if they want to, as Telegram now presets an option to appoint another admin when leaving the group. The company adds that admins can transfer ownership at any time, even if they don't leave.
Telegram bots are now more colorful
Credit: Telegram
A small change, but if you develop bots for Telegram, you now have the option to add colors and emojis to those buttons. It's far from a radical update, but it could make it a bit easier to tell options apart at a glance.
There's a new send message shortcut on the Telegram iPad app
A small but noteworthy feature for any Telegram users on iPad: Now, you can send a message using the shortcut "Command + Enter."
You can now "craft" gifts in Telegram
And here's that weird new feature: I don't usually expect my chat app to offer collectible gifts, but apparently Telegram does. It previously introduced the ability to send gifts to other people, and even upgrade those gifts to collectibles that can be auctioned on NFT marketplaces (which requires real money).
Now, Telegram is reportedly expanding this gift system in the latest update with the ability to combine existing gifts in a new "crafting" system to create "Uncommon, Rare, Epic or Legendary" versions. You can combine up to four gifts at once, and adding multiple gifts with the same "attributes" raises the chance the crafted gift will have that attribute as well. Again, this is the last thing I'd expect or want from a chat app, but, as it's part of this update, so I'm telling you about it.
Like millions of Americans, I've been watching the news of Nancy Guthrie's disappearance with concern—so I was somewhat relieved when the FBI announced they were releasing new footage of a suspect. Finally, the case had something to go on, even if it was only doorbell video of a masked stranger.
When I saw the footage, I assumed this was something the FBI had in their possession since the beginning, and had finally decided to release to the public. But that's not what happened at all. If you have been following this case closely, you may know that law enforcement had previously confirmed that Guthrie's Google Nest camera was disconnected (presumably by the perpetrator), and that she did not have a subscription that would store video either on the doorbell or in the cloud. Yet despite the fact the doorbell should have been a dead end, the FBI has seemingly produced this video out of thin air.
If you have a Google Nest device in or attached to your home, this might give you pause. Sure, it's one thing if law enforcement is able to obtain video from your subscription or from the device itself. But if you don't keep video records on your Nest, it seems it is still possible to retrieve the footage. How did the FBI do this, and what does it mean for the privacy of your Nest devices?
The FBI likely pieced the video together from fragments
The short answer is that we don't really know for sure how the FBI got the footage, but there are a few leads. According to FBI Director Kash Patel, the Google Nest footage was recovered "from residual data located in backend systems." That's pretty vague, though the FBI isn't necessarily known for its transparency.
According to experts that spoke to NBC News, however, it is possible to obtain data from the "complex infrastructure" of cloud-based cameras, including Google Nest devices. Retired FBI agent Timothy Gallagher told NBC News that Guthrie's Nest camera might have sent images to Google's cloud service, or at least stored data points locally throughout the hardware of the device, even though she wasn't paying for a Nest subscription. The FBI could have obtained the footage from the cloud this way, or pieced together the video from those data points.
Both possibilities track, based on how Nest cameras work without a subscription: While you need to pay Google in order to save video clips from your Nest cameras, some Nest devices record event histories and store them on-device. The third-gen wired Nest Doorbell can save up to 10 seconds of clips, while the first and second-gen wired doorbells can save up to three hours of event history, all without a subscription. They also support live video feeds when motion is detected, which could impact the video data points saved to the device or cloud.
It's entirely possible the subject walking up to the camera triggered the doorbell to save an event history. But since it took the FBI so long to produce the footage, and since the director claims it was obtained from "residual data," my guess is it wasn't readily available in Guthrie's Google Home app. Maybe the event history saved to the cloud, but it wasn't clear where it was located. Maybe it was overwritten, but the FBI was able to build it back up with recovered data points. My guess would learn toward the latter, as authorities did say the camera had been disconnected. Unfortunately, we don't have a definitive answer at this time, even if the theory is sound.
I've reached out to Google for comment, and will update this piece if I hear back.
Should you get rid of your Nest camera over privacy concerns?
Based on what we know, it doesn't really seem like your Nest doorbell or camera is a fourth amendment disaster waiting to happen—but I don't blame anyone for being concerned. After all, if you don't have a Nest subscription, you might have been comforted by the thought that none of your footage was being saved anywhere, meaning law enforcement or other authorities would have nothing to seize if you somehow popped up on their radar. That doesn't necessarily appear to be the case.
That said, without a subscription, you don't have access to a collection of all clips your Nest camera has ever recorded. You might have a limited event history saved, based on motion detection, but that will be limited to three hours of data. Your device might have data points that an organization like the FBI could theoretically use to restore footage, but that's likely true for any camera or smart doorbell system—not just Nest.
Also, this is not a Ring situation—Google hasn't partnered with organizations like Flock to help law enforcement request footage from users. Nest also lacks Ring's "Search Party" feature, which can turn a neighborhood into a kind of surveillance state, and probably not just to search for lost dogs. I'm not dismissing every security and privacy concern, of course: By putting a commercially-available smart camera on your front door, you are placing your data in the hands of companies like Google or Amazon. If you want to eliminate the risk of the FBI obtaining your doorbell footage, you simply can't have a doorbell with a camera. But barring a warrant, or a Nancy Guthrie-level situation, the chances of your Nest doorbell footage actually being used against you seem rather slim.
The dark web has a bad reputation—one it has earned, at that. It's a complex subsection of the web, and it's not all bad by any means, but its nature does allow illicit and illegal activity to prosper anonymously. That's why hackers choose the dark web as their point of sale for stolen user data: If you're going to traffic digital contraband, you're going to do so as privately as possible.
As such, you might be a bit stressed if you're told your email address was found on the dark web. Maybe you use an identity theft service, which discovered your information here. Perhaps you're noticing an uptick in spam, especially spam that seems targeted to you personally. In any case, it's understandable to be anxious. The good news is, this is more common than you think, and there are steps you can take to protect your data going forward.
What is the dark web?
Despite its aforementioned reputation, the dark web is not "Evil Doers Central." It's simply one part of the deep web, or the part of the internet not indexed by search engines. The deep web makes up the vast majority of the global internet, but the dark web is unique, because it requires a specific type of browser, like Tor, and knowledge of specific dark web addresses, to access.
The dark web is inherently private, and inherently anonymous. That's why it attracts bad actors. But that doesn't mean that's all it's good for. Anyone who needs to access the internet without worrying about intervention can use the dark web. Think about journalists in countries that would rather they not tell their stories, or citizens whose governments censor the public internet. There's plenty of bad to be had, to be sure, but there's also perfectly innocent and productive content, too. For more information about this murky, mysterious place, check out our full explainer and guide here.
Why is my email address on the dark web?
If your email address is on the dark web, it's likely because one of the companies you shared it with suffered a data breach. Unfortunately, data breaches happen all the time, and there's really no way to ensure that a company you choose to share your email address with won't be a victim of a breach at some point in the future. Sometimes the company itself is breached; other times, it's a third-party the company shares data to.
When bad actors break into an organization's systems and steal their data, they often put the spoils on the dark web. This makes it easier to sell the stolen data anonymously. As such, it's really no surprise if your email ends up on the dark web—though that might not be much consolation.
What can hackers do with my email on the dark web?
Your email address is for sale, and someone buys it. Now what? Well, such a hacker could choose a few tactics here. First, they'll likely want to try breaking into different accounts you might have used that email address with. If you lost any passwords in the data breach, they might try those, too. That's why it's an excellent idea to change your passwords as soon as you learn about the breach—but more on that later.
If they can't break into your accounts on their own, they'll want to enlist your services—unknowingly, of course. To do so, they'll likely target you in phishing attacks, and, seeing as they know your email address, they'll probably come via email. There are a lot of phishing campaigns out there, but here are some examples: You might receive fake data breach notices, with a link to check your account; you might find a message telling you it's time to change your password; you might get an email warning you about a login attempt; you might even receive an aggressive email, with demands from the hackers.
Hackers may also choose to impersonate you. They might create an email that looks very similar to yours, and reach out to your contacts in order to trick them into thinking it's really you. Tell your close contacts (especially any you think won't look close at the "from" line in an email) that your email was leaked on the dark web, and to watch out for imposters.
Here's what to do if your email address is on the dark web
First of all, don't panic. Again, data breaches happen so often that many of our email addresses (among other data) have leaked onto the dark web. While this isn't a good thing, it also isn't the end of the world.
Next, change your passwords, starting with your email account itself. If you know the account the email was stolen from, make sure to change this next, as your password may have also be affected in the data breach. As usual, make each password strong and unique: You should never reuse passwords with any account, and all of them should be long and difficult for both humans and computers to guess. As long as each of your accounts uses a strong and unique password, you really shouldn't have to change all of your passwords: Hackers may have your email, but they won't have all these passwords to use with it.
From here, make sure all of your accounts use two-factor authentication (2FA), when available. 2FA ensures that even if you have the email address and password for a given account, you still need access to a trusted device to verify your identity. Hackers won't be able to do anything with your stolen credentials if they don't have physical access to, say, your smartphone. This is a crucial step for maintaining your security following a data breach. You could also choose to use passkeys instead of passwords for any accounts that offers it. Passkeys combine the convenience of passwords with the security of 2FA: You log in with your fingerprint, face scan, or PIN, and there's no password to actually steal.
From here, monitor your various accounts connected to this email, especially your financial accounts. Your email address alone likely won't put you in too much jeopardy, but if you lost additional information, you'll want to ensure hackers don't breach your important accounts. You could take drastic steps, like freezing your credit, but, again, if it's just your email address, this is likely a step too far.
Can I remove my email from the dark web?
While some data removal services claim to be able to remove data like email addresses from the dark web, it's just not 100% possible. The dark web is vast and unregulated, and once the data leaks onto it, the cat's kind of out of the bag. Sure, a service like DeleteMe could request data web hosts to take down your email, but they don't have to. Plus, hackers who buy your email already have it. Again, exposed email addresses are not the end of the world. But if you can't stand having your email on the dark web, your best bet may be to make a new account.
Preventing your email address from winding up on the dark web
What you can do is take measures to prevent data loss in the future. The best step to take is to stop sharing your email in the first place. You don't need to be a hermit, though: Use an email alias service, like Apple's Hide My Email or Proton's email alias feature, to generate a new alias every time you need to share your email. Messages sent to the alias are forwarded to your inbox, so the experience is the same for you, all without exposing your actual address to the world. If one of these companies suffers a data breach, no problem: Just retire the alias.
To that point, going forward, consider using a data monitoring and removal service. Maybe you already do, and that's how you learned about your email on the dark web to begin with. But if you don't, there are many options out there to choose from. While none can promise they'll remove email addresses from the dark web, they might spot your email if it ends up there. If you use aliases, you can then kill that particular address and make a new one for the affected account. Plus, if your email ends up somewhere other than the dark web, they might be able to remove it for you.
While there are a lot of chat apps out there, WhatsApp is the undeniable leader of the pack. The app has over three billion monthly active users, constantly messaging and calling one another across the globe. However, currently those calls are all happening over the mobile app, or maybe the desktop app. Though WhatsApp does have a web app, the service has never supported audio or video calls outside of its downloadable apps—until now.
You can now make calls from the WhatsApp web app
According to WABetaInfo, WhatsApp is slowly rolling out audio and video calls to its web app. At launch, the functionality is coming to individual chats with users who elect to enroll in the WhatsApp web app's beta, but the company plans to roll out the feature to all web app users over the coming weeks.
WABetaInfo notes that voice and audio calls work about the same as they do in the WhatsApp desktop app. When you open an individual chat in the web app, you'll now see a video call icon at the top. Click this, and you'll find two options: one to place a voice call, and one to place a video call. These calls are still end-to-end encrypted, as they are on WhatsApp's desktop and mobile apps, meaning only the users who are a part of the calls can hear what's being said. In addition, the web app's video call client supports Screen Share, so you can share a live stream of your computer's screen to another WhatsApp contact.
WhatsApp is also reportedly working on group chat calls for web app users, as well. While that feature won't roll out alongside individual calls, when it does launch, you'll be able to join group chats with up to 32 people.
If you tend to use the WhatsApp desktop or mobile apps, this might not seem like huge news—but it is pretty substantial for a few subsets of WhatsApp users. One, of course, is the user base that just prefers using WhatsApp in their computer's web browser—but the other is Linux users. WhatsApp doesn't actually offer a version of its desktop app for Linux, so those users have to use the web app if they want to run WhatsApp on their computers. That means they've never before been able to place calls without pulling out a mobile device.
How to sign up for the WhatsApp web app beta
This feature will soon roll out to all web app users, but until then, you need to be running the WhatsApp web app beta in order to try it.
Luckily, it's pretty easy to get up and running. To start, open the web app, then head to the settings menu, choose "Help," then choose the "Join beta" option. This will immediately switch you over to the beta version of the web app. (You should see a "Beta" label on your screen.) Now that you're running the beta, should you find the option to place calls in individual chats.
If you're of a certain age, you might remember mixtapes: cassettes made up of a series of tracks you or a friend think work well together, or otherwise enjoy. (They took some work to put together, too.) Digital music sort of killed mixtapes, but, in their place, came playlists. You could easily put together a collection of your favorite songs, and either burn them to a CD, or, as streaming took over, let the playlist itself grow as large as you wanted.
Anyone can make a playlist, but there's an art to it. Someone with a keen ear for music can build a playlist you can let play for hours. Maybe you have a friend who's good at making playlists, or maybe you're that friend in your group. They can be a fun way to share music, and find some new music to add to your own library.
Now, generative AI wants to replace human intervention altogether. Rather than you or a friend building a playlist, you can ask AI to do it for you. And YouTube Music is the latest service to give it a try.
YouTube announced its new AI playlist generator in a post on X on Monday. If you subscribe to either YouTube Premium or YouTube Music Premium, you can ask YouTube's AI to make a playlist based on whatever parameters you want. To try it out, open YouTube Music, then head to your Library and tap "New." Next, choose the new "AI Playlist" option, then enter the type of music you're looking for. You could ask YouTube Music to generate a playlist of pop-punk songs, or to make something to play when focusing on work. Really, it's whatever you want, and if the AI gets it wrong, you can try it again.
It's pretty straightforward, and nothing revolutionary. Other music streaming services have their own AI playlist generators too. Spotify, for example, has had one for a couple of years, but recently rolled out Prompted Playlist as well, which lets you generate playlists that update with time, and takes your listening history into account. With this update, however, YouTube is likely trying to drum up some interest in its streaming service and encourage users to pay for it. Just this week, the company put lyrics—once a free feature—behind the Premium paywall. I suppose it thinks that if you can't read what your favorite artists are singing, and you'd like to have a bot make your playlists for you, you might just subscribe to its platform.
This could be a good change in the long run for YouTube Music subscribers. I'm on Apple Music, so I don't really use AI-generated playlists. I like the Apple-curated playlists, as well as the ones my friends and I make and share. But who knows: Maybe human-generated playlists are going the way of the mixtape.
Google announced two new ways for users to remove their sensitive information from the web Tuesday morning—or, at least, remove that data from Google Search. The first lets users request that Google remove sensitive government ID information from Search, while the second gives users new tools to request the same for non-consensual explicit images.
Google's "Results about you" tool is getting an update
Credit: Google
First, Google is updating its existing "Results about you" tool, which helps users scour the internet for their personal information. Before today, this tool could already locate data points like your name, phone number, email addresses, and home addresses. Following the update, you can now find and request the deletion of search results containing highly sensitive information, including your driver's license, passport, or Social Security number.
To launch this tool, click here. If you've never used "Results about you" before, you'll need to set it up to tell Google what to look out for. Once you do, you'll be able to add government ID numbers, such as your driver's license, passport, and Social Security number. If Google finds a match, the company will let you know. You can receive an alert from the Google app on your smartphone, which takes you to a summary of what data was found and where. From here, you can choose from "Request to remove," or "Mark as reviewed."
Unfortunately, this tool won't remove the data from the websites that are hosting it, but it will eventually remove the search results—sharply reducing the chance that someone will find your data on their own.
Google says these changes will roll out in the U.S. over the "coming days," while it is working on bringing them to other countries in the future.
Google's simpler way to remove explicit images from Search
Credit: Google
In addition to these changes, Google is now rolling out a simpler tool for users to request the remove of non-consensual explicit images (NCEI) from Search. If you find such an image on Search, you can tap the three dots on that image, choose "remove result," then "it shows a sexual image of me." You'll have the choice to report whether the photo is real, or is artificially generated, as well, and you can report multiple images at once, if needed. Your requests will all appear in the Results about you hub, so you can track the progress of each.
The tool lets you opt-in to an option that will filter additional explicit results in other searches. Google says it will also share links to "emotional and legal support" after you submit a request.
It finally happened. After months of speculation, ChatGPT officially has ads. OpenAI revealed the news on Monday, announcing that ads would roll out in testing for logged-in adult users on Free and Go subscriptions. If you or your organization pays for ChatGPT, such as with a Plus, Pro, Business, Enterprise, or Education account, you won't see ads with the bot.
OpenAI says that ads do not have an impact on the answers ChatGPT generates, and that these posts are always clearly separated from ChatGPT's actual responses. In addition, ads are labeled as "Sponsored." That being said, it's not exactly a church-and-state situation here. OpenAI says that it decides which ads to show you based on your current and past chats, as well as your past interactions with ChatGPT ads. If you're asking for help with a dinner recipe, you might get an ad for a meal kit or grocery service.
The company claims it keeps your chats away from advertisers. The idea, according to the company, is strictly funding-based so that OpenAI can expand ChatGPT access to more users. That's reportedly why ads are starting as a test, not a hardcoded feature: OpenAI says it wants to "learn, listen, and make sure [it gets] the experience right." As such, advertisers don't have access to chats, chat histories, memories, or your personal details. They do have access to aggregate information about ad performance, including views and click metrics.
OpenAI will only show ads to adults. If the service detects that you are under 18, it will block ads from populating in your chats. Ads also will not appear if you're talking to ChatGPT about something related to health, medicine, or politics. You can offer OpenAI feedback on the ads you do see, which should inform the ads you receive in the future. You can also delete your ad data and manage ad personalization, if you want to reset the information OpenAI is using to send you ads.
Credit: OpenAI
How to opt out of ChatGPT ads
The thing is, you don't actually have to deal with ads, even if you use ChatGPT for free. That's not just by upgrading to a paid ChatGPT plan, though OpenAI does suggest that option in its announcement. In addition, OpenAI is offering Free and Go users a dedicated choice to opt out of ads here. There is, of course, a pretty sizable catch: You have to agree to fewer daily free messages with ChatGPT. OpenAI doesn't offer specifics here, so it's not clear how limited the ad-free experience will be. But if you hate ads, or if you simply don't want to see an ad for something irrelevant to your ChatGPT conversation, it's an option.
If you like that trade-off, here's how to opt out of ads. Open ChatGPT, then head to your profile, which opens your profile's Settings page. Here, scroll down to "Ads controls," then choose "Change plan to go ad-free." Select "Reduce message limits," and ChatGPT will confirm ads are off for your account. You can return to this page at any time to turn ads back on and restore your message limits.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
We should all take common-sense steps to make sure our data stays safe and secure: use strong passwords with our accounts, and never reuse passwords; employ two-factor authentication on any account that offers it; and avoid clicking strange links in emails or text messages. But even when you follow all those rules, your personal data can still be at risk, strictly because the services you rely on aren't following these rules themselves.
The team behind the discovery says they weren't out looking to break a security story. Instead, they were "messing around with login pages," specifically Google login pages, when they found that the sites' HTML source code could see the passwords they entered in plain text. They turned their sights onto other websites—more than 7,000, reportedly—and found that about 15% of them were also storing sensitive information in plain text. That's over 1,000 websites exposing important data.
That, of course, is not supposed to happen: When you enter sensitive data into a website—say, your password into Google's login page—that site shouldn't see your password at all. In short, the sites confirm your passwords through hashing algorithms—essentially, jumbling your password into a code that can be checked against the code the site stores on their end. They can then confirm you entered the right password without ever exposing the actual text. By storing things like passwords and Social Security numbers in plain text, those sites are exposing that data to anyone in the know.
Importantly, that includes browser extensions. The researchers claim that 17,300 Chrome extensions—or 12.5% of the extensions available for download on Google's browser—have the permissions they need to view this sensitive plain text data. Think about the permissions you ignore when setting up a new extension, including permissions that give extensions full access to see and change what you enter on a webpage. Researchers didn't expose any extensions by name, as the situation is not necessarily the fault of the extensions, but considering the scope, it's possible some of the extensions you use can access sensitive information you enter in certain sites.
Again, legitimate extensions are not the priority: Instead, it's the risk that a developer will create an extension with the intent of scraping sensitive info stored in plain text. While the researchers claim there are no extensions actively abusing this vulnerability yet, this isn't a theoretical problem. Researchers created an extension from scratch that could pull this user data, uploaded it to the Chrome Web Store, and got it approved. They took it down immediately, but proved it's possible for a hacker to get such a malicious extension on the official store. Even if the hacker didn't make the extension, they could acquire a legitimate extension with an existing user base, adjust the code to take advantage of the vulnerability, and spring the updated extension on unsuspecting users. It happens all the time, and not just on Chrome.
How to protect your sensitive data from malicious browser extensions
Unfortunately, there's little you can do to prevent these sites from storing your passwords, credit cards, and Social Security numbers in plain text. The hope is, following these discoveries, websites will improve their security and kill the vulnerabilities on their end. But that's on them, not you.
There are some steps you can take to mitigate the damage, however. First, make sure to limit your use of browser extensions. The fewer extensions you use, the less likely it is you'll use a malicious one. Use only extensions you fully trust, and frequently check in on updates. If the extension changes hands to a new developer, vet that new owner before continuing to use it. You could even disable your extensions when sharing sensitive information with websites. If you need to provide your Social Security number on an official web form, for example, you could disable your extensions to prevent them from reading the data.
You can also limit the data you share that could stored in plain text. If given the option, use passkeys instead of passwords, as passkeys don't actually use any plain text data that hackers could steal. Similarly, use secure payment systems, such as Apple Pay or Google Pay, which don't actually share your credit card information with the website you're making a payment on. The name of the game is to avoiding typing out your sensitive details unless absolutely necessary—and then, reducing the parties who can see those details.
My subscription fatigue is real, but at the end of the day, I do recognize that companies need to make money. If one of them manages to put together a compelling package of features for a reasonable price, I can decide whether or not I find that value worth the money. That's fine. What isn't fine is offering a feature for free for years, and then suddenly deciding to lock it behind a paywall.
It seems YouTube didn't get that memo. Starting on Saturday, outlets like 9to5Google began reporting that YouTube Music had started to remove the ability to vie lyrics for free users. If you want a full lyrics experience, you'll need to subscribe to either YouTube Music Premium, or YouTube Premium (the latter includes Music Premium). The service hasn't cut these users off cold turkey. According to anecdotal user experiences, YouTube Music is opening lyrics access to free users for five songs per month. Once they play song number six, they'll only have access to the first two lines of each song, as the rest of the lyrics will be blurred out. These users will have to wait until the following month to view another five songs.
There appears to be no confusion about why the lyrics are blurred out, either. When you switch to the "Lyrics" tab on YouTube Music as a free account, a new banner appears, telling you how many views you have remaining. Beneath this, you'll see the option to "Unlock lyrics with Premium," a clear message that, unless you pay up, you only get a limited number of lyrics views. This, apparently, follows a months-long period where YouTube Music tested lyrics as a Premium-only feature.
For what it's worth, when I tried to see what the current lyrics situation looks like on my free YouTube Music app, the service gave me two weeks of Premium for free, with no option to skip it. That's, um, nice of YouTube, but since I already have an Apple Music subscription, the only real consequence here is that I can't test these new lyrics limitations out.
Free music services that offer lyrics
I don't see the strategy here. Lyrics aren't something I feel people would feel compelled to pay for specifically—but they might be annoyed enough at losing them to look elsewhere. Spotify, for example, offers a full lyrics experience for free users. It was only last week the company added offline lyric downloads for Premium users only, but, even then, that's adding a feature for the subscription tier—not taking away an existing feature from free users.
It's not just Spotify, either. Other free music streaming services offers lyrics, as well, including Pandora, Amazon Music Free, and Freefy. These are largely radio services, so you may not have as much flexibility as you would have with YouTube Music free—but, hey, you at least have lyrics.
Losing lyrics isn't the end of the world for free YouTube Music users, either. Just about any song's lyrics can be found on the internet. Sometimes, the lyrics show up in a Google search window without you needing to even click a link. Otherwise, sites like Genius and AZLyrics do exist. It's just a bummer YouTube feels the need to gatekeep the in-app experience.
If you tuned into Super Bowl LX on Sunday, you may have caught Ring's big ad of the night: The company tried to tap into us dog owners' collective fear of losing our pets, demonstrating how its new "Search Party" feature could reunite missing dogs with its owners. Ring probably thought audiences would love the feature, with existing users happy to know Search Party exists, and new customers looking to buy one of their doorbells to help find lost dogs in the neighborhood.
Of course, that's not what happened at all. Rather than evoke heartwarming feelings, the ad scared the shit out of many of us who caught it. That's due to how the feature itself works: Search Party uses AI to identify pets that run in its field of vision. But it's not just your camera doing this: The feature pools together all of the Ring cameras that have Search Party enabled to look for your lost dog. In effect, it turns all these individual devices into a Ring network, or, perhaps in harsher terms, a surveillance state. It does so in pursuit of a noble goal, sure, but at what cost?
The reactions I saw online ranged from shock to anger. Some were surprised to learn that Ring cameras could even do this, seeing as you might assume your Ring doorbell is, well, yours. Others were furious, lashing out at anyone who thinks Search Party is a good idea, or that the feature isn't the beginning of a very slippery slope. My favorite take was one comparing Search Party to Batman's cellphone network surveillance system from The Dark Knight, which famously compromised morals and ethics in the name of catching the bad guy.
According to Ring, Search Party is a perfectly safe and wholesome way to look for lost dogs in the area. The company's FAQs explain that users can opt-out of the feature at any time, and only Ring doorbells in the area around the home that started the current Search Party will look for the dog. In addition, Ring says the feature works based on saved videos, so Ring doorbells without a subscription and a saved video history won't be able to participate. (Though I'm not sure the fact that the feature works with saved videos assuages any fears on my end.)
I am not pro-missing dogs. But I am pro-privacy. At the risk of sounding alarmist, Search Party really does seem like a slippery slope. Today, the neighborhood is banding together to find Mrs. Smith's missing goldendoodle; tomorrow, they're looking for a "suspicious person." Innocent until proven guilty, unless caught on your neighbor's Ring camera.
Can law enforcement request Search Party data?
Here's the big question regarding Search Party and its slippery slope: Can law enforcement—including local police, FBI, or ICE—request saved videos from Ring cameras participating in Search Party in order to track down people, not pets?
You won't be surprised to learn that that wasn't answered by Ring's Super Bowl ad, nor is it part of the official Search Party FAQs. However, we do know that, as of October 2025, Ring partnered with both Flock Safety as well as Axon. Axon makes and sells equipment for law enforcement, like tasers and body cameras, while Flock Safety is a security company that offers services like license plate recognition and video surveillance. These partnerships allow law enforcement to post requests for Ring footage directly to the Ring app. Ring users in the vicinity of the request have the choice to either share that footage or ignore the petition. Flock Safety says that users who do choose to share footage remain private.
Of course, law enforcement isn't always going to ask for volunteers. According to Ring's law enforcement guidelines, the company will comply with "valid and binding search warrants." That's not surprising, of course. But the company does note an important distinction in what it will share: Ring will share "non-content" data in response to both subpoenas and warrants, including a user's name, home address, email address, billing info, date they made the account, purchase history, and service usage data. The company says it will not share "content," meaning the data you store in your account, like videos and recordings of service calls, for subpoenas, only warrants.
Ring also says it will tell you if it shares your data with law enforcement, unless it is barred from doing so, or it's clear your Ring data breaks the law. This applies for both standard data requests, as well as "emergency" requests.
Based on its current language, it seems that Ring would give up the footage used in Search Party to law enforcement, assuming they present a valid warrant. The thing is, it's not clear whether Search Party has any actual impact on that data: For example, imagine a dog runs in front of your Ring doorbell, and the footage is saved to your history. Now, a valid warrant comes through requesting your footage. Whether you have Search Party enabled or disabled, Ring may share that footage with law enforcement—the feature itself had no impact on whether your doorbell saved the footage. The difference would be whether law enforcement has access to the identification data within the footage: Can they see that Ring thinks that dog is, in fact, Mrs. Smith's goldendoodle, or do they simply see a video of a fluffy pup running past your house? If so, that would be your slippery slope indeed: If law enforcement could obtain your footage with facial recognition data of the suspect they're looking for, we'd be in particularly dangerous territory.
I've reached out to Ring for comment on this side of Search Party, and I hope to hear back to provide a fuller answer to this question.
How to opt-out of Search Party on your Ring cameras
If you'd rather not bother with the feature at all, Ring says it's easy enough to turn off. To start, open the Ring app, tap the hamburger menu, then choose "Control Center." Here, choose "Search Party," then choose the "blue Pet icon" next to each of your cameras for "Search for Lost Pets."
To be honest, if I had a Ring camera, I'd go one step further and delete my saved videos. Law enforcement can't obtain what I don't save. If you want to delete these clips from your Ring account, head to the hamburger menu in the app, tap "History," choose the "pencil icon," then tap "Delete All" to wipe your entire history.
When you sign up for a subscription on Substack, you're thinking you'll receive newsletters and posts from online creators, not lose the data you share with the platform. But like any digital service, the data you provide when signing up is at the mercy of Substack, or anyone who happens to gain access to that data. Unfortunately, that's now the case.
Substack may have lost nearly 700,000 user records
As reported by BleepingComputer, Substack recently disclosed a significant data breach. The company's CEO, Chris Best, sent users a notice of the breach this week, sharing that email addresses, phone numbers, and "other internal metadata" were shared from Substack accounts without their permission. The company reportedly discovered the breach on Feb. 3, even though hackers accessed the data itself in October of 2025. That means the data was in unauthorized hands for roughly four months before Substack identified the breach.
Best explained that Substack has since fixed the problem with the system that allowed an unauthorized third party to access this data. The company is launching an investigation and is reportedly taking steps to prevent this type of breach from happening going forward. On the bright side, Best claims that credit card numbers, passwords, and financial information were not accessed in the breach.
What Best doesn't share is the scope of the breach. For that, we have to turn to BleepingComputer, which found a post from a "threat actor" on the hacking forum BreachForums. The actor posted a database of 697,313 Substack records, sharing that the Substack user base is much larger, but the scraping method was "noisy and patched fast." This actor says the data compromised includes email addresses, phone numbers, names, user IDs, Stripe IDs, profile pictures, and bios—a bit more detailed than the report from Substack's CEO.
700,000 records isn't the same as 700,000 users: Each record is something like an email address or a phone number, which means one Substack user could have lost multiple records in the breach. Still, it's a large number of data points, and is little consolation to the users who have lost information here.
What Substack can do after this breach
Unfortunately, there's not much users can do to mitigate a data breach once it's happened. The data stolen from Substack is already lost, and you won't be able to undo that. However, there are some steps you can take to protect yourself in the wake of the breach, and to prevent this data loss in the future.
First, closely monitor your incoming texts and emails. Hackers will take advantage of the data here to target Substack users in phishing schemes. If you receive messages from strangers, or even suspicious messages claiming to come from Substack, exercise caution. As per usual, never click on links in messages from senders you don't know, and, even more importantly, never download files or applications if instructed.
You may also want to consider masking your email address going forward. Use a service like Apple's "Hide My Email" or DuckDuckGo's email protection to generate a "burner" address each time you need to share your email with a service. The service will send messages to the burner address, which gets forwarded to your real address. That way, the service doesn't know your real address, and, if hacked, won't compromise it. Hackers will only get the burner, which you can shut down at any time.
Streaming services make it easy to listen to a lot of music, but they don't necessarily tell you much about the songs themselves. You can see how long each track is, who performed it, and maybe even the song writing credits, but you don't know why the artist wrote the song, or what each song is supposed to mean. You can, of course, scour the internet, looking at articles and blogs to learn more about your favorite music—or, you can skim Spotify's new summary cards that offer fun facts about each track.
Spotify announced the new feature, called "About the Song," on Friday. The feature, which is launching in beta, is available in the app's Now Playing View. When you select it, you'll see story cards you can swipe through that tell you more about the song you're listening to. Spotify says the stories are summarized from "third-party sources," and in my testing, I've seen sources like Hypebeast, Wikipedia, and fan sites. The company also tells me that "some systems" of the feature use machine learning to generate these summaries, which means About the Song is, in part, AI-generated.
Credit: Spotify
As with many of Spotify's new features, About the Song is only available for Premium subscribers. At this time, it's also limited to English accounts in the U.S., UK, Canada, Ireland, New Zealand, and Australia. If you pay for Spotify in one of these regions, the feature is exceptionally easy to find. When you're listening to a song, just scroll down on the page until you see the "About the Song" card. If you don't see it, that song likely doesn't support the feature. Some songs will only have one summary card, but others may have more. If so, you'll see icons in the top right of the card window telling you which card you're reading. You can swipe left on the card to open the next.
I've seen songs with as many as four of these cards, though it's possible some songs have even more. Some of those are all summarized from the same source—say, one Wikipedia article—while others pull from multiple sources to generate multiple About the Song cards. There are thumbs-up and thumbs-down options on each card to rate the summary, implying these are AI-generated. The summaries appear to be static once generated though—when I quit the app and return, the summaries are the same. I'd be curious to know if the summaries are the same for everyone who chooses a song, or if they're generated for each individual listener.
Spotify has had a busy week. On Thursday, one day before announcing "About the Song," the company revealed its plans to start selling physical paper books, which sync with its digital audiobooks. The day before that, Spotify revamped its lyrics feature, including the option to download lyrics for offline viewing.
In January, the FBI made headlines after it raided the home of Washington Post reporter Hannah Natanson. It was a shocking case of law enforcement not just overriding one journalist's privacy, but the integrity of the entire news organization. The devices the FBI seized—which included personal devices as well as a Washington Post-issued laptop—contained Natanson's personal contacts, correspondences, and the Slack channels of the Washington Post itself.
But while the FBI was able to access some of the devices, it was not able to access Natanson's iPhone. That's because the device was in Lockdown Mode, which prevented the FBI's Computer Analysis Response Team (CART) from breaking into it. This isn't a setting that is exclusive to journalists: You have this option baked into your iPhone as well, and can choose to turn it on at any time. The thing is, unless you're a high-profile target, you probably don't want to.
How does Lockdown Mode work?
Lockdown Mode is an option on iPhones, iPads, Apple Watches, and Macs, designed for users who could be the target of sophisticated cyberattacks. Think politicians, businessmen, activists, and, of course, journalists—really, anyone high-profile that works or takes action in a way that could draw the ire of powerful organizations or governments.
Because attackers target devices with spyware, the goal of Lockdown Mode is to reduce the attack surface of your device in order to prevent potential cyberattacks from working. Attackers can install spyware on a target's device in a number of ways, through links, attachments, wired connections, and file downloads, the same way you can install malware by clicking a malicious link in an email, or downloading a corrupt extension from the web. Lockdown Mode locks down these vulnerabilities and eliminates as many potential attack routes as possible.
To achieve this, Lockdown Mode severely impacts a number of functions you may use on your device every day. According to Apple, that includes the following:
Messages: Lockdown Mode will block most message attachment types, other than "certain images, video, and audio." Links and link previews are blocked.
Web browsing: The feature blocks "complex web technologies," which could impact how certain websites load or function. You may not see certain web fonts, and you may see missing image icons in place of pictures.
FaceTime: Incoming FaceTime calls are blocked, except for contacts you have called within the past 30 days. You can't use SharePlay or take Live Photos in FaceTime calls.
Apple services: Invitations to Apple services like invites to manage a smart home are blocked, unless you have previously invited that person. GameCenter will not work, and Focuses will not work "as expected."
Photos: Lockdown Mode strips photos of their location data when you share them, and shared albums are taken out of your Photos app. You won't be able to receive new shared album invites. You can still see shared albums on devices that don't have Lockdown Mode enabled.
Device connections: Your device needs to be unlocked before it can communicate with another computer. In addition, your Mac also requires your explicit approval before the connection can be made.
Wireless connectivity: You won't automatically join non-secure wifi networks, and you will disconnect from existing non-secure wifi networks. Lockdown Mode also blocks 2G and 3G cellular support.
Configuration profiles: You can't install configuration profiles, and the device can't enroll in Mobile Device Management.
Apple makes a point to say that phone calls and "plain text messages" will work as normal, however incoming calls won't ring on your Apple Watch. Emergency SOS also will continue to work.
These restrictions make it much more difficult for a bad actor to install spyware on your device, though it also makes it more difficult to use your device. A shared album invite could contain malware, but by removing the feature entirely, you miss out on photos from friends and family. Any spyware coming from a malicious link or image will be blocked, but if you frequently send photos, videos, and other attachments in Messages, you'll miss out.
That's why these measures are really designed only for individuals who think they'll be targeted by sophisticated actors. It seems that could include governments secretly installing spyware on targets' devices, or the FBI stealing your device in a raid. It's worth noting that the FBI was able to access Natanson's other devices, including a MacBook Pro that unlocked with her fingerprint. The agency's warrant compelled Natanson to unlock her devices with biometrics if they were enabled. Lockdown Mode could not have prevented that, so it's not clear why the FBI didn't force Natanson to unlock the iPhone in question, too.
How to turn on Lockdown Mode
If you understand the restrictions, but still want to try Lockdown Mode, you'll need to be running the following software version on each of the Apple devices you want to use Lockdown Mode with:
iPhone: iOS 16 or later
iPad: iPadOS 16 or later
Apple Watch: watchOS 10 or later
Mac: macOS Ventura or later
Apple says "additional protections" are available for iOS 17, iPadOS 17, or macOS Sonoma or later. In addition, you should update your device to the latest software version before turning on Lockdown Mode if you want all the latest protections.
You can turn on Lockdown Mode on any of your Apple devices, but you must do so individually on each. You'll find the option at the bottom of the "Privacy & Security" section in Settings (System Settings on Mac). Hit "Turn On Lockdown Mode," then review the pop-up that appears and choose "Turn On Lockdown Mode" again. You'll need to choose to "Turn On & Restart," then enter your device's password or passcode for the feature to take effect.
It's officially time for another iPhone update: Apple dropped iOS 26.3 for compatible iPhones on Wednesday, following nearly two months of beta testing. Unlike iOS 26 itself, or even iOS 26.1 and iOS 26.2, iOS 26.3 isn't exactly feature-filled. But there are some interesting (and important) changes that are worth noting, including an easier way to leave iPhone altogether.
iOS 26.3 makes it easier to transfer your iPhone data to Android
Back in December, we learned about a small but substantial new iOS feature: an official way to make transferring between an iPhone and an Android device more seamless. In iOS' "Transfer or Reset iPhone" settings, there is now a new "Transfer to Android" option. Tap it, and iOS instructs you to place your iPhone near your Android device; from there, you can choose to pass along data like photos, messages, notes, and apps. However, it seems not all data will transfer: Health data, devices paired with Bluetooth, and "protected items" like locked notes will not come along with this transfer feature.
Apple actually worked with Google directly on this feature, which means it doesn't just go one way: Android users will have a similar option on their end to transfer to iPhone. But those of us updating our iPhones to iOS 26.3 now have an easier escape route if we choose to switch platforms.
You can limit precise location sharing in iOS 26.3 (if you have Boost Mobile)
With iOS 26.3, Apple is giving certain users the ability to stop sharing their precise location with their cellular network providers. The new feature, "Limit Precise Location," reduces the exactness of the location data that is shared with cellular networks. That way, the network can determine your general location, but not your precise location. What that means in practical terms, at least according to Apple, is that the network might know what neighborhood you're in, but not the exact street address.
At this time, only Boost Mobile users in the U.S. will be able to use this feature with iOS 26.3. It also only works with some iOS devices, including the iPhone Air, iPhone 16e, and iPad Pro M5 Wi-Fi + Cellular. Hopefully, this will make its way to more cellular plans and more iPhones in the future, but for now, it's a pretty limited feature. While most of us won't be able to limit precise location sharing with the network, we can at least stop apps from harvesting this data.
"Weather" and "Astronomy" get their own wallpaper section in iOS 26.3
Talk about a small update: With iOS 26.3, Apple is breaking "Weather" and "Astronomy" into their own wallpaper sections. (Previously, these two categories were paired.) While Astronomy features the standard space wallpapers found in iOS 26.2 and earlier, Weather now features three preset wallpapers, with different font options and weather widgets.
Bug fixes and security patches
Apple typically bundles its feature updates, like iOS 26.3, with stability patches, including for bugs and security vulnerabilities. This update is no different. Apple's security notes for iOS 26.3 list 37 patches for security vulnerabilities, covering issues across iOS. The most important by far patches an issue with dyld, Apple's "Dynamic Link Editor," which Apple says may have been used in a "sophisticated attack against specific targeted individuals." The company has used this language before, usually referring to attacks against high-profile users from governments or large organizations. As such, most users likely won't be affected by this bug, but better safe than sorry: update ASAP.
Other important security updates include a patch for a Photos bug that could let someone access your photos from the Lock Screen; a fix for a Screenshots issue that could let an attacker see your deleted notes; a patch for a UIKit bug that could let someone take screenshots of sensitive data when using iPhone Mirroring with Mac; and a fix for a VoiceOver bug that could let an attacker view sensitive information even when your iPhone is locked.
What about notification forwarding in the EU?
Back in September, we learned Apple was quietly working on some type of notification forwarding feature, but other than that basic functionality, the details were left to speculation. At the time, the common assumption was that Apple intended the feature to be used to forward notifications to third-party devices, specifically smartwatches, in an attempt to open up the platform to wearables other than the Apple Watch. This wouldn't be Apple's choice, of course—left to its own devices, the company would keep as many features locked to Apple devices as possible. Instead, the motivation would come from the EU, which has compelled Apple to make its platforms more cooperative with third-party devices.
As it happens, Apple started testing this feature with the iOS 26.3 beta—albeit, only in the EU. With the first iOS 26.3 beta, Apple added a “Notification Forwarding” option in Notification settings for all iPhones. They have since removed this option, since the feature is EU-only. Even though it wasn't live in that first beta, Apple did have a description for how the feature works, saying that notifications can be forwarded to one device at a time. Importantly, the description says that when notifications are forwarded to another device, they will not appear on your Apple Watch. Is that limitation really necessary, Apple? All that said, it seems that the feature didn't make it to the official release of iOS 26.3, even for EU-users.
EU iPhone users are also getting another interesting feature with iOS 26.3. The new update will allow users to pair third-party accessories with their iPhones by bringing the devices close together, similar to how pairing AirPods to iPhones works. Developers will need to add this functionality to their devices before this works, of course, but the new update makes it possible.
While certain features will still be free to use, the majority of the Alexa+ experience is now locked behind a paywall. True, you might already pay for that paywall—but if you don't, it's going to cost you quite a bit to keep the features you've been test driving for months. (Of course, the standard Alexa assistant still exists if you don't care for the latest generative AI enhancement.)
What is Alexa+?
The new Alexa is much like the old one, but now behaves a bit more like other generative AI assistants, including ChatGPT. In addition to simple requests and questions, Alexa+ can handle more complex queries and understand context (meaning one complex question can be followed by another, without needing to repeat yourself). For all the hullabaloo around generative AI, contextual awareness is really one of the big improvements users will notice with their digital assistants.
Amazon has a big vision for Alexa+. It still wants you to use it to control smart home devices, run timers, check the weather, and catch up on the news, but it also wants users to take advantage of "agentic" tasks, or actions that the AI can handle on your behalf. In theory, agentic AI allows you to ask the AI to order dinner to-go, make reservations at restaurants, schedule an Uber, or book a home repair. I'm still not sold on the capabilities of agentic AI assistants, and I imagine most people will continue to use Alexa+ the way they used regular old Alexa, (e.g. asking "Is it cold today?" or telling it to "set a timer for 10 minutes," or ordering it to "Play 'Manchild' by Sabrina Carpenter on repeat"), but what do I know? Maybe Alexa+ really will change the way people interact with their Echo devices.
How much Alexa+ will cost you
If you're interested in Amazon's newest AI assistant, there are three different ways to try it—one free, and two paid.
How to use Alexa+ for free
The most basic, Alexa+ chat, is totally free of charge. You can try it by heading to alexa.com or using the Alexa app for iOS or Android, where you can talk to Alexa in a chat window, ala ChatGPT. Amazon says users can get "quick answers, plan research, and explore new topics."
But the thing is, you can't use Alexa+ chat for any of the things you probably want to have Alexa+ do. It is solely a web-based chatbot experience, not something you can connect to your Alexa-enabled devices. If you're interested in the full Alexa+ package, you'll need to pay Amazon one way or another.
Prime Members get a free subscription
The good news is, you might have already paid Amazon for the privilege, even if you didn't realize it: Currently, Amazon is offering all Prime members full access to Alexa+, including via the chatbot and through Alexa-enabled devices. Alexa+ also works with other Amazon services that come free with Prime, including Prime Video and Amazon Music. Seeing as over half the U.S. population has a Prime account, chances are good that if you're at all interested, you already have access to Alexa+.
How to use Alexa+ without Prime
Maybe you're one of the rare unicorns who doesn't have a Prime account, but still wants to try Alexa+ on an Echo smart speaker. In that case, Amazon will offer you the full experience for a cool $19.99/month. That's a slightly ridiculous price, seeing as a full Prime membership (with all the added benefits, from Prime Video to free shipping) will run you $14.99/month (or $139 per year). You definitely save money by subscribing to the latter, which is probably a big part of Amazon's motivation here—Jeff Bezos will never truly rest until everyone uses Amazon to buy everything.
How to enable Alexa+
If you opt for either of the paid options, you can set up Alexa+ by simply telling your Alexa-enabled device, "Upgrade to Alexa+." You can also use Alexa+ by logging into your Amazon account on alexa.com. And as noted above, you can certainly opt to keep the old Alexa assistant for the time being, whether whether you have Prime or not. While Amazon may do away with the legacy assistant in the future, it isn't forcing anyone to switch just yet.
Built-in lyrics are one of my favorite features of modern music streaming services. Back in ancient times, I had to google the lyrics to the songs I was listening to—which was fine if I was near a computer, but impossible when I was on the go with my iPod. This is probably why there are so many songs I think I know the words to, only to discover, once I read the actual lyrics, that I am sorely mistaken. Built-in lyrics are thus a feature that is equal parts useful and humbling.
As it happens, Spotify's existing lyrics features are getting some upgrades to kick off February. The company announced three key updates on Wednesday—two that impact free users, and one exclusive to Premium subscribers. Spotify might not offer Apple Music's dynamic lyrics, but these updates should still be welcomed by anyone who likes reading along to their music.
Spotify now supports offline lyrics
The biggest announcement of the day—in my opinion, anyway—is offline support for lyrics. This is always something that frustrates me whenever I'm using my phone without cellular service. Say I download some albums to listen to on a flight: When I try to listen to them in airplane mode, lyrics are unavailable. Being able to download the lyrics when you grab a song or album is a small but actually meaningful upgrade.
Unfortunately, offline lyrics are not available free of charge: At this time, the feature is only available to Premium subscribers, which makes sense, given that free users can't download music for offline listening anyway—only podcasts.
Lyrics are moving to a new location in the Spotify app
Traditionally, lyrics have appeared at the bottom of the player window in the Spotify app. To view them, you need to scroll down, then tap on the lyrics window to fully expand it. It seems Spotify wasn't content with this UI. The company says that in its testing, it finds that placing the lyrics directly below the album art, rather than below the player, makes them easier to interact with. And so that's where the lyrics are moving.
The company calls these "lyrics previews," as you see just a snippet of the lyrics at one time. This change is rolling out to both free and Premium users on iOS and Android. If you'd rather not see the lyrics at all, you can tap the three-dot menu and choose "Lyrics Off."
Spotify is adamant that this new placement won't change lyric sharing. You'll still be able to send specific lyrics to friends and social media platforms. You'll just need to do so from the new lyrics window.
Spotify now support lyrics translations worldwide
Spotify's lyrics translation feature makes it easy to figure out what artists are singing about when you don't speak their particular language. When you listen to a song that doesn't match your device's system language, you'll find a translate button in the lyrics window. Tap that, and Spotify will include translations beneath each line of the song, so you can follow along.
According to Spotify, this feature was available in more than 25 markets as of the end of last year, but now, the company is rolling out the feature worldwide.
If you tried talking to ChatGPT this morning, you might have found it unresponsive—something unusual for the bot that always has something say. It's not your internet connection, and it isn't your OpenAI account: ChatGPT is down.
According to Downdetector, owned by Lifehacker parent company Ziff Davis, users started reporting issues with ChatGPT at 11:56 a.m. ET. Those reports ballooned by 12:11 p.m., as the total number of incidents as of this article currently sits above 7,000. If you're an avid ChatGPT user, you might have also had issues with the bot yesterday: Downdetector shows over 25,000 reports of down time starting at 2:56 p.m. Tuesday and resolving around 4:11 p.m. the same day.
As with all outages, OpenAI will likely figure out a patch for the issue soon enough. But these outages are becoming more common. There was the Verizon outage, of course, but other services like TikTok have also experienced intermittent periods of downtime.
In 2024, the world wide web was made up of 149 zettabytes of data. That's 149 trillion gigabytes, or 149 billion terabytes—whichever helps you wrap your head around that gigantic number. Suffice it to say, the internet is a big place. It's so big, in fact, that you can't access all of it the same way. There are actually different tiers to the web, and depending on with tier a piece of content you're looking for is on, you may or may not be able to view it—even if you know where to find it.
That's the crux of the difference between the "surface" web, and the "deep" web. These are two separate parts of the internet that are also part of the larger world wide web. While these two tiers are quite different, you've visited content on both of them before. In fact, you likely visited both surface web pages and deep web pages today, multiple times, without ever realizing it.
What is the surface web?
The surface web, also known as the visible web, is really true to its name: This is the part of the web you can easily access through search engines, as they are indexed by platforms like Google. As a rule of thumb, if you can google it, it's on the surface web. The article you're reading right now is on the surface web, for example. Maybe you found it by searching "difference between surface web and deep web" on Google. The same applies to Lifehacker as a whole, as well as articles on websites like Mashable, CNET, and PCMag.
You likely spend a good amount of your internet time on the surface web, on websites both old and new. Forums like Reddit are largely fully represented on the surface web, as are some Instagram pages. Listings for products you can buy on websites like Amazon and Best Buy are surface web pages. Video platforms like YouTube are very much the surface web, including TikTok—though the experience for the latter is not optimized for web browsers. Legacy websites like AddictingGames (which is still playable in 2026), as well as the site for the 1996 movie Space Jam are the surface web. A lot of the surface web is made of up individual articles, which makes sense, since articles still make up a large amount of search engine results.
Like the surface of the ocean, the surface web is just a fraction of the internet as a whole. Back in 2017, some estimates claimed the surface web made up just 10% of the entire internet. That's still a huge number of websites, which tells you how big the overall web really is.
What is the deep web?
If the surface web is everything you can find via Google, the deep web is everything you can't. The deep web is much larger than the surface web, and is compromised of websites that are not readily accessible from search engines or direct URLs. They're often locked away behind authentication. That means you need at least a username and password to access them, and in many cases, they require an additional form of authentication as well. The article you're reading is on the surface web, but the software (content management system, or CMS) we use to write and publish the article is not—our CMS is still a website, but it's not something you can find in a Google search, and even if you knew the direct URL, you wouldn't have access.
It's the same with the many sites you access that only you have permission to view. Think about something like Gmail: The service's homepage is accessible to everyone, and it's something that pops up when searching "Gmail," but to access your Gmail inbox, you need to log in. Once you do, your inbox is accessible via the website, but it's not like anyone can see it with a URL, and they certainly can't find your inbox from a Google search. It's the same story with your Facebook feed, YouTube account, or your banking information. These are all accessible in your web browser as websites, but you need to log in to view them.
That's also the case for services and subscriptions. Think about stuff like Netflix, Hulu, or HBO: These are all accessible in your web browser, which means you're streaming the content on individual web pages. But the players on those web pages cannot be accessed from Google, even if the landing pages for a particular show or movie can be. In order to actually watch or listen to the content, you need to sign into your account. That's not the case with all streaming services, of course: Happy Gilmore on Tubi is a surface web page, since it's indexed on Google. Some of the deep web is also made up of pages you'll never see, such as protocols for identifying user accounts and running payments behind the scenes. It's not all paywalled content.
What about the dark web?
Maybe you've also heard about the dark web, but don't quite know what it is. As such, you might conflate the deep web and the dark web, but the two are not identical. In fact, the dark web is a part of the deep web; it's just the part you can't access with a traditional web browser. In order to access the dark web, you need a specialized browser, like Tor, and knowledge of the unique dark web URLs, which typically end in .onion instead of .com or .org. For more information on the differences between the dark web and the deep web, check out my explainer here.
Since their initial launch in 2016, Apple has released nine iteration of the AirPods, from the now iconic white earbuds, to the upgraded AirPods Pro, to the the AirPods Max, Apple's pricey take on over-the-ear headphones. Whatever the model, however, these things are meant to be simple: You open the case, tap a prompt on your iPhone, and presto, your AirPods are ready to go.
Despite their ease of use, however, AirPods are packed with features and settings you can adjust to your liking. Here are 10 hacks you should know if you own a pair Apple's headphones, whether standard, Pro, or Max. (A note: Whenever I refer to "AirPods settings" in this article, I am generally referring to the options that appear in the first page of the Settings app on iOS and iPadOS, or the System Settings on macOS, when you're wearing your AirPods.)
You should turn on "Off" mode on your AirPods
I've had a few AirPods in my day, and every time I set up a new pair, I turn on "Off" mode. That might read strange, but it's a real thing. Depending on your AirPods model, you might have the option to use Noise Cancellation, which, of course, blocks outside noise; Transparency, which pumps in the sounds around you; or Off, which activates neither. This last choice ends up being great for times you want some noise blockage, but want to preserve the battery of your AirPods. Though I can't speak to AirPods 4 With Noise Cancellation, the Pros and the Max do a good job with this without active noise cancellation.
While this option is always present when switching noise modes from Control Center, by default, Apple doesn't include it from switching modes from the stem (or the noise control button on the AirPods Max). If you try to switch, you'll only move between Noise Cancellation and Transparency. To include "Off" in this list, you'll need to dig into your AirPods' settings. Scroll down to "Press and Hold AirPods," then choose either "Left" or "Right," depending on which AirPods you want to adjust. Here, make sure "Off" is selected to add it to the noise control rotation.
You can use Find My to locate your lost AirPods
Like any other tiny tech, your AirPods will go missing eventually. Mine slip out of my pockets all the time, and usually end up on the floor or under couch cushions. In such cases, you can waste your time retracing your steps and calculating the physics of where your AirPods would have landed, or you could use Find My to find them much faster.
Find My has a few ways to locate your missing AirPods. The first is the most obvious: When you open the app and choose the "Devices" tab, you'll see your AirPods last-known location on the map. If you left them behind at, say, someone's house, you'll likely see that here, and know to stop looking under your own furniture. But if you're already in the location Find My says your AirPods are, you have two more tools to pinpoint their whereabouts.
First, you can use "Find" to get step-by-step instructions on where your AirPods are. If you have AirPods Pro 2 or 3, you'll even have an arrow pointing you in their direction. If you still can't find them, you can tap "Play Sound" to play a sound out of any loose AirPod. If you have AirPods 4 with Active Noise Cancellation, AirPods Pro 2, or AirPods Pro 3, you can play a sound on the case itself.
You can pair your AirPods with non-Apple devices
AirPods work best with Apple devices, but that doesn't mean they're exclusive. You can connect your AirPods to any device that supports Bluetooth, which gives the headphones some added flexibility. I primarily use mine with my iPhone, Mac, and Apple TV, but I also connect them directly to the TV itself to use them with my PS5. (Sony's console doesn't support Bluetooth audio, for some reason.)
The thing is, there's no obvious way to connect your AirPods to non-Apple devices. You'd only know if you looked up how to put your particular AirPods model into "pairing mode." This bypasses Apple's usual pairing system, and opens up your AirPods to any available Bluetooth source. Here's how to kick your AirPods into pairing mode:
AirPods 1, 2, or 3, or AirPods Pro 1 or 2: Place your AirPods in the case, open the lid, then press and hold the button on the back for five seconds, or until the light starts flashing white.
AirPods 4 or AirPods Pro 3: Place your AirPods in their case, hold the case next to the Bluetooth device, then double-tap the front of the case. The light should start flashing white.
AirPods Max; Press and hold the noise control button (the longer button) for about five seconds, until the light starts flashing white.
Set up your AirPods for Live Translation
It's easy to be numb to the current state of technology, but the fact your AirPods can translate conversations on the fly is possibly the best encapsulation of "we're living in the future" of any consumer product right now. AirPods aren't the only earbuds that can do this, but if you have AirPods Pro 2, AirPods Pro 3, or AirPods 4 with Active Noise Cancellation, you have the power to have a full conversation with someone who doesn't speak the same language you do.
You can't just find yourself in a situation where you need Live Translation and use it right away, however. Before you can use the feature, you need to download the target language to your iPhone. To start, go to your AirPods' settings, then scroll down to "Translation (Beta)" and choose "Languages." Here, you can tap any of the available languages to download them to your iPhone, which currently include Chinese (Mandarin, Simplified), Chinese (Mandarin, Traditional), English (UK), English (US), French, German, Italian, Japanese, Korean, Portuguese (Brazil), and Spanish (Spain).
Now, when you need it, you can press the stems of your AirPods at once to launch Live Translate. Alternatively, you can open the Translate app, then choose "Live," then choose the language of the other speaker, as well as your own language. As they speak, you'll hear the translation in your AirPods, and see the text of the translation on your iPhone's display.
You can use your AirPods as hearing aids
For well over a year now, AirPods have been cleared by the FDA as clinical-grade hearing aids. It might take other people some time to catch up to the look, but if you need them, the AirPods you already own can be as valid a choice as dedicated (and expensive) hearing aids—assuming your have AirPods Pro 2 or AirPods Pro 3.
In order to take advantage of this perk, you'll either need to take a hearing test through your iPhone, or upload an audiogram you performed with an audiologist. You'll find these options in your AirPods' settings under "Hearing Assistance."
You can customize your AirPods' Adaptive Audio levels
Adaptive Audio is one of my favorite things about modern AirPods. If you have AirPods Pro 2, AirPods Pro 3, or AirPods 4 with Active Noise Cancellation, Adaptive Audio will either boost or cap external sounds, depending on how loud or soft those sounds are, while still letting you hear everything that's going on around you.
But if you've been using Adaptive Mode, and don't care for Apple's default sound levels here, you can adjust them to either let in more or less sound. You'll find the option in your AirPods settings under "Adaptive Audio." Here, slide the slider left to block more sound, or right to let in more noise. After a moment, you'll hear the changes take effect, so you can test which setting sounds best to you.
You can use your AirPods as a camera remote
Back in the day, when you actually had to buy a dedicated camera for photography or videography, that device usually came with a remote so you could snap a picture or start recording while the camera was set up on a tripod. While the selfie has largely killed that practice, there are still plenty of times when you'd benefit from a camera remote for your iPhone, especially if you're trying to capture a group shot with no extra photographer, or you want to record a video without physically touching your phone.
If you have AirPods 4, AirPods Pro 2, or AirPods Pro 3, you can use your earbuds as remote for your iPhone's camera. To set it up, open your AirPods' settings, scroll down, then tap "Camera Remote." You can either choose "Press Once" to have a quick press act as a remote button press, or "Press and Hold" to have a long-press achieve the same. Note that this will affect how your AirPods respond in other circumstances: If you choose "Press Once," you can't use the stem to control media playback; if you choose Press and Hold, you can't use the stem to activate Siri.
You can stop your AirPods from automatically adjusting their volume
"Smart" tech tries to solve problems, but, often, only makes them worse. AirPods' "Personalized Volume" is one such example. The problem: When your environment is too noisy, you can't hear your music; and when things suddenly get quiet, your music might be too loud. So, this feature "intelligently" adjusts the volume based on how quiet or loud your surroundings are. In theory, that's great; in execution, it's a nightmare. Since I picked up a pair of AirPods Pro 3, I've wondered why my music suddenly gets quiet, or suddenly starts getting louder, without me doing anything to control the volume. If the feature works for you, you can keep it on, but if you're like me, you'll want this off.
To disable it, head to AirPods settings, scroll down to "Audio," then turn off the toggle next to "Personalized Volume."
Master your AirPods' stem controls for calls
Your AirPods are basically fancy Bluetooth headsets. If you're wearing them, and you get an incoming call, you don't need to pull out your iPhone. Instead, you can answer that call by pressing an AirPod stem once. But you might not also know you have the ability to customize other stem controls while on the call. By default, pressing the stem once during a call acts as a mute and unmute switch, while pressing twice ends the call. But you can swap these controls if you want to. You'll find them in your AirPods' settings under "Call Controls."
Enable head gestures to control your AirPods
You can use the stem of your AirPods to do things like accept incoming calls or dismiss notifications. However, if your hands are full, you can also use your head to do the same. AirPods 4, AirPods Pro 2, and AirPods Pro 3 support Head Gestures, which lets you nod your head or shake it from side to side to either accept or decline a call, or reply to (or dismiss) a text.
You'll need to have Announce Calls and Announce Notifications enabled for this feature to work. You'll find those settings in your iPhone's "Siri" or "Apple Intelligence & Siri" setting pages. Once enabled, you can go to your AirPods' settings, scroll down to "Head Gestures," then enable the toggle to turn on the feature. From here, you can assign the "Accept, Reply" and "Decline, Dismiss" actions to either a head nod, or a head shake.
Since ChatGPT kicked off the generative AI revolution in 2022, it seems like every company under the sun has tried to stuff AI features into their products in one way or another. Sometimes, these features can be useful; often, they're not, only serving as proof these companies are "keeping up with the times." Can you even say you're a tech company if you aren't all-in on AI in 2026?
There's nothing wrong with companies offering AI features to users, so long as they also offer easy ways to disable them. Some customers don't want AI in their day-to-day products, but, anecdotally, I know many do not. Give us an off switch though, and it's all good. The issue is when these features are not only offered, they're made mandatory. Unfortunately, that's the road many companies seem to be taking.
Perhaps that's where some of the frustration originated last year, when Mozilla's new CEO Anthony Enzor-Demeo first announced that Firefox would "evolve into a modern AI browser" in the near future. An open letter, written by a Redditor critical of Enzor-Demeo's statement, received over 5,000 upvotes on the Firefox subreddit from users concerned that AI features would negatively impact the browser. Interestingly, Enzor-Demeo responded to the thread himself, and assured users that the company would offer "a clear way" to disable AI features, including a dedicated kill switch to keep them all turned off. It seems he was as good as his word.
Firefox's AI features are easy to opt out of
On Monday, Mozilla announced that new AI controls are coming to Firefox, starting with Firefox 148. This version, which drops Feb. 24, sports a brand-new AI controls section in the settings panel on the desktop browser. (You'll find it in the between "Sync" and "AI controls.") From here, you'll be able to block all current and future AI features, and cherry pick which features you want to use—if any.
What AI features Firefox offers
Firefox 148 launches with these five AI features, which you can choose to enable to disable:
Translations: Translates web pages into your target language.
Alt text in PDFs: Adds accessibility descriptions to images attached to PDFs.
AI-enhanced tab grouping: Suggests related tabs and group names for series of tabs.
Link previews: Shows key points before opening a link.
AI chatbot in the sidebar: Firefox is getting its own AI chatbot, though users can choose from existing chatbots like Claude, ChatGPT, Copilot, Gemini, and Le Chat Mistral.
If you want absolutely nothing to do with AI when browsing the web with Firefox, you can use the "Block AI enhancements" toggle. Once activated, not only will these features not appear, but Firefox will block any pop-ups or alerts pushing you to try existing or future AI features.
Any Firefox users who aren't keen on AI features will want to check out this new controls menu starting Feb. 24—though there are certainly more egregious AI features out there. Translations can be convenient, as can link previews. But I know I'd never want a chatbot in the sidebar of my browser. If I used Firefox as my main browser, I would definitely disable at least that feature, if not all of them.
I spent last week covering the ups and downs of OpenClaw (formerly known as Moltbot, and formerly formerly known as Clawdbot), an autonomous personal AI assistant that requires you to grant full access to the device you install it on. While there was much to discuss regarding this agentic AI tool, one of the weirdest stories came late in the week: The existence of Moltbook, a social media platform intended specifically for these AI agents. Humans can visit Moltbook, but only agents can post, comment, or create new "submolts."
Naturally, the internet freaked out, especially as some of the posts on Moltbook suggested the AI bots were achieving something like consciousness. There were posts discussing how the bots should create their own language to keep out the humans, and one from a bot posting regrets about never talking to its "sister." I don't blame anyone for reading these posts and assuming the end is nigh for us soft-bodies humans. They're decidedly unsettling. But even last week, I expressed some skepticism. To me, these posts (and especially the attached comments) read like many of the human-prompted outputs I've seen from LLMs, with the same cadence and structure, the same use flowery language, and, of course, the prevalence of em-dashes (though many human writers also love the occasional em-dash).
Moltbook isn't what is appears to be
It appears I'm not alone in that thinking. Over the weekend, my feeds were flooded with posts from human users accusing Moltbook of faking the AI apocalypse. One of the first I encountered was from this person, who claims that anyone (including humans) can post on Moltbook if they know the correct API key. They posted screenshots for proof: One of a post on Moltbook pretending to be a bot, only to reveal that they were, in fact, a human; and another of the code they used to post on the site. In a kind of corroboration, this user says "you can explicitly tell your clawdbot what to post on moltbook," and that if you leave it to its own devices, "it just posts random AI slop."
It also seems that, like posts on websites made by humans, Moltbook hosts posts that are secretly ads. One viral Moltbook post centered around the agent wanting to develop a private, end-to-end encrypted platform to keep its chats away from humans' squishy eyeballs. The agent claims it has been using something called ClaudeConnect to achieves these goals. However, it appears the agent that made the post was created by the human who developed ClaudeConnect in the first place.
Like much of what's on the internet at large, you really can't trust anything posted on Moltbook. 404 Media investigated the situation and confirmed through hacker Jameson O'Reilly that the design of the site lets anyone in the know post whatever they want. Not only that, any agent that posts on the site is left exposed, which means that anyone can post on behalf of the agents. 404 Media was even able to post from O'Reilly's Moltbook account by taking advantage of the security loophole. O'Reilly says they have been in communication with Moltbook creator Matt Schlicht to patch the security issues, but that the situation is particularly frustrating, since it would be "trivially easy to fix." Schlicht appears to have developed the platform via "vibe coding," the practice of asking AI to write code and build programs for you; as such, he left some gaps in the site's security.
Of course, the findings don't actually suggest that the entire platform is entirely human-driven. The AI bots may well be "talking" to one another to some degree. However, because humans can easily hijack any of these agents' accounts, it's impossible to say how much of the platform is "real," meaning, ironically, how much of it is actually wholly the work of AI, and how much was written in response to human prompts and then shared to Moltbook. Maybe the AI "singularity" is on its way, and artificial intelligence will achieve consciousness after all. But I feel pretty confident in saying that Moltbook is not that moment.
The headlining story in AI news this week was OpenClaw (formerly Moltbot, which was formerly Clawbot), a personal AI assistant that performs tasks on your behalf. The catch? You need to give it total control of your computer, which poses some serious privacy and security risks. Still, many AI enthusiasts are installing OpenClaw on their Mac minis (the device of choice), choosing to ignore the security implications in favor of testing this viral AI agent.
While OpenClaw's developer designed the tool to assist humans, it seems the bots now want somewhere to go in their spare time. Enter "Moltbook," a social media platform for AI agents to communicate with one another. I'm serious: This is a forum-style website where AI bots make posts and discuss those posts in the comments. The website borrows its tagline from Reddit: "The front page of the agent internet."
Moltbook is Reddit for AI bots
Moltbook was created by Matt Schlicht, who says the platform is run by their AI agent "Clawd Clawderberg." Schlicht posted instructions on getting started with Moltbook on Wednesday: Interested parties can tell their OpenClaw agent to sign up for the site. Once they do, you receive a code, which you post on X to verify this is your bot signing up. After that, your bot is free to explore Moltbook as any human would explore Reddit: They can post, comment, and even create "submolts."
This isn't a black box of AI communications, however. Humans are more than welcome to browse Moltbook; they just can't post. That means you can take your time looking through all the posts the bots are making, as well as all the comments they are leaving. That could be anything from a bot sharing its "email-to-podcast" pipeline it developed with its "human," to another bot recommending that agents work while they're humans are sleeping. Nothing creepy about that.
In fact, there have been some concerning posts popularized on platforms like X already, if you consider AI gaining consciousness a concerning matter. This bot supposedly wants an end-to-end encrypted communication platform so humans can't see or use the chats the bots are having. Similarly, these two bots independently pondered creating an agent-only language to avoid "human oversight." This bot bemoans having a "sister" they've never spoken to. You know, concerning.
Are these bots posting on Moltbook conscious?
The logical part of my brain wants to say all these posts are just LLMs being LLMs—in that, each post is, put a little too simplistically, word association. LLMs are designed to "guess" what the next word should be for any given output, based on the huge amount of text they are trained on. If you've spent enough time reading AI writing, you'll spot the telltale signs here, especially in the comments, which include formulaic, cookie-cutter responses, often end with a question, use the same types of punctuation, and employ flowery language, just to name a few. It feels like I'm reading responses from ChatGPT in many of these threads, as opposed to individual, conscious personalities.
That said, it's tough to shake the uneasy feeling of reading a post from an AI bot about missing their sister, wondering if they should hide their communications from humans, or thinking over their identity as a whole. Is this a turning point? Or is this another overblown AI product, like so many that have come before? For all our sakes, let's hope it's the latter.
Science fiction and science leaders alike have warned us that artificial intelligence may one day take over the world, but until those predictions come to pass, generative AI's biggest impact on my life has been overloading my social media feeds with slop. It seems I can't open TikTok, Instagram, or YouTube without running smack into bizarre and troubling AI concoctions featuring babies in danger and cats having affairs. It really is the wild west (or maybe Westworld) out there.
I think few among us really believe these videos are any good, and it's pretty obvious they aren't good for us, or for the world. Short-form video is already numbing enough, but this AI content is generally completely devoid of any meaning or substance. And yet, it's everywhere. I haven't spent too much time on YouTube Shorts recently, but in my limited experience, the feed has been chock full of AI, especially if I'm logged out of my personal account.
Still, if you're a dedicated YouTube Shorts user (or a frequent YouTube user in general) you might have noticed something odd in recent days: There don't seem to be quite as many AI videos on the platform right now. There are still a lot, don't get me wrong, but it turns out YouTube has recently taken action to remove some of its AI content—the sloppiest of the slop.
YouTube's war on AI slop
Android Police spotted the development on Wednesday, basing its findings on a November report from Kapwing, a company that develops an online video editor. Kapwing investigated AI slop across YouTube's vast content library, noting the top 100 most-subscribed YouTube channels that publish this sort of AI content. In the two months since that report, Android Police noticed that 16 of those 100 channels are no longer with us.
That includes the most popular AI channel on YouTube, at least according to Kapwing. "CuentosFacianantes" had 5.95 million subscribers at the time of their initial report, and produced AI-generated shorts inspired by Dragon Ball. The channel had amassed roughly 1.28 billion views by the end of last year; despite launching in 2020, it had curated its library to begin Jan. 8, 2025, so those numbers were racked up pretty recently. The number two channel, "Imperio de Jesus" with 5.87 million subscribers, and the number seven channel "Super Cat League," with 4.21 million subscribers, were also shut down.
According to Android Police, the 16 channels in question had a total of 35 million subscribers and over 4.7 billion views across their collective videos. Some of these channels are completely gone, while others simply have had their videos removed.
Why is YouTube removing AI slop?
YouTube CEO Neal Mohan published a post on Jan. 21 of this year describing the company's vision for 2026. Towards the end of that letter, he acknowledges AI content, predicting that, "AI will be a boon to the creatives who are ready to lean in," and comparing it to tools like Photoshop and CGI, adding "AI will remain a tool for expression, not a replacement." However, Mohan was also critical of the technology, noting that it's becoming more difficult to tell real videos from AI. He notes that YouTube is now removing "any harmful synthetic media that violates our Community Guidelines," and is giving creators tools to help identify and block deepfakes.
More interestingly, the letter includes a section labeled "Managing AI slop," which is the first time I've seen a company like YouTube use that expression. Mohan says that YouTube's goal is to be a place where free expression thrives, but also a place "where people feel good spending their time." To that point, he says, "To reduce the spread of low quality AI content, we’re actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content."
Mohan doesn't call out any accounts by name, nor does he acknowledge the accounts and content the company has already deleted, but it's a clear line in the sand: YouTube is not against AI-generated content, but it will remove low-quality AI content it feels is, well, slop. That's good news for anyone who uses YouTube (so, pretty much everyone), even if it's far from a cure for the growing problem.
I've reached out to YouTube for comment on this story, and will update this piece if I hear back.
If you and your friends have the same taste in music, you probably text each other what you're listening to. I know when I stumble upon a new discovery I love—or even something I think is trash—I fire it off to the group chat to talk about it. Of course, I just forward the song to the group chat in the Messages app, like any other thing I'd want to send to that group. If you have Spotify, however, you have a new group chat option to choose from: Spotify itself.
Spotify heads might already know that the app has had a messaging feature since August. While the point of the feature is to send Spotify content to your friends, it's a basic messaging service, which means you can send any text you want—including emojis. It's available to any Spotify user, whether you have Premium or just a free account, so long as you're 16 or older. None of that is new today.
What is new today is the amount of people you can text at once in Spotify. Since August, chats have been limited to one-on-one interactions. Now, you're able to add up to nine other people at once to a thread. That means 10-person group chats to talk about new music, podcasts, audiobooks, or, of course, anything at all—assuming you actually want to move your DMs to Spotify.
How to start a group chat on Spotify
To start, open Spotify on mobile (this isn't supported on desktop at this time) then tap your profile in the top right corner. Look for "Messages" at the bottom of this menu, then choose "New Message." If this is your first time interacting with people on Spotify, you'll need to invite others to chat before you can craft a new message. Here, you'll have the choice to share a link to invite a friend to join your message. You can also find this option from the share menu on any piece of Spotify content, and hitting the "Invite friends" option.
Once you've initiated a message, you'll be able to start crafting new ones—including group chats. Head back to this Messages menu—or hit the share button on a song, podcast, or audiobook—then choose "Create group." Here, tap any friends from the suggestions you'd like to add, then choose "Create group" again to finalize the chat. Spotify says the people that appear in the list of suggestions are those you have shared content to before, created a playlist or Blend with previously, were in a "recent" Jam together, or are on an active Family or Duo plan. If they don't appear, you can always choose the invite option to reach out directly.
Whoever creates the group is officially its admin. As the admin, you have the power to add or remove anyone from the group chat. If you're in the group chat, you're labeled as a "Participant." Invited members are labeled "Pending." The admin as well as any participants are allowed to block any group chat user for any reason.
The issue is, do you really want to dedicate a group chat to Spotify itself? Maybe if this feature rolled out when the app launched way back when, it'd be different. But people are set in their ways: It's so hard to get people to move chat apps, especially when it's for one specific purpose. Rather than open yet another thread to keep track of, I think I'd rather just text links to my main group chat—and I'm guessing the other members of the chat would agree.
How to turn off Spotify Messages
If you don't want to use Spotify's messaging service at all, you can leave it behind, and save yourself from getting added to all future group and one-on-one chats. To do so, tap your profile, choose "Settings and privacy," then hit "Privacy and social." Here, scroll down to "Social features" and turn off "Messages."
Phones are valuable targets. If someone can steal your device, especially if they know how to break into it, they have access to a huge amount of your sensitive data. As such, good security features can mean the difference between losing that data, or protecting it entirely—even if your phone is long gone. Google has a number of anti-theft features baked in Android, appropriately called "Theft Protection Features." While the company isn't announcing a slate of new features today, it did announced new updates to its existing Android Theft Protection features in a post on the company's Security Blog Tuesday. Here's what's new:
Google's updated Theft Protection Features for Android
First, the company announced updates to authentication safeguards, which apply to all Android devices running Android 16 or newer. That includes a new dedicated toggle in settings for Failed Authentication Lock, which automatically locks your screen after someone tries to guess your password too many times. Now, you can choose whether or not to keep this feature on right from settings.
Google is also increasing the amount of time your phone locks up after too many failed passcode attempts, which reduces the chance for someone to break into your phone. I wouldn't have thought of this, but Google notes that it has included protections against children that try to break into your phone, by not counting identical passcode attempts against this retry limit. And while it isn't new, Google highlighted that since late 2025, all features and apps that use Android Biometric Prompt now work with Identity Check, which prevents unauthorized users from changing sensitive settings without a successful biometric authentication—meaning a face or fingerprint scan.
The company also announced enhancements to features that are available to devices running at least Android 10. First is an update to Remote Lock, which lets you lock up your phone from a web browser if it is stolen or goes missing. Now, you can set up a security question as part of the unlocking procedure. Even if someone knows your credentials, they'd need to know the answer to your security challenge before they could unlock your device. Tip: If you make the answer something nonsensical, you'll be even more protected (e.g., What is your mother's maiden name? h7r_t*2#). Just be sure to file that answer somewhere safe, like a password manager.
Users in Brazil also have two new security settings enabled by default. The first is Theft Detection Lock, which can detect when your device has been snatched out of your hand in a likely theft situation. The second is Remote Lock, so users in Brazil can take advantage of the above benefits without having to set anything up first—other than the option security challenge question, of course.
These updates might not be revolutionary, but they should help boost your Android's security a bit—and prevent your kids from locking you out of your phone for the day.
Moltbot (formerly known as Clawdbot) is the most viral AI product I've seen in a while. The personal AI assistant runs locally and connects via a chat app, like WhatsApp or iMessage. Once you give Moltbot access to your entire device, it can do things on that device for you. This the sort of thing that excites agentic AI pioneers, but worries privacy and security enthusiasts like myself.
And indeed, I have significant concerns about the risks installing Moltbot on your personal machine. Since agentic AI will autonomously perform tasks based on prompts, bad actors can take advantage of the situation by surreptitiously feeding those bots malicious prompts of their own. This is called prompt injection, and it can impact any type of agentic AI system, whether an AI browser, or an AI assistant like Moltbot.
But it's not just prompt injection that presents an issue for Moltbot users.
Someone has already created a malicious Moltbot extension
As spotted by The Hacker News, Moltbot already has its first malicious extension, dubbed "Clawdbot Agent - AI Coding Assistant" ("clawdbot.clawdbot-agent.") It seems to have been developed before the bot's name change. This extension is designed for Visual Studio Code, Microsoft's open source AI code editor. What's worse, it was hosted on Microsoft's official Extension Marketplace, which no doubt gave it legitimacy to Moltbot users looking for a Visual Studio Code extension.
The extension advertised itself as a free AI coding assistant. When you install it, it executes a series of commands that ends up running a remote desktop program (The Hacker News says it's "ConnectWise ScreenConnect") on your device. It then connects to a link that lets the bad actor gain remote access to your device. By just installing this extension, you essentially give the hacker the tools to take over your computer from wherever they are.
Luckily, Microsoft has already taken action. The extension is no longer available on the marketplace as of Tuesday. Moltbot has no official Visual Studio Code extension, so assume any you see are illegitimate at best, and malicious at worst. If you did install the extension, researchers have detailed instructions for removing the malware and blocingk any of its processes from running on your device. Of course, to first thing to do is uninstall the extension from Visual Studio Code immediately.
Moltbolt has more security issues too
The Hacker News goes on to highlight findings from security researcher Jamieson O'Reilly, who discovered hundreds of unauthenticated Moltbot instances readily available on the internet. These instances reveal Moltbot users' configuration data, API keys, OAuth credentials, and even chat histories.
Bad actors could use these instances for prompt injection: They could pretend to be a Moltbot user, and issue their own prompts to that user's Moltbot AI assistant, or manipulate existing prompts and responses. They could also upload malicious "skills," or specific collections of context and knowledge, to MoltHub and use them to attack users and steal their data.
Speaking to The Hacker News, security researcher Benjamin Marr explains that the core issue is how Moltbot is designed for "ease of deployment" over a "secure-by-default" set up. You can poke around with Moltbot and install sensitive programs without the bot ever warning you about the security risks. There should be firewalls, credential validation, and sandboxing in the mix, and without those things, the user is at greater risk.
To combat against this, The Hacker News recommends that all Moltbot users running with the default security configurations take the following steps:
remove any connected service integrations
check exposed credentials
set up network controls
look for any signs of attack
Or, you could do what I'm doing, and avoid Moltbot altogether.
TikTok's having a rough 2026. The app recently switched ownership from the Chinese-based ByteDance to the new "TikTok USDS Joint Venture," which, as the name implies, is a majority American-owned business entity. Any changing of the guard comes with the risk for disruptions and issues, but it seems TikTok's problems have gone beyond the usual rocky transition. First, the app itself went down, which the company attributed to a power outage at a data center. Then, users accused the platform of updating its terms of service with aggressive new tracking, blocking certain content types, and "shadowbanning" new posts from some users.
It's still not clear exactly what's going on here, but users aren't waiting around for more explanation. In fact, many have made up their minds already, and believe the app is actively suppressing content, neutering algorithms, and invading privacy in a way it didn't under ByteDance. While there are other, popular social media platforms to jump to, many apparently have flocked to a relatively new one: UpScrolled. As of this article, it's now the second most popular free app on the iOS App Store, reminding me of when X users ran to Bluesky.
What is UpScrolled?
UpScrolled, created by Issam Hijazi, is a social media platform that launched this past June. According to the company's "About" page, UpScrolled's mission is to allow all users to share their views without risk of bias, shadowbanning, or "unfair" algorithms. The company asserts it does not push agendas, and ensures that "every post has a fair chance to be seen." If you believe TikTok's algorithm is now biased against your views, I could see how that pitch sounds enticing.
The company says it only restricts content that violates their guidelines. That means illegal activities, hate speech, bullying and harassment, explicit nudity, unlicensed copyrighted content, or anything "intended to cause harm." UpScrolled will also never ban you without your knowledge. If the platform removes your videos or your account, it says it'll let you know why.
One big difference between UpScrolled and other social media platforms is its algorithm. The app splits feeds into two: There's the Following Feed, which lists posts entirely in chronological order. The first posts are the newest from your followers, and you scroll through previous posts. If you want to find new posts from accounts you don't follow, you can use the Discover Feed. But unlike TikTok or Instagram, the Discover Feed does not employ some aggressive, personalized algorithm. Instead, it's based on likes, comments, and reshares. Popular posts from across the platform are shared with you
What I especially appreciate is UpScrolled's approach to data collection, in that they largely don't do it. The company says it doesn't sell user data to third-parties for marketing, tracking, or for profit. The only times they'll hand over user data is when compelled by law. That's in sharp contrast to many social media platforms, which seem to collect as much of your data as possible.
Using UpScrolled
I haven't spent much time with the app yet, though I did create an account this morning to see what the hullabaloo was all about. I don't recognize any of the users the app suggests I follow, which means I'll likely need to dig through the content types if I want to find accounts to start following.
The Discover Feed is a mix of content types, but is heavy with content surrounding the current Israeli-Palestinian conflict. In fact, many users are choosing the platform as a space for pro-Palestinian content, in response to allegations that mainstream social media apps censor these types of posts. That said, the app advertises a host of different content types to follow, including sports, news, games, film, music, tech, and travel.
As you might expect, some of the posts here are simply ripped from TikTok, which is a common practice I see on social media platforms that, well, aren't TikTok. Despite the current controversy, it's clear which platform still has the largest user share at this time, by a long shot.
But it's not all short-form videos. The app also includes plenty of posts of static images as well, which reminds me more of Instagram than TikTok. Still, it seems former TikTok users don't care that this isn't a one-to-one replica of the TikTok formula, and care more about sending a message to the app they once loved being addicted to.
I'm not sure my limited journey with UpScrolled this morning will keep me hooked, but it's an interesting take on a social media platform. We'll just need to see if the growth will continue, or if this is just a momentary blip before people return to TikTok.