
When It’s OK to Use AI at Work (and When It’s Not)

18 June 2024 at 08:30

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

Almost as soon as ChatGPT launched in late 2022, the world started talking about how and when to use it. Is it ethical to use generative AI at work? Is that “cheating”? Or are we simply witnessing the next big technological innovation, one that everyone will have to embrace or else fall behind, dragging their feet?

AI is now a part of work, whether you like it or not

AI, like anything else, is a tool first and foremost, and tools help us get more done than we can on our own. (My job would literally not be possible without my computer.) In that regard, there’s nothing wrong, in theory, with using AI to be more productive. In fact, some work apps have fully embraced the AI bandwagon. Just look at Microsoft: The company basically defined what “computing at work” means, and it’s adding AI functionality directly into its products.

Since last year, the entire Microsoft 365 suite—including Word, PowerPoint, Excel, Teams, and more—has adopted “Copilot,” the company’s AI assistant. Think of it like Clippy from back in the day, only now way more useful. In Teams, you can ask the bot to summarize your meeting notes; in Word, you can ask the AI to draft a work proposal based on your bullet list, then ask it to tighten up specific paragraphs you aren’t thrilled with; in Excel, you can ask Copilot to analyze and model your data; in PowerPoint, you can ask for an entire slideshow to be created for you based on a prompt.

These tools don’t just exist: They’re being actively created by the companies that make our work products, and their use is encouraged. It reminds me of how Microsoft advertised Excel itself back in 1990: The ad presents spreadsheets as time-consuming, rigid, and featureless, but with Excel, you can create a working presentation in an elevator ride. We don’t see that as “cheating” work: This is work.

Intelligently relying on AI is the same thing: Just as 1990's Excel extrapolates data into cells you didn’t create yourself, 2023's Excel will answer questions you have about your data, and will execute commands you give it in normal language, rather than formulas and functions. It’s a tool.

What work shouldn’t you use AI for?

Of course, there’s still an ethical line you can cross here. Tools can be used to make work better, but they can also be used to cheat. If you use the internet to hire someone else to do your job, then pass that work off as your own, that’s not using the tool to do your work better. That’s wrong. If you simply ask Copilot or ChatGPT to do your job for you in its entirety, same deal.

You also have to consider your own company’s guidelines when it comes to AI and the use of outside technology. It’s possible your organization has already established these rules, given AI’s prominence over the past year and a half or so: Maybe your company is giving you the green light to use AI tools within reason. If so, great! But if your company decides you can’t use AI for any purpose as far as work is concerned, you might want to log out of ChatGPT during business hours.

But, let’s be real: Your company probably isn’t going to know whether or not you use AI tools if you’re using them responsibly. The bigger issue here is privacy and confidentiality, and it’s something not enough people think about when using AI in general.

In brief, generative AI tools work because they are trained on huge sets of data. But AI is far from perfect, and the more data the system has to work with, the more it can improve. You train AI systems with every prompt you give them, unless the service allows you to specifically opt out of this training. When you ask Copilot for help writing an email, it takes in the entire exchange, from how you reacted to its responses, to the contents of the email itself.

As such, it’s a good rule of thumb to never give confidential or sensitive information to AI. An easy way to avoid trouble is to treat AI like you would your work email: Only share information with something like ChatGPT that you’d be comfortable emailing to a colleague. After all, your emails could very well be made public someday: Would you be OK with the world seeing what you said? If so, you should be fine sharing with AI. If not, keep it away from the robots.

If the service offers you the choice, opt out of this training. By doing so, your interactions with the AI will not be used to improve the service, and your previous chats will likely be deleted from the servers after a set period of time. Even so, always refrain from sharing private or corporate data with an AI chatbot: If the developer keeps more data than we realize, and they're ever hacked, you could put your work data in a precarious place.

Four Ways to Build AI Tools Without Knowing How to Code

18 June 2024 at 08:00

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

There’s a lot of talk about how AI is going to change your life. But unless you know how to code and are deeply aware of the latest advancements in AI tech, you likely assume you have no part to play here. (I know I did.) But as it turns out, there are companies out there designing programs to help you build AI tools without needing a lick of code.

What is the no-code movement?

The idea behind “no-code” is simple: Everyone should be able to build programs, tools, and other digital services, regardless of their level of coding experience. While some take a “low-code” approach, which still requires some coding knowledge, the services on this list are strictly “no-code.” Specifically, they’re no-code solutions for building AI tools.

You don’t need to be a computer scientist to build your own AI tools. You don’t even need to know how to code. You can train a neural network to identify a specific type of plant, or build a simple chatbot to help customers solve issues on your website.

That being said, keep your expectations in check here: The best AI tools are going to require extensive knowledge of both computer science and coding. But it’s good to know there are utilities out there ready to help you build practical AI tools from scratch, without needing to know much about coding (or tech) in the first place.

Train simple machine-learning models for free with Lobe

If training a machine learning model sounds like something reserved for AI experts, think again. While it’s true that machine learning is a complicated practice, there’s a way to build your own model for free with as few tools as a laptop and a webcam.

That’s thanks to a program called Lobe: The free app, owned by Microsoft, makes it easy to build your own machine learning model to recognize whatever you want. Need your app to differentiate between colors? You can train it to do that. Want to make a program that can identify different types of plants? Train away.

You can see from the example video that you can train a model to identify when someone is drinking from a cup in only a few minutes. While you can include any images you may have previously taken, you can also simply snap some photos of you drinking from a cup from your webcam. Once you take enough sample photos of you drinking and not drinking, you can use those photos to train the model.

You can then test the model to see how well (or not) it can predict if you’re drinking from a cup. In this example, it does a great job whenever it sees the cup in hand, but it incorrectly identifies holding a hand to your face as drinking as well. You can use feedback buttons to tell the model when it gets something wrong, so it can quickly retrain itself based on this information and hopefully make more accurate predictions going forward.
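
If you’re curious what a tool like Lobe is actually automating, here’s a rough sketch. This isn’t Lobe’s code, and the folder names are made up for the example; it simply fine-tunes a small pretrained image classifier (here, with PyTorch) on two folders of webcam photos, which is the same basic recipe.

```python
# A minimal sketch of what Lobe automates behind the scenes (not Lobe's actual code).
# Assumes hypothetical folders photos/drinking and photos/not_drinking full of webcam shots.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("photos", transform=transform)   # one subfolder per label
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

model = models.mobilenet_v2(weights="DEFAULT")                # start from a pretrained network
model.classifier[1] = nn.Linear(model.last_channel, len(data.classes))  # swap in a 2-class head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                                        # a few passes over your sample photos
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```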

Google also has a similar tool for training simple machine-learning models called Teachable Machine, if you’d like to compare its offering to Microsoft’s.

Build your own AI chatbot with Juji Studio

AI chatbots are all the rage lately. ChatGPT, of course, kicked off the modern AI craze with its accessible yet powerful chat features, but everything from Facebook Messenger to healthcare sites has used chatbots for years. While OpenAI built ChatGPT with years of expertise, you can make your own chatbot without typing a single line of code.

Juji Studio wants to make building a light version of ChatGPT, in the company’s words, as easy as making PowerPoint slides. The program gives you the tools to build a working chatbot you can implement into your site or Facebook Messenger. That includes controlling the flow of the chatbot, adjusting its personality, and feeding it a Q&A list so it can accurately answer specific questions users might have.

Juji lets you start with a blank canvas, or base your chatbot on one of its existing templates. Templates include customer service bots, job interview bots, teaching assistant bots, and bots that can issue user experience surveys. No matter what you choose, you’ll see the “brains” of your bot in a column on the left side of the screen.

It really does resemble PowerPoint slides: Each “slide” corresponds to a different task for the chatbot to follow. For example, with the customer service chatbot, you have an “invite user questions until done” slide, which is pre-programmed to listen to user questions until the user gives a “done” signal. You can go in and customize the prompts the chatbot will ask the user, such as asking for an account number or email address, or even more personal questions, like asking about a bad experience the user had, or the best part of their day.

You can, of course, customize the entire experience to your needs. You can build a bot that changes its approach based on whether the user responds positively or negatively to an opinion-based question.
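
To make the “slides as the bot’s brains” idea concrete, here’s a tiny, generic illustration. This is not Juji’s actual configuration format or API, just a sketch of the underlying pattern: each step asks a question and points to the next one, and the loop runs until the user says they’re done.

```python
# A hypothetical, slide-style chatbot flow -- not Juji's real format, just the idea.
flow = [
    {"ask": "Hi! What can I help you with today?", "next": 1},
    {"ask": "Could you share the email address on your account?", "next": 2},
    {"ask": "Anything else? (type 'done' when you're finished)", "next": 2},
]

def run_flow(flow):
    step = 0
    while True:
        answer = input(flow[step]["ask"] + " ").strip().lower()
        if answer == "done":
            print("Thanks! Passing your request along to the team.")
            break
        step = flow[step]["next"]

if __name__ == "__main__":
    run_flow(flow)
```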

Build custom versions of Copilot or ChatGPT

Chatbots like Copilot and ChatGPT can be useful for a variety of tasks, but when you want to use AI for a specific function, you'll want to turn to GPTs. GPTs, not to be confused with OpenAI's GPT AI models, are custom chatbots that can be built to serve virtually any purpose. Best of all, there's no coding necessary. Instead, you simply tell the bot what you want, and the service walks you through the process to set up your GPT.

You can build a GPT that helps you learn a language, plans a meal and teaches you how to make it, or generates logos for different purposes. Really, whatever you want your chatbot to do, you can build a GPT to accomplish it. (Or, at least, create a chatbot that's more focused on your task than ChatGPT or Copilot in general.)

You can access Copilot GPTs if you subscribe to Copilot Pro. OpenAI used to lock its GPTs behind a subscription, but the company is making them free for all users. Plus, OpenAI lets users put their custom-built GPTs on the GPT Store. If you don't want to make your own, you can browse other users' creations and try them out for yourself.

Create anything you want with Bubble

For the ultimate no-code experience, you’ll want to use a tool like Bubble. You use an interface similar to something like Photoshop to build your app or service, dragging and dropping new UI elements and functions as necessary.

But while Bubble is a no-brainer for us code-illiterates to build things, it’s also integrated with AI. There are tons of AI applications you can include in your programs using Bubble: You can connect your builds to OpenAI products like GPT and DALL-E, while at the same time taking advantage of plugins made by other Bubble members. All of these tools allow you to build a useful AI program by yourself—something that uses the power of GPT without needing to know how it works in the first place.

One of the best ways to get started here is by taking advantage of OpenAI Playground. Playground is similar to ChatGPT, in that it’s based on OpenAI’s large language models, but it isn’t a chatbot. As such, you can use Playground to create different kinds of products and functions that you can then easily move to a Bubble project using the “View Code” button.
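
As a rough idea of what that “View Code” button hands you, the snippet below shows the general shape of a call to OpenAI’s Python library. The model name and prompts here are placeholders, not Playground’s exact output.

```python
# Roughly the shape of the code Playground exports; the model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads your OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder -- use whichever model you picked in Playground
    messages=[
        {"role": "system", "content": "You answer customer questions about shipping times."},
        {"role": "user", "content": "Where is my order?"},
    ],
)
print(response.choices[0].message.content)
```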

A Brief History of AI

18 June 2024 at 07:00

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

You wouldn’t be blamed for thinking AI really kicked off in the past couple of years. But AI has been a long time in the making, spanning most of the 20th century. It's difficult to pick up a phone or laptop today without seeing some type of AI feature, but that's only because of work going back nearly a hundred years.

AI’s conceptual beginnings

Of course, people have been wondering if we could make machines that think for as long as we’ve had machines. The modern concept came from Alan Turing, the mathematician renowned for his work deciphering Nazi Germany’s “unbreakable” code produced by its Enigma machine during World War II. As the New York Times highlights, Turing essentially predicted what the computer could—and would—become, imagining it as “one machine for all possible tasks.”

But it was what Turing wrote in “Computing Machinery and Intelligence” that changed things forever: The computer scientist posed the question, “Can machines think?” but also argued this framing was the wrong approach to take. Instead, he proposed a thought-experiment called “The Imitation Game.” Imagine you have three people: a man (A), a woman (B), and an interrogator, separated into three rooms. The interrogator’s goal is to determine which player is the man and which is the woman using only text-based communication. If both players were truthful in their answers, it’s not such a difficult task. But if one or both decides to lie, it becomes much more challenging.

But the point of the Imitation Game isn’t to test a human’s deduction ability. Rather, Turing asks you to imagine a machine taking the place of player A or B. Could the machine effectively trick the interrogator into thinking it was human?

Kick-starting the idea of neural networks

Turing was the most influential spark for the concept of AI, but it was Frank Rosenblatt who actually kick-started the technology’s practice, even if he never saw it come to fruition. Rosenblatt created the “Perceptron,” a computer modeled after how neurons work in the brain, with the ability to teach itself new skills. The computer had a single-layer neural network, which worked like this: You have the machine make a prediction about something—say, whether a punch card is marked on the left or the right. If the computer is wrong, it adjusts to be more accurate. Over thousands or even millions of attempts, it “learns” the right answers instead of having to predict them.

That design is based on neurons: You have an input, such as a piece of information you want the computer to recognize. The neuron takes the data and, based on its previous knowledge, produces a corresponding output. If that output is wrong, you tell the computer, and adjust the “weight” of the neuron to produce an outcome you hope is closer to the desired output. Over time, you find the right weight, and the computer will have successfully “learned.”
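
As a toy illustration of that learning rule (a sketch, not Rosenblatt’s actual machine), here’s a single-layer perceptron in a few lines of Python, learning the logical AND of two inputs.

```python
import numpy as np

# A toy single-layer perceptron: one weight per input, nudged whenever the guess is wrong.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred      # 0 if right, +1 or -1 if wrong
            w += lr * error * xi       # adjust the "weights" toward the desired output
            b += lr * error
    return w, b

# Teach it the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([int(xi @ w + b > 0) for xi in X])  # expect [0, 0, 0, 1]
```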

Unfortunately, despite some promising attempts, the Perceptron simply couldn’t follow through on Rosenblatt’s theories and claims, and interest in both it and the practice of artificial intelligence dried up. As we know today, however, Rosenblatt wasn’t wrong: His machine was just too simple. The perceptron’s neural network had only one layer, which isn’t enough to enable machine learning on any meaningful level.

Many layers make machine learning work

That’s what Geoffrey Hinton discovered in the 1980s: Where Turing posited the idea, and Rosenblatt created the first machines, Hinton pushed AI into its current iteration by theorizing that nature had already cracked neural network-based AI in the human brain. He and other researchers, like Yann LeCun and Yoshua Bengio, proved that neural networks built upon multiple layers and a huge number of connections can enable machine learning.

Through the 1990s and 2000s, researchers would slowly prove neural networks’ potential. LeCun, for example, created a neural net that could recognize handwritten characters. But it was still slow going: While the theories were right on the money, computers weren’t powerful enough to handle the amount of data necessary to see AI’s full potential. Moore’s Law finds a way, of course, and around 2012, both hardware and data sets had advanced to the point that machine learning took off: Suddenly, researchers could train neural nets to do things they never could before, and we started to see AI in action in everything from smart assistants to self-driving cars.

And then, in late 2022, ChatGPT blew up, showing professionals, enthusiasts, and the general public alike what AI could really do, and we’ve been on a wild ride ever since. We don’t know what the future of AI actually has in store: All we can do is look at how far the tech has come, what we can do with it now, and imagine where we go from here.

Living with AI

To that end, take a look through our collection of articles all about living with AI. We define AI terms you need to know, walk you through building AI tools without needing to know how to code, talk about how to use AI responsibly for work, and discuss the ethics of generating AI art.


Here's When Apple Plans to Roll Out Its Biggest Apple Intelligence Features

17 June 2024 at 15:30

Apple made a splash during last week's WWDC keynote when it announced Apple Intelligence. It's the company's official foray into the trendy AI features most tech companies have adopted already. While Apple Intelligence might have generated the most headlines over the past week, many of its main features will not be present when you update your iPhone, iPad, or Mac this fall.

According to Bloomberg's Mark Gurman, Apple is staggering the rollout of these highly anticipated AI features. A key reason is simply that these features aren't ready yet. Apple has been scrambling for over a year to implement generative AI features in its products, after the tech exploded in late 2022. (Thanks, ChatGPT.) Many of these features are quite involved, and will take more time to get right.

That said, Apple probably could release these features sooner and in larger batches if it wanted to, but there's a strategy here: By rolling out big AI features in limited numbers, Apple can root out any major issues before adding more AI to the mix (AI hallucinates, after all), and can continue to build up its cloud network without putting too much pressure on the system. It helps that the company is keeping these features to a specific, small pool of Apple devices: iPhone 15 Pro and 15 Pro Max (and likely the iPhone 16 line), as well as M-Series Macs and iPads.

Apple Intelligence in 2024

If you installed the iOS 18 or macOS 15 beta right now, you might think no Apple Intelligence features will be ready in the fall. That's because Apple is holding back these AI features from beta testers until sometime this summer. As the public beta is scheduled to drop in July, it seems like a safe assumption that Apple plans to add Apple Intelligence to the beta next month. Again, we don't know for sure.

There are some AI features currently in this first beta, even if they aren't strictly "Apple Intelligence" features: iOS 18 supports transcriptions for voice memos as well as enhanced voicemail transcriptions, and supports automatically calculating equations you type out. It's a limited experience, but seeing as it's only the first beta, we'll see more features soon.

In fact, Apple currently plans to roll out some flagship features with the first release of Apple Intelligence. That includes summaries for webpages, voice memos, notes, and emails; AI writing tools (such as rewriting and proofreading); and image generation, including the AI-generated emojis Apple is branding "Genmoji." You'll also receive AI summaries of notifications and see certain alerts first based on what the AI thinks is most important.

In addition, some of Siri's new updates will be out with iOS 18's initial release. This fall, you should notice the assistant's new UI, as well as the convenient new option for typing to Siri. But most of Siri's advertised features won't be ready for a while. (More on that below.)

The timeline for ChatGPT integration is also a bit up in the air: It may not arrive with the first release of iOS 18 in the fall, but Gurman believes it'll be here before the end of the year. For developers, Xcode's AI assistant, Swift Assist, likely won't be out until later this year.

Apple Intelligence's new Siri won't be here until 2025

The largest delay appears to be to Siri's standout upgrades, many of which won't hit iOS and macOS until 2025. That includes contextual understanding and actions: The big example from the keynote was a demonstrator asking Siri when her mom's flight was getting in, with the digital assistant answering the question by pulling data from multiple apps. This "understanding," which would power many convenient actions without you needing to explicitly tell Siri what you want it to do, needs more time to bake.

In addition, Apple is taking until next year to deliver Siri's ability to act within apps on your command. When available, you'll be able to ask Siri to edit a photo, then add it to a message before sending it off. Siri will actually feel like a smart assistant that can do things on your iPhone, iPad, and Mac for you, but that takes time.

Siri also won't be able to analyze and understand what's happening on your screen until 2025. Next year, you should be able to ask Siri a simple question based on what you're doing on your device, and the assistant should understand. If you're trying to make movie plans with someone to see Inside Out 2, you could ask Siri "when is it playing?" and Siri should analyze the conversation and return results for movie times in your area.

Finally, Apple Intelligence remains English-only until at least next year. Apple needs more time to train the AI on other languages. As with other AI features, however, this is one that makes a lot of sense to delay until it's 100% ready.

AI might be the focus of the tech industry, but big AI features often roll out to disastrous ends. (Just look at Google's AI Overviews or Microsoft's Recall feature.) The more time Apple gives itself to get the tech right, the better. In the meantime, we can use the new features that are already available.

How to Watch the Latest Nintendo Direct

17 June 2024 at 12:00

Nintendo is back with some news: The company just announced a new Nintendo Direct in a post on X (formerly Twitter). According to the post, this event will focus on Nintendo Switch games slated for release in the second half of 2024, but beyond that, we don't know much else.

Before you get your hopes up, no, this event will not reveal any information about the Nintendo Switch 2. That's not speculation, either: Nintendo said as much in its announcement post, directly stating, "There will be no mention of the Nintendo Switch successor during this presentation."

It's a smart move on the company's part: Nintendo undoubtedly knows the gaming community's collective focus is on the Nintendo Switch 2, and following Nintendo's president's confirmation of the console's existence last month, it would make some sense for Nintendo to acknowledge it in a new Direct. Squashing those expectations early means fans can go into this event without being disappointed by the lack of Switch 2 updates.

But what is Nintendo actually going to announce, here? The Switch subreddit is full of guesses: Some hope Nintendo will finally announce Switch ports for Wind Waker HD and Twilight Princess HD, the two remastered Zelda games from the Wii U still not on the company's latest console. Others hope for Metroid Prime news, whether that's remastered versions of the second and third Prime games, or the long-awaited fourth game in the series. Maybe there will be more retro games added to Nintendo Switch Online, or a brand-new top-down Zelda game, which would be the first in the series since 2013's A Link Between Worlds on 3DS.

Of course, this is all purely speculation: Now that we're heading into the last year of the OG Switch, there's really no telling what Nintendo will do here. We'll just have to wait and see.

How to watch the latest Nintendo Direct

Nintendo is holding its latest Direct event on Tuesday, June 18 at 7 a.m. PT (10 a.m. ET). The event will last for about 40 minutes, so block off your schedule until 7:40/10:40.

You can tune in from Nintendo's official YouTube page, or click the video below to stream from this article.


iOS 18's Satellite Messaging Is a Game Changer

14 June 2024 at 12:30

With the iPhone 14, Apple introduced a new way to communicate: Emergency SOS via Satellite. With it, you can reach out to emergency services even when you have no signal. The feature guides you on how to connect your iPhone to the nearest satellite overhead, and once connected, allows you to contact help (albeit, much more limited and slowly than usual).

It's a fantastic safety feature, both for those who frequent areas with little cellular coverage and for emergencies when cell service is unavailable. But that latter point is really the main downside of the feature: It's only available for emergencies. If you don't have any service and you're perfectly safe, you can't use the feature to simply send a message to a friend or family member to check in. Unless you want to get the police involved in your update, you'll just have to wait until you're back within range of a cell signal or wifi.

Messages via satellite

That changes with iOS 18: Apple's upcoming OS (currently in beta testing) includes an update to its satellite communications feature. When it drops, you'll be able to send any message via satellite, not just emergency ones. So, when you happen to be totally without service, not only can you send an update letting people know you're okay, you can also keep up with your chats as you normally would.

When it comes to iMessage, almost nothing about the experience is compromised. You'll be able to send and receive messages, emojis, and Tapbacks (the reactions such as "thumbs up" or "Ha Ha"). Plus, all of your messages are still end-to-end encrypted, so relaying them via satellite is no less secure than using cell towers or the internet. You don't need to do anything special to trigger the feature, either: Once your iPhone loses a network connection and switches to "SOS only," you'll see a notification on the Lock Screen inviting you to message via satellite. You don't even need to tap this alert, though. Just start typing a message, and if there's no service, your iPhone will send it via satellite automatically.

You'll know this is happening, because there will be a "Satellite" tag next to the "iMessage" tag in the text field in your thread. You might also be clued in because some messages may take quite a while to send and receive, as they're beaming up to a satellite first before being routed to their destination. As with Emergency SOS via Satellite, iOS will guide you on angling your iPhone towards the nearest satellite overhead. You'll need a clear view of the sky, with few (if any) tall obstructions, including trees and buildings. Assuming conditions are correct, however, you'll be able to message away.

iMessages will come in automatically, even over satellite, so while you might not keep up with the messages as quickly as you normally would, they'll all eventually arrive. However, SMS texts will only work if you initiate the conversation: If an Android friend texts you while you're out of service, for example, you won't receive it. But if you send a message, you'll receive their direct response.

Unfortunately, the feature doesn't support RCS, the texting protocol iOS 18 is finally adopting. While mildly disappointing, the feature itself is so cool I can completely overlook RCS' omission. A lack of service no longer means missing out on communications. You won't drive down a remote road and then receive a barrage of missed iMessages once you reconnect to service: Those messages will still appear on your iPhone as they were sent. You can take a trip somewhere without internet and still be able to give updates to people about your experience.

Of course, if you're the kind of person that enjoys these little breaks from society, there's always the foolproof solution: turning off your iPhone altogether.

A Glitch Has Permanently Enabled Motion Smoothing on (Some) Roku TVs

13 June 2024 at 17:30

It would appear that some Roku TVs are now self-sabotaging their owners. And by that, I mean they're enabling motion smoothing without any way to turn it off.

Between this complaint on the Roku subreddit, this thread on Roku's customer forums, and reports from staffers of The Verge, this issue appears to be more than a one-off, even if it isn't necessarily widespread. The problem is occurring on TCL TVs running Roku OS 13: Reportedly, these TVs started using motion smoothing out of nowhere, with no notice and no option to disable the feature. Users looking through both the standard settings and the picture settings cannot find a "motion smoothing" option to turn the feature off.

This bug isn't going unaddressed: In the forums complaint, a Roku community mod confirmed the company is investigating the issue, and included the standard instructions for disabling motion smoothing.

What is motion smoothing, and why is it bad?

Motion smoothing (or "Action Smoothing," as Roku calls it) is a feature on HD and 4K TVs that essentially adds new frames to whatever content you're watching. Video is made up of individual pictures, or frames, and most shows and movies run at 24 or 30 frames per second. With motion smoothing, your TV analyzes the content and creates extra frames on the fly to smooth out the motion of the image. Some content, like live sports and video games, is made better by additional frames, since the extra frames help you keep track of fast-moving action. Almost always, though, artificially adding frames makes the image look worse, not better.
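
For a sense of what "creating extra frames" means, here's a deliberately crude sketch. Real TVs estimate motion between frames to synthesize the in-between images; even naive blending of neighboring frames, though, shows how 24 frames in becomes roughly twice as many frames out.

```python
import numpy as np

# Crude frame interpolation: insert the average of each pair of neighboring frames.
# Real motion smoothing estimates movement instead of blending, but the idea is the same.
def interpolate(frames):
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        mid = ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(a.dtype)
        out.append(mid)
    out.append(frames[-1])
    return out

# One second of fake 24 fps video (tiny frames to keep the example fast).
clip = [np.random.randint(0, 256, (90, 160, 3), dtype=np.uint8) for _ in range(24)]
smoothed = interpolate(clip)
print(len(clip), "->", len(smoothed), "frames")  # 24 -> 47
```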

It's particularly bad when watching shows and movies: Doubling the frame rate and making the motion smoother is what gives this content the "soap opera" effect. Soap operas are filmed at a higher frame rate than most other shows and movies, so when you double the frame rate of 24 or 30 fps content, it looks like daytime television. That isn't a compliment.

How to disable Action Smoothing on Roku TVs

While some users experiencing this motion smoothing bug won't see the option to disable Action Smoothing at this time, other Roku TV users can control the setting.

First, press the Star button on your Roku TV remote while watching something, then scroll down and choose Advanced Picture Settings. Here, you should be able to control the Action Smoothing settings. Roku says that if you can't see this setting, your TV doesn't support the feature. That's likely not comforting to users currently watching the new season of House of the Dragon as if it were Game of Thrones' first soap opera.

YouTube Is Experimenting With a Way to Kill Ad Blockers for Good

13 June 2024 at 15:00

YouTube is getting aggressive in its fight against ad blockers. According to the developer of SponsorBlock, an extension that automatically skips ahead of sponsored content in videos, YouTube is now experimenting with "server-side ad injection."

This is quite the escalation. In short, server-side ad injection means YouTube is adding advertisements to the video stream itself. Currently, the company delivers its ads to users as a separate video before the video you chose to watch. That allows ad blockers to identify the ad, stop it from playing, and load your video directly. If the ad is part of the video, however, the traditional ad blocker strategy breaks.

Even though SponsorBlock isn't an ad blocker, this change would break its services, too, as adding ads to the video itself throws off the timestamps of the video. SponsorBlock relies on these timestamps to skip ahead of sponsored segments: As ads vary in length and number, timestamp changes will be unpredictable, and tools like SponsorBlock won't work as they're currently designed.
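
A toy example makes the problem concrete. Say SponsorBlock has crowdsourced two sponsor segments for a video; once ads of varying lengths are spliced into a viewer's stream, everything after each ad shifts by that ad's length, and the stored timestamps no longer line up. (The numbers below are made up for illustration.)

```python
# Hypothetical numbers showing why baked-in ads break timestamp-based skipping.
sponsor_segments = [(120.0, 165.0), (600.0, 640.0)]  # crowdsourced (start, end) times in seconds
ad_breaks = [(0.0, 15.0), (300.0, 30.0)]              # (insertion point, ad length) in one viewer's stream

def shift(t, ads):
    # Every ad inserted at or before time t pushes t later by that ad's length.
    return t + sum(length for point, length in ads if point <= t)

adjusted = [(shift(start, ad_breaks), shift(end, ad_breaks)) for start, end in sponsor_segments]
print(adjusted)  # [(135.0, 180.0), (645.0, 685.0)] -- the original timestamps no longer match
```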

It's the latest development in the running battle between YouTube and third-party ad blockers. While YouTube has always dissuaded viewers from using ad blockers, the company started cracking down on the tools last year: When using certain ad blockers in some browsers, users saw a pop-up warning them to disable their ad blocker. If they continued to use their ad blocker, they might find that YouTube wouldn't load for them at all. Even if YouTube didn't block videos entirely, the site might artificially slow down load times, or skip to the end of the video. YouTube has also gone after third-party clients with ad blockers built in, so those are no longer a reliable alternative.

This new server-side injection strategy is not official policy yet, and is only reportedly in testing, but it's clear YouTube isn't backing down here—and it's not difficult to understand why. YouTube's main source of revenue, as with many corners of the internet, is advertising. By using an ad blocker, users block both YouTube and its creators from generating money from views.

Of course, using the internet without an ad blocker is a bit miserable, and has been for years. With the concerning rise of malicious advertising, too, using an ad blocker is actually good cybersecurity practice. Hell, even the FBI recommends you use one.

For YouTube, there's a clear solution: YouTube Premium. If you subscribe, you can watch YouTube mostly ad-free, without worrying about using an ad blocker that will break your experience with the site. While avid YouTube fans might find value in the service, as it also comes with YouTube Music, casual YouTube users might balk at adding another subscription to their ever-growing list of streaming services. There is a one-month free trial, so you can try it out without financial commitment. And if you are interested, YouTube now offers the following plans:

  • Individual: $13.99 per month, or $139.99 per year (saves $27.89)

  • Family: $22.99 per month, for you plus five others in your household

  • Student: $7.99 per month

How to Tell If a Prime Day Promotion Is Just Hype

12 June 2024 at 18:00

Prime Day is just around the corner. For two days in July, you’ll find promotions on products from companies both big and small, all vying for your clicks and your wallet. Many of these will claim to be great deals, and that not buying the item during Prime Day will mean you miss out on some big savings. But there are a few strategies you can use to quickly figure out whether that “amazing deal” really is all that.

How to tell a good Prime Day price from a bad one

One of the best things you can do to tell whether a Prime Day deal is legit is to use a price tracker. These sites and tools keep tabs on the prices for any given product across the many stores and vendors where it’s sold, both to point you to the best possible price and to show you whether that current “deal” really is much lower than the original price or other deals out there.

A common technique to make deals look good is to pump up the price of the product: That way, when the company slashes the price for something like Prime Day, it can claim a large discount, even if the overall price tag isn’t much lower than the original price (if it's lower at all). If something originally costs $60, a company can raise the price to $75, then cut it back down to $60, claiming it took 20% off. It’s accurate, but scummy, so watch out for it.
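
If you want to see through the trick quickly, the math is simple; the numbers here are the hypothetical ones from the example above.

```python
# The hypothetical prices from the example above: a $60 item "raised" to $75, then "discounted."
original, inflated, sale = 60.00, 75.00, 60.00
print(f"Advertised discount: {100 * (1 - sale / inflated):.0f}% off")    # 20% off the inflated price
print(f"Actual savings vs. the original price: ${original - sale:.2f}")  # $0.00
```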

You can use a browser extension like Keepa to watch a product's price history. But other trackers, like Honey or Capital One Shopping, can help you find prices and price histories for items across multiple stores. Their browser extensions are especially useful: If there’s another store selling the same product you’re looking at on Amazon for less, you’ll get a pop-up letting you know, with a direct link to that store’s product page.

Knowing whether something is a good deal isn’t all about getting the best price, though. Sure, Honey might have confirmed this item isn’t any cheaper elsewhere on the web, but there’s more than just the general price tag to consider.

Amazon’s own products will have the best deals

It’s Amazon Prime Day, after all. The company is here to sell as much inventory as it can, but it’s happiest if you’re buying Amazon products from Amazon. As such, the best tech deals are likely going to be with Amazon’s own line of gadgets. Of course, just because an Amazon product is massively on sale doesn't make it a "good deal." If you wanted a different brand over Amazon's, or if you just want to make sure you're getting the best version of a product, make sure to compare offerings from different companies, too.

Make sure you’re not buying an old piece of tech

I’m a big believer in old tech: I think we should be holding onto our devices for longer than many of us do. However, I don’t think companies should sell you old tech as if it were new, especially when new tech is right around the corner.

Amazon is actually sometimes helpful here: If you’re looking at an outdated version of a product, Amazon lets you know, and gives you a link to the current version of that device. However, that’s only true if Amazon carries that new version of the device or if there’s a direct successor to that product. Lines are blurred these days: Last year’s device isn’t necessarily obsolete just because there’s a new version out, so Amazon doesn’t always try to sell you on the newer product.

And that can be fine! Last-generation laptops, tablets, smartwatches, and phones have their place: Tech is advancing so rapidly that it can be frugal and practical to buy older tech that still works well. But Amazon telling you to buy something that won’t be able to update to the latest software later this year isn’t right. If you’re looking to buy a piece of tech on Prime Day, research is your friend. It’s more than OK to buy something that came out last year or the year before; what matters more is making sure the product still works as it should in 2024, and whether it’ll last as long as you’d reasonably expect it to.

If the reason a device is such a good price is because it’s obsolete, that’s not a good deal.

Not everything that is “cheap” is good

On a similar note, be wary of cheap tech that simply isn’t very good. It might be affordable, but if it doesn’t work well, it’s not worth the cost.

Often, this issue arises with the many brands you’ve never heard of selling items for pennies compared to other companies. Sure, you could save some money and go with these brands, but what about the long-term investment? After Amazon’s 30-day return policy is up, you’re sunk without a customer support channel, which many of these tiny companies lack.

On the other hand, you might have heard of the brand, but the product itself just isn’t very good. It might seem like a steal to get a giant 65-inch 4K TV for $500, but if the picture quality is poor, was that really worth it? (No.)

Read the reviews (not on Amazon, if you can help it)

One way to make sure that TV is worth its steep price cut, or whether those cheap headphones are going to pass the listen test, is to read reviews for the products you’re considering buying. I’m not talking about Amazon reviews, either: Amazon’s ratings can be helpful, but they can also be compromised. Sometimes the reviews don’t even match the product they’re supposed to be talking about, which doesn’t bode well for the integrity of the review. And in the age of AI, you can never be too sure who's writing that customer review in the first place.

When it comes to tech, the best approach is to listen to the reviewers with technical experience, who put these products through their paces before issuing an opinion. An outlet like our sister site PCMag will help you figure out pretty quickly whether that TV is really worth the hype, and they show their work so you can understand how they came to their conclusions.

At the end of the day, it’s all about taking your time and doing your research—the opposite of Amazon’s “BUY IT NOW” strategy. Fight the urge to buy something on impulse, and make sure your money is going toward the best possible product for your needs.

This Tech Brand Will Get the Biggest Discounts During Prime Day

12 June 2024 at 13:00

Prime Day is coming up, and with it comes a surge of discounts and savings. While Amazon wants you to think each and every deal you come across is worth your time and attention, the truth is few of these deals are actually that great.

All the chaos overwhelms even the most seasoned online shopper, making it hard to know if you’re really saving money at all. But there’s one category that will undoubtedly stand out on Prime Day by design: tech made by Amazon itself. That means things like Fire tablets, Fire TV Cubes, and anything else specifically made by Amazon.

For typical tech sales, most retailers want you buying whatever they can convince you to plunk down money for, be it an iPhone or an Android, a Mac or PC, Xbox or PlayStation. They just want as much of your money as possible, and so traditional shopping events like Black Friday see deals across the spectrum of brands and manufacturers. But Amazon is different.

Amazon is both a store and a manufacturer

While the company sells tons of products from a wide variety of manufacturers, it also makes its own tech. If you’re looking for a new smart TV, Amazon makes one; if you just want a smart TV streaming device, Amazon makes that as well. For home security, you’re covered with Blink products. Alexa powers so many of these devices, so why not get one of Amazon’s smart speakers or smart displays run by the assistant? Amazon Basics even makes plenty of accessories, from USB cables to batteries.

Amazon now makes so much tech, it only makes sense that the company would prefer to sell you its version of a product over that of the competition. Instead of a Roku, buy a Fire TV Stick. Instead of a Nest Mini, buy an Echo Pop. It seems Amazon has comparable products in almost every category—it even makes its own earbuds (Echo Buds).

The quality of these products compared to the competition is certainly up for debate, and I encourage you to do your own research before buying any tech product—made by Amazon or otherwise. But whenever you do search for a tech product on Amazon, and especially during Prime Day, you’re going to be served up an Amazon alternative. It’s only in the company’s best interest to use the event to run big deals on its own products.

Amazon is still a marketplace, and it’s good for the company when you buy anything, so if you end up going with a Samsung TV over an Amazon TV, that’s still a sale. That's why you’ll still find plenty of deals throughout Prime Day for non-Amazon devices. But Amazon products will be pushed the hardest and will likely see the biggest discounts and promotions.

Still, don’t impulse buy

Eye-grabbing discounts and tempting product bundles aside, don’t buy an Amazon device just because it looks like a killer deal. If you are in the market for a specific Amazon-made product and it goes on sale, great. That’s a smart purchase. If you were more interested in a non-Amazon device but the Amazon version is now significantly cheaper, though, it’s not necessarily worth chasing value over getting the product you actually want to buy.

Take the time to research the difference between Amazon’s version and the competition. If you do find an Amazon device you want to buy, you can set alerts to be notified when the product hits the price you're looking for. That’s good general advice for any Prime Day deal (or any big ticket purchase), but considering how hard Amazon will be pushing its own products in July, it’s especially important to keep in mind. If you buy anything on Prime Day, I hope you get deals on the best devices for you, whether Amazon made them or not.

Everything Apple Announced at WWDC 2024

11 June 2024 at 19:00

Follow Lifehacker's ongoing coverage of WWDC 2024.

Apple's WWDC 2024 keynote spanned nearly two hours, covering everything from new iOS updates to a deep dive into Apple's grand plans for AI. If you missed the event, you don't have to sit through the livestream on YouTube: Here's everything Apple announced at WWDC this year.

Apple TV+

[Image: Severance season 2. Credit: Apple/YouTube]

Apple kicked off its presentation with a preview of upcoming Apple TV+ shows and movies, and the service has quite the lineup on the horizon. The company announced Severance's second season in addition to fresh content like Dark Matter, Presumed Innocent, Fly Me to the Moon, Pachinko, Silo, Slow Horses, Lady in the Lake, The Instigators, Bad Monkey, Shrinking, and Wolfs.

VisionOS 2

[Image: visionOS 2. Credit: Apple]

Apple next took the time to show off some new features for visionOS 2. While it's a modest update, Vision Pro users can look forward to a larger and higher-res Mac Virtual Display, mouse support, and new gestures for Control Center, as well as quick views of the current time, battery level, and volume controls.

visionOS 2 also lets you create spatial 3D images from 2D images in Photos, and it will support using the headset on trains. Right now, Vision Pro has a plane mode, but in other moving vehicles, the device struggles to orient itself correctly.

iOS 18

[Image: iOS 18 app icons. Credit: Apple]

We all thought this would be the moment of the Apple event, when the company would roll out its set of brand-new AI features. That... did not happen. Instead, the company highlighted some small but interesting new changes to iOS, and saved the big AI announcements for later.

First up, Apple is opening up customization on iOS. The company now lets you put your app icons wherever you want, change app icon colors to your preference, and adjust the tint of the icons to match your iPhone's dark theme. You can also completely customize Control Center: Apple lets you choose what functions you want where. It honestly doesn't feel 100% Apple. iOS 18 will also allow you to lock your apps behind Face ID, Touch ID, or your PIN, as well as bury any apps in a new Hidden folder.

Messages gets a big upgrade: Tapback icons (thumbs up, heart, "Ha Ha," etc.) now have a more colorful redesign, but you can also choose any emoji from your phone to react with instead. You can now schedule messages as well, so you don't need to set yourself a reminder to send an important text down the line. We're also getting text effects and formatting: Now, you can add effects to individual words, and choose from rich font features like bold, italic, underline, and strikethrough. In addition, you'll be able to message other phones via satellite when you don't have a cellular connection, and RCS is on its way.

Mail has a new look, too: You now have quick tabs at the top of the screen (Primary, Transactions, Updates, and Promotions) to sort through your emails. Mail also intelligently sorts through similar emails to deliver relevant information in one view, so you shouldn't have to scroll through multiple emails from your airline to find different flight data. Maps now has a ton of new hiking data, including new topographical maps and the ability to create hikes of your own. Tap to Cash lets you pay people by tapping phones together, while Game Mode reduces background activity to boost game performance and response time.

Photos has a big redesign: Rather than using multiple sections as the Photos app currently operates, the new app is all one view: Your photos grid lives at the top, while the albums, memories, and other sorted material live below. It's a different approach to Photos than Apple has tried before, and it remains to be seen how legacy iOS users will fare with the new UI.

AirPods and TV

[Image: A man wearing AirPods in a crowded elevator. Credit: Apple/YouTube]

AirPods are getting some new features with iOS 18, as well: You can now interact with Siri by nodding your head for "Yes," or shaking your head for "No." Voice Isolation is also coming to AirPods Pro. It's not clear how this is different from the Voice Isolation already available on iOS and macOS, but Apple is claiming the feature can block out city noises entirely while on a call—at least if Apple's demo is to be believed. Personalized Spatial Audio is also coming to gaming, so developers can integrate the feature into their games.

Apple is also rolling out "InSight," which essentially copies Prime Video's X-Ray: When you pause a show or movie, you'll see, well, insights into the actors as well as what music might be playing during a scene. Enhanced Dialogue is rolling out to more devices, and will use machine learning (aka AI) for an improved experience. 21:9 projectors will also be supported in the latest version of tvOS, and Apple is rolling out new screensavers, like a new Snoopy animation.

watchOS 11

[Image: watchOS 11 on an Apple Watch. Credit: Apple]

Like iOS 18, watchOS 11 is more of a modest update rather than an overhaul. But there are some interesting features to note: The update introduces a new app called Vitals. Here, you can review metrics like heart rate and sleep, and track how they've changed over time. There's also a new feature called Training Load, which takes all your metrics and body measurements to estimate a proper training effort (with a score of 1 through 10) you should be following in your exercises. The feature takes your own experiences into account, too, if you feel you're working too hard or too little.

Speaking of working too hard, you can now pause your Activity Rings (thank god), so your watch doesn't need to shame you for missing one day of goals. You can even set goals based on the day, so Monday can have a lower exercise goal than Tuesday. You can also customize the Fitness app with tiles, so you can quickly check the datasets and features that matter most to you.

Custom workouts support pool swims, and pregnant users can keep up with additional health-related information more easily. Your Apple Watch will use AI to power a new live translation feature, but we don't know that much about it at this time. Smart Stack gets some upgrades as well (including new Translate and Precipitation widgets), and watchOS will get iOS' "Check In" feature. That way, you can automatically notify a friend when you return home from an outdoor run. More apps will be able to take advantage of newer watches' Double Tap feature, Tap to Cash will work with Apple Watches, and there are new versions of the Photos watch face.

For more information about Apple's latest update for Apple Watch, check out our article here.

iPadOS 18

[Image: iPads running iPadOS 18. Credit: Apple]

iPadOS 18 is getting a lot of the same features as iOS 18, including message scheduling and Game Mode, but by far the biggest update is the new Calculator app.

No, really. After over a decade of refusing to put a calculator app on the iPad, the mad geniuses at Apple have finally done it. iPadOS 18 can do math. And it can do it really well. Specifically, the new Calculator app for iPad will be able to take notes, using machine learning to spice up the calculator experience on tablets.

Upon loading up the Calculator app, you’ll be able to use buttons for basic calculations as usual, but a quick tap will now take you to the app’s new Math Notes section. Here, you can write out formulas and other equations, and the page will automatically update with answers in your handwriting as soon as you draw an equals sign or other equivalent symbol. You’ll even be able to change already solved equations in real-time, and Math Notes will make revisions as appropriate.

There’s also support for graphs, which you can automatically create and adjust based on your already written equations. Your notes will appear in a history, like written notes do in the Notes app, so you’ll also be able to reference old problems days or weeks later. Teachers are certain to pull their hair out over this new “show your work” cheat tool, but Apple is positioning Math Notes as great for budgeting or even scientific prototyping.

You’ll also be able to access Math Notes capabilities in the standard Notes app, which is getting its own update with a new Smart Script feature.

Smart Script is a bit of an odd pitch, as the idea here is to correct your writing to look more like… your writing. Essentially, your iPad will now use machine learning to figure out what your handwriting generally looks like, then make small tweaks to any new writing to clean it up so it looks like its model of your handwriting.

If that sounds confusing, imagine the following: you’re in a lecture and you can’t afford to make perfectly pretty text with your notes, so you just scribble down what you hear and hope you can make sense of it later. Smart Script will try to take those scribbles and make them look like what your handwriting would resemble in a more ideal environment.

“It’s still your own writing, but it looks smoother, straighter, and more legible,” said Apple senior vice president of Software Engineering Craig Federighi.

Notes is also getting some new formatting options, including five new highlighter colors, the ability to collapse sections under headings and subheadings, and automatic content resizing. For instance, if you delete a paragraph, Notes will adjust your remaining content to fill the empty space.

For general navigation, a new floating tab bar will now appear at the top of several apps across the system and give easy access to different parts of that app, sort of like the menu bar in macOS. It’ll call out a few key sections by default (the Apple TV version of the tab bar has shortcuts to Apple TV+ and MLS content), but you can also expand it into a sidebar for more detail or add new shortcuts of your own.

Finally, there’s accessibility and SharePlay, which should both help with using the iPad. iPadOS 18 will add eye-tracking for navigation, plus vocal shortcuts that will allow users to map custom sounds to specific tasks and actions. Meanwhile, SharePlay will now be able to show when a user taps or draws on their screen, which should help when walking someone through a task directly. In more extreme cases, helpers could (with permission) take direct control of a device.

M-series powered iPads will also get access to Apple Intelligence features like Image Playground, which we’ll touch on later.

macOS 15 (Sequoia)

[Image: A Mac running macOS Sequoia. Credit: Apple/YouTube]

macOS 15 is here, and it’s got a new, California-based name: macOS Sequoia. Like iPadOS, it’ll get iOS 18 features where appropriate, like Tapbacks and the Passwords app, and it’ll even have a version of Math Notes specific to typed text. But for exclusives, the focus this year is on window layout, device syncing, and gaming.

Perhaps the biggest of these is a new continuity feature: iPhone mirroring. Now, so long as your iPhone is nearby, you can pull up its screen on a window in your Mac, where you’ll be able to interact with apps and notifications. Your iPhone will remain locked while you do this, and you’ll even be able to share files directly from your Mac to your mirrored screen in supported apps like Unfold.

For when you’re just working on your Mac, you’ll also finally be able to pin windows. Third-party apps such as Magnet already offer this, but this is the first time it will be a native feature. Now, when you drag a window (or tile, as Apple calls them) to the edge of your screen, you’ll be able to slot it into a suggested position on your desktop. There are also keyboard and menu shortcuts for more advanced customization.

For video calls, a new presenter preview will also allow you to double check which of your windows you’re about to share before going live. There are also custom backgrounds to choose from, or you can use a photo.

Finally, there’s gaming. The updates here are more on the developer side, but they show a promising future for gaming on Mac. Basically, Apple is releasing a new game porting toolkit to make games originally built for Windows run on Mac. This should mean a wider library of titles available on Apple’s computers, with confirmed ports already on the way for blockbuster titles like Control and Assassin’s Creed Shadows.

Like on iPad, any Mac running an M-series chip will also get access to Apple Intelligence.

Safari

safari running on macos15
Credit: Apple/YouTube

Safari is getting a few updates specific to macOS Sequoia. These include a new video mode, article summaries, and extra bonuses powered by machine learning.

Now, when Safari detects a video on the page, a new Viewer mode will kick in. You’ll be able to click a button next to your address bar to hide everything on the page but the video, and when you click away from your browser, Safari will go into picture-in-picture mode to play your video in a corner of your screen.

For text-based content, the new Reader mode “instantly removes distractions from articles,” plus gives you a sidebar with what looks to be an AI-generated table of contents as well as a summary (Apple hasn’t been fully clear on how these work yet).

Finally, Highlights uses machine learning to add all sorts of context to your browsing. For instance, if you’re reading an article about a singer, Highlights will pull up a link to one of their songs in Apple Music. Similarly, if you’re looking at a hotel, Highlights will show you its location.

Apple Intelligence

iphone using apple intelligence to check flight info
Credit: Apple/YouTube

Apple’s big showstopper for WWDC this year was its entry into AI. Titled Apple Intelligence (pun intended), the company’s AI mixes a little old with a little new.

First up, what we don’t know. We don’t know what training data Apple is using for its AI, we don’t know how detailed its image generation can get, and we don’t know specific release dates (just release windows). But as for everything else, Apple was surprisingly forthcoming, possibly even more than Google was about Gemini during Google I/O.

Maybe the most exciting reveal was a revamped Siri. While Apple was the first to market with a digital assistant, Apple Intelligence is finally giving Siri the update it needs to keep up with competitors like Alexa. With Apple’s AI, Siri can now understand context, answering questions based on what it sees on your screen or intuitively understanding which contact you mean when you say “pull up my texts with Mike.”

This opens up the bot to new possibilities when it comes to search and actions, allowing you to search your photos and videos for things like “my daughter wearing a red shirt,” or ask Siri to add an on-screen address to a specific contact card for you.

Siri will also be able to answer questions about Apple products, coming preloaded with tutorials on things like “how to turn on dark mode.” Answers will now display in a box right on your screen, rather than in a linked help page.

You’ll also be able to give Siri prompts to create custom Memory Collages from your photos, and the bot will naturally decipher references to specific contacts, activities, places, or musical styles to stitch them together.

Finally, Siri will be able to take on a more traditional AI chatbot role by leaning on ChatGPT. When you ask Siri a question it thinks ChatGPT could help with, the bot will ask permission to send the question to ChatGPT for a response. Apple’s promised that any requests to ChatGPT will hide your IP address, and that ChatGPT will not log requests made through Siri. You won’t need an account to ask ChatGPT questions, but usage limits will apply, and ChatGPT subscribers can link their accounts to access paid features.

Outside of the realm of Siri, Apple Intelligence is also giving iPhones a Clean Up mode that works a lot like Google’s Magic Eraser. In the Photos app, just tap the Clean Up button, then circle or tap a specific subject to have your phone intelligently cut it out of the photo.

In another Pixel-like feature, Apple Intelligence also allows the Notes app to record and transcribe audio, even from phone calls. Participants on your call will be notified when you turn recording on, so nobody is surprised.

There are a couple of other organization goodies here, too. Both notifications and the Mail app can use AI to prioritize and summarize what you see, and the Mail app will even allow you to use AI to compose smart replies.

Then there are the more traditional features. Apple Intelligence will be able to rewrite or proofread text “nearly everywhere” you write, based either on custom prompts or pre-selected tones.

It’ll also be able to use ChatGPT to generate whole new text, using similar rules as Siri.

Image Playground will be your image generator, and will be able to incorporate AI art into your Notes, Messages, and more. You’ll have access to pre-selected subjects and art styles as well as a prompt box, although it’s not clear how much freedom you’ll have with the tool. Apple’s language emphasizes that users will “choose from a range of concepts” rather than type anything they want. What is clear is that Image Playground will have the same contextual approach as Siri, meaning it’ll be able to generate caricatures of people in your Contacts list with just a name.

Finally, there’s Genmoji, which are similar to Image Playground but are specifically custom, AI-generated emojis. Like regular emojis, they can be added inline to messages and shared as sticker reactions.

That all sounds a little cool and a little scary, which is why Apple is emphasizing privacy with its AI. Apple wants to have most AI processing happen on-device, but for content that needs to touch the cloud, Apple is promising that data will never be stored and will only be used for requests. It’s also making its servers’ code accessible for third-party review.

The catch to all this? It relies on Apple Silicon neural engines. That means it’ll only come to devices with an A17 Pro chip or an M-series chip. This limits which iPads and Macs you can use, plus makes it so the only iPhones with Apple Intelligence (at least at the start) will be the iPhone 15 Pro and Pro Max.

Apple Intelligence will be available to try out in U.S. English in the summer and will come out in a beta form in the fall.

Adobe Still Swears You’re Overreacting to Its New Terms of Service

11 June 2024 at 12:00

Adobe is having a rough week. On Thursday, I reported that Photoshop users received a pop-up requiring them to agree to new terms that appeared to give Adobe access to their work. In response to the resultant outcry from furious creators, Adobe issued a response, explaining that its new terms of service document was largely the same as previous versions, with a few clarifying tweaks added in the update.

This only added fuel to the fire. Adobe wasn't out of the blue demanding access to creators' work; rather, they seemed to be saying, they already had that access. Adobe's press release attempted to assuage concerns, saying that the company would only access cloud-based user data for three specific purposes: Features that required access to content (like generating thumbnails); cloud-based features, like Photoshop Neural Filters; and to look for illegal or otherwise abusive content.

The company claimed it would not access any data stored locally, and would not train any Firefly Gen AI models on user content. However, a deep dive into the terms of service reveals that Adobe takes cloud-based user content, aggregates it with other user content, and uses that to train its "algorithms."

It all turned into a big mess (and a hit to Adobe's stock price), which is likely why the company issued a second statement on Monday, while everyone was distracted by Apple's WWDC announcements. Adobe says it is working on a new terms of service, including clearer language, that it will roll out to users by June 18. Importantly, the statement offers the following clarifications:

  • Adobe does not claim ownership of your content, and does not use your content to train generative AI.

  • You can opt out of the "product improvement program," which scrapes "usage data and content characteristics" for features like masking and background removal.

  • Adobe will explain, in "plain English," the licenses they require you to agree to when using their products.

  • Adobe does not scan content stored specifically on your machine "in any way." However, they scan everything uploaded to their servers to make sure they aren't storing child sexual abuse material (CSAM).

None of this is really news if you've been following along. Adobe really wants you to know it doesn't access the content stored locally on your computer, nor do they train their generative AI models using your content. However, they will train other AI models with your data—just not AI models responsible for creating anything. Great.

It's good you can opt out of that AI training if you wish, but it doesn't change the fact that Adobe has demanded quite a lot of access to your cloud-based content. I'm repeating the same advice I gave in my last piece on this issue: If you need to use Adobe products and you don't want the company accessing your work in any way, keep all your data local. That means storing all your Photoshop data on your computer or an external hard drive, rather than in the cloud. It's less convenient, but much more private.

The Best Ways Everyone Should Customize Windows 11

11 June 2024 at 10:30

So, you installed Windows 11 on your PC—or you bought a machine with the OS pre-installed—and now your computer looks like everyone else's. There's no shame in sticking to the defaults, but if you want to make Windows 11 a bit more personalized, there are plenty of settings and features you can change to make your machine work better for you.

Move the taskbar back

taskbar settings
Credit: Jake Peterson

If you're a long-term Windows user, Windows 11 might have been a bit of a shake-up: When you first boot up your PC following installation, your taskbar and its options, which have always been left-justified, now sit in the center of the display.

While some might like the new placement, if you don't, you don't have to live with it. You can move it back to the left side of the display by heading to Settings > Personalization > Taskbar > Taskbar Behaviors, clicking the drop-down menu next to Taskbar alignment, and choosing Left.
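
If you'd rather script this change (say, you're setting up several machines), the same alignment setting is widely reported to live in the registry as a per-user value called TaskbarAl. Here's a minimal Python sketch using the standard winreg module; treat the key and value names as assumptions based on common documentation rather than an official API, and stick with the Settings app if you're unsure.

```python
import winreg

# Assumed location of the taskbar alignment setting (commonly documented):
# 0 = left-aligned, 1 = centered.
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TaskbarAl", 0, winreg.REG_DWORD, 0)  # 0 = Left

# If the taskbar doesn't update on its own, signing out and back in
# (or restarting Explorer) should pick up the new value.
```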

Clean up your taskbar

taskbar settings
Credit: Jake Peterson

In addition, you may want to clean some utilities from the taskbar you don't use every day. Right-click on the taskbar, click Taskbar settings, then consider disabling Task view (swiping up on the trackpad with three fingers triggers this feature anyway), Widgets, and Chat, if you don't typically use these options. You can also hit the drop-down menu next to Search and click Hide if you don't want to see the search bar in your taskbar, although some may find it useful.

Customize your Start menu

start menu settings
Credit: Jake Peterson

Aside from moving the Start menu to the left side of the screen, along with the rest of the taskbar, there are some other ways to improve this iconic menu in Windows 11. You can right-click on any app in the Start menu to unpin it, if you don't want to see Microsoft's defaults like Instagram, Xbox, or Prime Video for Windows. Better yet, you can right-click to uninstall any apps you know you'll never use. You can add new options to the Start menu, like Settings, File Explorer, Documents, and Downloads, from Settings > Personalization > Start > Folders.

Choose your default apps

Default apps settings
Credit: Jake Peterson

Another way to customize Windows 11 is to choose the apps you want to make defaults, rather than the ones the OS chooses for you. The most common adjustment here is your default browser: Windows wants you to use Microsoft Edge, and makes it the default choice, but you can make any other browser you want the default.

Let's say you're trying to switch browsers. Head to Settings > Apps > Default apps, then click the browser you're trying to make the default. Click "Set default" at the top of the display, and Windows will change common link types like .htm and .html to open in the browser you selected. Going forward, when you click a link in Windows, it should open in your chosen browser, rather than Edge. It won't change every file type here, however: If you want your new browser to be the one that opens .pdf links, for example, you'll need to click .pdf, then choose your browser from the list.

This applies to other app types, too, like email clients, media players, and photos apps.
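
If you want to confirm the change took, the browser currently handling https links is recorded per user in the registry. The sketch below reads that value with Python's built-in winreg module; the exact key path is an assumption based on how recent Windows versions have stored URL associations, so treat it as a quick read-only check rather than an official API.

```python
import winreg

# Assumed per-user location of the https link association (read-only check).
KEY_PATH = (r"Software\Microsoft\Windows\Shell\Associations"
            r"\UrlAssociations\https\UserChoice")

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    prog_id, _ = winreg.QueryValueEx(key, "ProgId")

# Prints an identifier like "MSEdgeHTM", "ChromeHTML", or "FirefoxURL".
print(f"Default handler for https links: {prog_id}")
```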

Adjust your theme

theme settings
Credit: Jake Peterson

Windows 11, by default, uses a light theme, which works well for daytime use. But when you're using your PC at night, or if you just prefer a dark theme in general, switching your PC into Dark mode is the move. You'll find this option in Start > Settings > Personalization > Colors.

This isn't a secret feature by any means. In fact, you may already have Dark mode enabled on your computer. But Windows has more theming options than light or dark: You can control the entire color scheme of your Windows 11 UI, as well. Windows has a small selection of preset color themes you can try, in both Light and Dark modes, but if none of them speak to you, you can craft your own.

You'll mostly find these options in the Colors and Themes sections of the Personalization settings menu. In Colors, you can choose an "Accent color," which will change the color of menu icons throughout the OS. While Windows has a large selection, you can choose View colors under "Custom colors" to pick an exact hue you want here. You can also choose whether or not to have that accent color display on the Start menu and the taskbar, as well as on title bars and the borders of windows.
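
For what it's worth, the light/dark choice is also stored as two per-user registry values, which makes it easy to flip from a script. Here's a hedged Python sketch using winreg; the value names below (AppsUseLightTheme and SystemUsesLightTheme) are the commonly documented ones, but they're an assumption on my part, and the Settings toggle remains the supported route.

```python
import winreg

# Assumed per-user theme values: 0 = dark, 1 = light.
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Themes\Personalize"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AppsUseLightTheme", 0, winreg.REG_DWORD, 0)     # app windows
    winreg.SetValueEx(key, "SystemUsesLightTheme", 0, winreg.REG_DWORD, 0)  # taskbar/Start

# Most apps switch immediately; a few may need to be reopened.
```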

Configure your notifications

notification settings
Credit: Jake Peterson

If there's one default you should not live with, it's notifications. Constant alerts from all of the apps on your PC will make you crazy: Disabling most (if not all) will make Windows a much more pleasant experience.

To start, open Settings > System > Notifications. If you never want to receive another alert again, hit the slider next to Notifications to disable them all. However, you can also go through app-by-app to choose which can send you alerts. You can also configure exactly how each app notifies you: You can choose to have notifications appear, but silently, so you aren't bombarded by constant "dings." You can also choose to enable or disable notification banners, the alerts that appear in the bottom-right of your screen, as well as whether notifications appear in notification center.

Don't be afraid to dive into Focus assist, as well. This feature lets you set up times of day or situations on your PC where only some apps can notify you: If you want to only allow work apps like Teams and Slack to notify you during your 9-5 (and make sure they never notify you from 5 to 9) you can set that up here.

Customize your lock screen

lock screen settings
Credit: Jake Peterson

It's easy to keep your PC's lock screen to the default settings, since you don't look at it very long each time you sign into your computer. But it's nice to have a custom look to greet you when you boot up your machine.

You'll find these options in Settings > Personalization > Lock screen. If you've never messed with these settings, you probably have the default "Windows Spotlight" enabled. This feature rotates the background image each day, in addition to offering tips on using Windows. But you can choose any image you want for the lock screen from the dropdown next to Personalize your lock screen. Choose Picture to select a single image you want to appear on the lock screen, while Slideshow lets you choose an album to cycle photos from.

Disable lock screen widgets

lock screen widgets
Credit: Jake Peterson

Speaking of the lock screen, newer versions of Windows 11 sign you up for widgets, which display additional data on the lock screen. By default, this includes a widget for the weather as well as others for news updates, but you can adjust the widgets that appear here or disable them altogether.

Go back to Personalization > Lock screen, then look for Lock screen status. Here, you can change the default Weather and more option to a different app, like Calendar or Mail, or choose None to disable these widgets entirely.

Control which apps start when you log in

startup apps settings
Credit: Jake Peterson

Startup apps are the programs that boot up with your computer. It's why apps like Edge open up as soon as you log into your PC. While some startup apps are convenient, others can be both annoying and unnecessarily taxing on your PC.

Head to Settings > Apps > Startup, and scroll through the list here. Disable any apps and services you don't think you need each and every time your PC reboots. If you know you want your browser open right away, go for it. But you probably don't need iTunes Mobile Device Helper to kick in all the time.
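
If you want a fuller picture of what launches at login than the Settings page shows, much of it is registered in the classic Run keys. This Python sketch only lists those entries (it doesn't change anything); apps can also start from the Startup folder or scheduled tasks, so treat this as a partial inventory.

```python
import winreg

RUN_PATH = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries(hive, label):
    """Print the name and command line of each startup entry under a Run key."""
    try:
        with winreg.OpenKey(hive, RUN_PATH) as key:
            _, value_count, _ = winreg.QueryInfoKey(key)
            for i in range(value_count):
                name, command, _ = winreg.EnumValue(key, i)
                print(f"{label}  {name}: {command}")
    except FileNotFoundError:
        pass  # The key may not exist on some systems

list_run_entries(winreg.HKEY_CURRENT_USER, "HKCU")   # per-user startup apps
list_run_entries(winreg.HKEY_LOCAL_MACHINE, "HKLM")  # machine-wide startup apps
```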

Similarly, you may want to disable the option that restarts the apps you were using when you shut down your PC. If you're someone who likes rebooting your computer to find all the apps you were using when you turned it off, keep this setting on. Otherwise, to start with a clean slate every time, go to Settings > Accounts > Sign-in options, then disable Automatically save my restartable apps and restart them when I sign back in.

Choose a new touch keyboard

touch keyboard settings
Credit: Jake Peterson

If you have a PC with a touchscreen, or you just like using an on-screen keyboard, you can customize the look of your touch keyboard from Personalization > Touch keyboard. By default, it'll match your system theme, but there are a series of choices here, from solid colors to gradients, and even some with images in them. If you don't like any of the options, you can customize the colors of the keys, the characters on each key, and the background of the keyboard by hitting Edit.

Check your app permissions

privacy settings
Credit: Jake Peterson

This is far from the flashiest customization you can make in Windows 11, but it's important: On any OS, not just Windows, you should periodically review and customize your app permissions. In Windows 11, you'll find these options in Settings > Privacy & security. Windows has a long list of permissions here, but some important ones to review are Location, Camera, Microphone, and Contacts. If you see an app in any of these lists that has no reason to access the permission at hand (for example, an app with Camera permissions that has no obvious need for your camera), disable it. (And consider deleting any sketchy apps.)
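
If you'd like to audit one of these permissions from a script instead of clicking through Settings, Windows appears to record per-app consent in the registry under a "ConsentStore" key. The Python sketch below lists camera consent entries for the current user; the key layout is an assumption based on how recent builds store it, so use it only as a read-only sanity check and make actual changes in Settings.

```python
import winreg

# Assumed per-user consent store for camera access; other permissions
# (microphone, location, and so on) appear to use sibling keys with the same layout.
KEY_PATH = (r"Software\Microsoft\Windows\CurrentVersion"
            r"\CapabilityAccessManager\ConsentStore\webcam")

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as root:
    subkey_count, _, _ = winreg.QueryInfoKey(root)
    for i in range(subkey_count):
        app = winreg.EnumKey(root, i)
        try:
            with winreg.OpenKey(root, app) as app_key:
                value, _ = winreg.QueryValueEx(app_key, "Value")  # "Allow" or "Deny"
            print(f"{app}: {value}")
        except FileNotFoundError:
            print(f"{app}: (no consent value recorded)")
```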

A Beginner's Guide to Copilot Plugins

11 June 2024 at 10:00

Microsoft bills its AI chatbot, Copilot, as your "everyday AI companion." But while Microsoft has baked in the usual generative AI features you'd expect from such a chatbot, it isn't the only company in the mix. Other companies have created "plugins" for Copilot, adding third-party functionality to your AI conversations. Here's what you can do with them.

What are Copilot plugins?

Copilot's plugins are similar to browser extensions: You can enable a plugin to add extra features to Copilot that Microsoft didn't build itself, or to improve on answers Copilot would otherwise just pull from a web search.

To access Copilot's plugins, you need to be signed into a Microsoft account. It can either be a personal account or one used for work, but without it, you won't see the plugins menu when you load up Copilot's website. If you're on desktop, you'll probably want to use a Chromium-based browser to access plugins, like Chrome or Edge. When I tried clicking the Plugins menu in Safari, I was greeted with a blank space.

Once you log in, you'll see the Plugins menu to the right of Chats. Choose Plugins, and you'll see eight options. (Seven if you're on iOS.) Unlike ChatGPT Plugins (RIP), the selection is rather limited, and doesn't invite other developers to add onto the list. That said, there are some solid options here:

  • Search: This plugin is what enables Copilot to search the web as part of its responses. You can turn this plugin off if you don't want to connect Copilot to the larger internet, but without it, none of the other plugins will work. In that way, it doesn't really count as an optional plugin.

  • Instacart: The Instacart plugin lets you ask Copilot cooking-related questions about things like recipes and meal plans. Of course, since it's Instacart, you can also place your grocery orders if you click through the provided link.

  • Kayak: When Kayak is enabled, you can ask Copilot travel and accommodation questions, with information provided by Kayak, of course. The company says you can tell Copilot what your budget is too, so the plugin can stay within your set limits with its recommendations.

  • Klarna: This plugin lets you ask Copilot to use Klarna to compare prices on items from "thousands" of stores across the web.

  • OpenTable: This plugin lets you use OpenTable through Copilot, so you can both search for nearby restaurants as well as follow a link to make your reservation.

  • Phone: This plugin connects Copilot to your Android phone. (Apologies to those on iOS.) While it's a limited experience, you can read and send text messages using Copilot, as well as ask the assistant about your contacts. Microsoft says you need to use the Link to Windows app for this feature to work.

  • Shop: This isn't a generic shopping plugin: It pulls from Shop, a company that can aggregate results from different stores.

  • Suno: With this plugin enabled, you can prompt Copilot to generate music via Suno. I previously covered using Suno to make your own AI-generated music, and it's a wild experience, to say the least.

These are your plugin options when using Copilot, but they don't come without their limits. With the exception of Search, which needs to stay enabled at all times, you can have up to three plugins enabled at once. So, you can keep, say, Suno, Phone, and Kayak open, but you can't add Instacart to the mix if you already have those three turned on.

If you want to adjust which plugins you're using, you'll need to start a new conversation with Copilot from the "New topic" button. Unfortunately, you can't switch between plugins on the fly, so just remember that once you start chatting with Copilot, that conversation is locked to the plugins you started with.

It's also important to note that whenever you use one of these Copilot plugins, you share that data with the company behind the plugin. Keep that in mind as you use the service, as plugins aren't the most privacy-focused feature.

Using Copilot plugins

When you have your plugins enabled, you can ask Copilot to use them throughout your conversation. In order to ensure Copilot uses the specific plugin you're interested in, call it out in your response. For example, you could say, "Use Suno to generate a rock song about AI," or, "Use Instacart to come up with a chicken marsala recipe." You'll know Copilot is using the plugin when you see its logo above the response.

using copilot to generate a chicken marsala recipe
Credit: Lifehacker

Some plugins will require you to click through to the company's website in order to continue with its results. For example, if you ask Kayak to look for hotels in New York City for a weekend in October, it will give you a list of options, each with a summary of the hotel's reviews, prices, and amenities. But if you want to book one of those hotels, or learn more about it, you need to click through to the accompanying Kayak link to do so.

These plugins are neat, but I wouldn't suggest going out of your way to use them. If you have a specific task in mind, like ordering groceries with Instacart or booking dinner reservations with OpenTable, you're probably better off going to those sites individually. Copilot will send you to their sites anyway if you want to follow through on any of the results. That said, they're useful when you're looking for ideas: Copilot and Instacart may generate dinner inspo that you might otherwise not think of yourself. Plus, if you have an Android phone and use Copilot frequently, it's cool to be able to keep up with messages from the chatbot.

The Most Important Differences Between Microsoft Copilot and Copilot Pro

11 June 2024 at 09:30

Microsoft has largely billed its AI chatbot, Copilot, as a free-to-use AI experience. You can download the app, call up the chatbot in Windows 11, or use Copilot's website to do all the things you expect to with AI. But if you want more features, there is a paid version of Microsoft's AI: Copilot Pro. The Pro version costs $20 per month, which is a lot of money compared to, well, $0. Here's what you get for that subscription.

How Copilot and Copilot Pro are the same

The free version of Copilot is surprisingly feature-filled for a service that offers a $20 per-month optional subscription. Whether you pay or not, Microsoft offers Copilot as a web app, mobile app, and as a service in Windows, Bing, and Microsoft Edge. (This singular browser of choice shouldn't be surprising, as this is Microsoft's product, after all.)

Perhaps Copilot free's most notable feature is GPT-4 Turbo access, OpenAI's second-latest model behind GPT-4o. So long as you hit Use GPT-4 before chatting with the bot, you can use OpenAI's newer AI model, rather than the older GPT-3.5. It's the same model you get with Copilot Pro, but there are some limitations here, which I'll explain later on.

Copilot free and Pro also both support plugins. You can think of Copilot plugins as similar to browser extensions, as they let you add third-party functionality to the chatbot that otherwise wouldn't be there by default. For example, you can use the OpenTable plugin to ask Copilot about restaurants in your area, and receive links to book a reservation. Paying for Copilot won't necessarily enhance this experience, as there are no extra plugins hidden behind the paywall: There are eight options regardless of your plan.

You also get the same AI art generation tools: Both platforms come with DALL-E 3 access, so you can ask Copilot to make you an image whether you pay for it or not. You also can use Microsoft Designer for free, an AI art program that can generate art, social media templates, and messaging stickers, and offers some light AI photo editing as well.

What Copilot Pro gets you

One of Copilot Pro's biggest perks is priority access to GPT-4 Turbo. As I've already covered, all Copilot users have access to GPT-4 Turbo, but during "peak usage times," Microsoft will boot free users to a slower GPT model. Ironically, you may not feel your $20 impacting Copilot's speed, since the free version can use GPT-4 Turbo, too. But when demand is high, and free users are kicked down to GPT-3.5, your $20 keeps you locked into the higher-performing model.

Another roadblock Copilot free puts up is the limited number of credits for AI art generation: Microsoft gives you 15 for free, which effectively means you get 15 requests per day before you're locked out. If you pay for Pro, you get 100 credits per day, so you're able to request 85 more generations than free users. That's a lot of art (I personally can't imagine needing to generate 15 pieces a day, let alone 100), but if AI art generation is that important to you, you might find value here.

The key plus to Copilot Pro, in my opinion, is Microsoft 365 integration: If you pay for Pro, you can use Copilot in Word, PowerPoint, Excel, and OneNote. Copilot is available as a chatbot that lets you ask questions and make requests based on what's going on in the app, but it also has generative features of its own scattered throughout these programs. Here are some examples of ways you can use Copilot in these apps:

  • Word: Generate first drafts, add to or rewrite drafts you've already written, and generate a summary of a document.

  • PowerPoint: Create a presentation from a prompt, and summarize and restructure your slides.

  • Excel: Analyze your spreadsheet's data, filter and sort your data, and generate formulas.

  • OneNote: Summarize notes, create to-do lists, and plan events.

An important note: If you have a Microsoft 365 subscription, you can use Copilot in both the web and desktop versions of these apps. However, if you only pay for Copilot Pro, you'll only have access to the web apps.

Finally, Copilot Pro also opens the door to Copilot GPTs, which are custom-built versions of the chatbot designed to perform whatever task you want it to. You can build a GPT that focuses on teaching users about a specific topic, helps you plan and make meals, or generates logos for companies. If you have Pro, you can start building a GPT from Chats > Copilot GPTs > See all Copilot GPTs > Create a new Copilot GPT. Here, Copilot walks you through the creation process, asking questions about what you want this GPT to do, and advising you on the best way to set up your GPT so it performs as you want it to.

Should you pay for Copilot Pro?

As someone who doesn't use a ton of AI in my day-to-day life, I can't say I'd jump at adding another $20 to my monthly subscription bill. While the extra perks are cool, particularly Copilot in Microsoft 365 apps, the free version of Copilot already gives you most of the features you'd expect from a chatbot. If you're solely interested in casually engaging with AI and don't need a robotic assist in apps like Word, PowerPoint, and Excel, Copilot free is more than enough.

That said, Microsoft is offering a one month free trial for Copilot Pro for new customers. As long as you set a reminder to cancel the trial, you can give Pro a shot and see if its extra features are worth it for your needs.

What It Will Be Like to Text With Android Users on iOS 18

10 June 2024 at 19:30

Follow Lifehacker's ongoing coverage of WWDC 2024.

Apple announced plenty of new iOS features during the company's big WWDC 2024 keynote, from sweeping changes to Messages to a totally customizable Home Screen. But one announcement was simply thrown in without any fanfare on Apple's part: RCS support.

Now, this wasn't some bombshell new feature: Apple had already confirmed it would be bringing RCS support to iOS last year. Although the company never confirmed a specific timeline for the rollout, it seemed like iOS 18 would be the time for the company to officially adopt the messaging protocol. As it happens, that speculation was correct.

RCS coming to iOS means a lot, but the headline change is that texting someone with an Android phone is going to be a lot easier. SMS, the current protocol iPhone-to-Android communications use, is outdated and missing many of the features modern messaging protocols come standard with today. That includes high-resolution image and video sharing, functioning group chats, and typing indicators—not to mention end-to-end encryption. Instead, you get bad image quality, broken group chats, insecure messaging, and, of course, green bubbles.

Apple took a long time to adopt RCS, mainly because it was good business for them to make messaging the competition an objectively worse experience. But for a variety of reasons, not the least of which being a host of world governments cracking down on anti-competitive behaviors, RCS is coming to iOS with iOS 18.

If you think that means the end of green bubbles, however, think again: Apple did show off one screenshot of RCS in action in its iOS 18 features list, and you might be forgiven at first glance for thinking it was a typical iPhone-to-Android chat:

iphone rcs
Credit: Apple

Alas, this is RCS in iOS 18. You can tell because the image of the plants is actually visible, the Android user sent a voice memo, and, of course, the label on the text field reads "Text Message • RCS."

It seems Apple decided to stick with green bubbles after all: It wants you to know you're not texting an iPhone, even if it can no longer make the experience that bad. Only time will tell if the "green bubble stigma" fades into obscurity: Will iPhone people judge Android users less now that texting them doesn't suck? Or will they forever see the green bubble as a sign that texting this person is going to be a drag?

What's more, will RCS support convince some iPhone users to switch to Android, knowing that their iPhone friends won't have to deal with SMS anymore? That's Apple's nightmare, and likely why it's advertising this feature as little as possible.

How to Install the iOS 18 Beta

10 June 2024 at 18:00

(Follow Lifehacker's ongoing coverage of WWDC 2024 here.)

After months of rumors and speculation, WWDC's big keynote has come and gone. With it, Apple unveiled iOS 18, the next big update for all compatible iPhones. While the company won't be rolling out iOS 18 to the general public until this fall, the software is available to test right now—so long as you're willing to accept the risks.

What's new in iOS 18?

This year, Apple presented us with a rather modest iPhone update, at least for most of our devices. The biggest features at the top of the keynote included the ability to choose where you could place your apps on the Home Screen (insert joke about Android having this for years here), an overhaul to Messages (including the ability to schedule messages, thank god), and a new Passwords app that gives iCloud Keychain users a reason not to switch to LastPass or 1Password.

But the biggest new features by far are powered by "Apple Intelligence," Apple's brand name for its AI systems. With Apple Intelligence, you'll be able to ask Siri to perform tasks in various apps, ask the AI to rewrite any text for you, and generate images and "Genmoji." While these features are technically part of iOS 18, they aren't available in the current beta, and will roll out to testers this summer. (They're also only available on iPhone 15 Pro and 15 Pro Max.)

Why you shouldn't install the iOS 18 beta

Look, I'm not the Apple police: I can't stop you from installing iOS 18 as it stands right now on your iPhone. If you want to try all the new features Apple mentioned on its virtual stage today, that's up to you.

However, the boilerplate disclaimer with any beta is this: Installing software that is currently in testing and hasn't been approved for general release is risky. By installing iOS 18's first beta, you are welcoming potential bugs and glitches Apple has not sorted out yet, which could potentially lead to the loss of any data not properly backed up before the install. Unless you know what you're doing, or you're okay losing all your stuff, it's best not to install beta software on your main devices.

This isn't even the public version of the iOS 18 beta: This is the iOS 18 developer beta, the version of the software meant for iOS developers. Apple gives these users an early look at the new software so they can test it with their apps and make sure everything is optimized by the time iOS 18 rolls out in the fall. If you want to wait for the public beta, which will have some of the dev beta's issues ironed out, that will arrive sometime in July.

However, if you just like living life on the edge, you can absolutely install iOS 18 on your compatible iPhone right now.

Installing the iOS 18 dev beta

The first thing you should do before installing beta software on your iPhone is to back it up. Don't rely on iCloud for this: If your iPhone makes a new backup after installing the beta update, it will overwrite the old backup. Instead, connect your iPhone to a Mac or PC and back it up through Finder or iTunes.

In the past, you needed an iOS developer account to install the dev beta, which led users who didn't want to pay Apple's $100 account fee to download the beta from random sources across the web. Apple put a stop to that, and now lets you install the dev beta directly in Settings, even if you aren't an iOS developer.

To start, open Settings > General > Software Update, then hit Beta Updates. Choose iOS 18 Developer Beta, then tap Back. Once you see iOS 18 Developer Beta load on this screen, hit Download and Install.

What to Expect From Apple at WWDC 2024

7 June 2024 at 17:30

Apple's WWDC 2024 is nearly here. In just a matter of days, we'll get our first look at major new software announcements from Cupertino, including iOS 18, macOS 15, and all the AI features in between. This is going to be a big year for Apple, and Apple users alike.

When is WWDC 2024?

WWDC 2024 (or WWDC24, as Apple is stylizing it) is scheduled for June 10 to June 14. It's at this event where Apple will reveal all of its upcoming software updates, including iOS 18, macOS 15, watchOS 11, and new updates for tvOS and visionOS.

"Wait," you may be asking yourself, "WWDC is five days long? Isn't it just a two-hour presentation?" Au contraire! WWDC is actually a week-long conference for Apple developers and students where they can meet up and learn from one another. What most of us think of as WWDC—the presentation of new software—is really just the kickoff; that presentation will take place on Monday, June 10.

What will Apple announce at WWDC 2024?

Without a doubt, WWDC 2024's biggest focus will be on AI. Apple reportedly has a lot in store concerning artificial intelligence, and I've been following the rumors and leaks for months.

AI is coming to many apps and features within the iOS, iPadOS, and macOS ecosystems: Siri is reportedly going to get a huge AI boost, with the ability to interact with individual features within apps. ("Siri, delete this email," or "Siri, summarize this article.") Safari may get an AI summarizer for searches and articles. Multiple apps will use the tech to power reply suggestions, if you want to let AI decide how best to respond to a message. You'll be able to turn what you're typing into an emoji through AI-generated images. The Notes app will have an audio recorder which can use AI to generate transcriptions of your recordings, as will the Voice Memos app. Photos reportedly is getting AI photo editing tools, which may borrow inspiration from the Pixel's Magic Eraser to remove subjects from photos.

Apple may also outsource some of its AI processing to the cloud through OpenAI. Not only does this mean Apple may rely on OpenAI to power a ChatGPT-like chatbot on iOS, but the company will also be processing user data through the cloud, a move that may put the privacy-focused company in a bind. That said, Apple has a plan for running AI features through the cloud more securely, through the Secure Enclaves in Mac server farms.

According to Bloomberg's Mark Gurman, Apple is calling its AI "Apple Intelligence" (A.I., get it?), and will be positioning the above as opt-in features in beta. In addition, you'll probably need an iPhone 15 Pro or newer (following Apple's fall hardware release), or a Mac or iPad with an M1 chip or newer to run these features.

But not all of the announcements will be about AI. Gurman reported on some additional non-AI plans Apple has in the works: On iOS 18, you may be able to place app icons anywhere you want on the display, as well as customize the app icons directly through Settings. Control Center may get some new widgets for music and Home control, as well as Shortcuts for the first time. Messages is getting more than a few new features, including the ability to trigger message effects with single words, new colorful icons for Tapbacks (plus the option to use an emoji instead), and the option to schedule a message. Of course, we're also waiting on Apple to formally announce a timeline for RCS support for iPhone, a change that will make it much easier to chat with Android users on iOS. It will likely come with a version of iOS 18, but we don't know which one yet.

Apple will also be launching a new Passwords app for all its devices: This should improve the functionality of its current iCloud Keychain option, while offering users a first-party alternative to password managers across the Apple ecosystem. The iPad is also rumored to be getting a calculator app for the first time (not to be confused with other built-in calculator options). The Health app will improve its blood pressure data management and offer hearing tests for AirPods, while the entire ecosystem will receive new wallpaper packs. Gurman thinks these wallpapers will play on nostalgia, resembling old icons and slogans on Mac, and old iPhone wallpapers on iOS.

We don't know a ton about watchOS 11, but Gurman says there will be a new Siri interface that changes formats depending on what you're asking. Plus, there will be big changes to various watchOS apps. Apple will also likely put a focus on Vision Pro, its big foray into mixed reality. While the tech world hyped up the Vision Pro upon its launch, news about the headset has died down considerably. Gurman says visionOS 2 features modest changes, like new environments, the Passwords app, and some ported versions of iPad apps—not too much to entice people to buy a $3,500+ headset.

How to watch WWDC 2024

As usual, you'll be able to watch Monday's livestream from Apple's website, as well as the company's YouTube channel. The event starts at 10 a.m. PT (1 p.m. ET).

Adobe Has Responded to Criticism of Its New Terms of Service

7 June 2024 at 11:30

Update: Adobe responded for a second time on Monday, rolling out a blog post clarifying its terms of service (again). While the company wants to make it clear it doesn't claim ownership of your data, doesn't scan content saved onto your computer, and doesn't train its generative AI models on your work, the situation remains the same: Adobe has a lot of access to your data, and does train some AI models with it—unless you opt out.

Yesterday, I wrote about the controversy surrounding Adobe and its updated terms of service. Creators were irate after receiving a pop-up forcing them to agree to the new terms: If not, they could not access Photoshop, nor could they delete the app from their machines.

It wasn't only the fact that the terms were mandatory that alarmed so many users, however. The new language seemed to suggest that Adobe was claiming the right to access creators' work for a myriad of reasons: That rubbed many the wrong way, as many professionals have NDAs in place for their work with Adobe apps. Of course, legal situations notwithstanding, many also rejected the idea that Adobe could access work produced by these creators, simply because Adobe made the apps they were using in the first place.

Adobe remained silent on the issue, until publishing this blog post. In it, the company explains that its changes to its "Terms of Use" were actually small adjustments, and were meant to bring clarity to the company's moderation policies. The company posted a snippet of the terms to the blog post, with new additions highlighted in pink (including any items that were deleted from the previous terms):

adobe terms
Credit: Adobe

According to Adobe, what's new here is the company says it "may" (rather than "will only") access your content through automated and manual methods, and that it reviews content to screen for illegal content, including child sexual abuse material. If an automated system thinks something is illegal, then it flags the item for human review. The rest of the terms are apparently the same as they've always been, and the pop-up that appeared was a routine re-acceptance campaign for users to agree to the small changes.

Since that "access" was at the crux of the controversy, Adobe went into more detail in the blog post about why it needs it. The company says it needs access to user content for three specific reasons: to run standard functions in apps (like opening files or creating thumbnails); for cloud-based features, like Photoshop Neural Filters and Remove Background; and, as mentioned in the terms above, to screen for illegal activity or other abusive content.

Further, the company says it does not train Firefly Gen AI models on your content, nor will Adobe ever "assume ownership" over your work. If you're wondering why the company specifically says Firefly Gen AI models, and not a more general statement on training AI in general, that's because the company does use the content you store in the cloud, including images, audio, video, text, or documents, to train its AI. Any data you upload to Adobe's servers is fair game for this process, and is aggregated with everyone else's data in order to train Adobe AI to "improve [Adobe's] products and services."

This is not explicitly laid out in the blog post, but Adobe's support article says you can opt out of this training by heading to the privacy settings of your account, then deactivating the toggle for Allow my content to be analyzed by Adobe for product improvement and development purposes under Content analysis.

What's the bottom line?

Adobe is likely not constantly scraping your work looking for insider secrets on your projects, and it flat-out says it won't claim ownership of your projects. However, the company can access anything you upload to Adobe servers: This access lets Adobe scan for illegal content, but also lets the company scrape your work to train its AI models.

While opting out of AI training is wise, the best way to continue using Adobe apps without worrying about Adobe's access is by keeping all projects local on your machine. If you don't use Adobe's cloud-based services, the company can only access your work for app-related tasks, like generating thumbnails—if the terms are to be believed.

These rules have also largely been in place for an undisclosed amount of time: The pop-up you may have seen this week was for you to agree to the small tweaks Adobe made to the terms, not to agree to sweeping changes. You already agreed to those policies—you just didn't know it. My recommendation? Limit your cloud-based work with Adobe going forward, unless you absolutely need to for work. The more of your content you can keep on your machine, the better.

Adobe Photoshop's New Terms of Service Demands the Right to Access Your Work

6 June 2024 at 11:30

Update: Adobe responded to this issue once on Friday, then again on Monday. The company hopes it has finally explained its terms of service clearly, and will be issuing a new terms of service for customers to agree to by June 18. However, nothing has really changed, and the situation largely remains the same as Thursday.

Creators opened Photoshop this week to find a new pop-up informing them of changes to the terms of service. That in and of itself isn't all that unusual: Companies change their TOS all the time, and just to bypass pop-ups, you've probably signed your life away (so to speak) more times than you can count.

However, upon closer inspection, Adobe's adjustments here are beyond the pale: Creators reading the pop-up realized Adobe wasn't changing a permission here and a permission there; rather, the company claims they now have the right to access the work generated from these programs for a myriad of purposes—including, no less, for training AI.

The terms of service, as listed on Adobe's site, appear to be from Feb. 17 of this year, and appear to apply to all Adobe apps. However, the company seems to have pushed out the pop-up to Photoshop users this week for the first time. While there are multiple sections describing these concerning new changes, section 2.2 summarizes the situation:

2.2 Our Access to Your Content. We may access, view, or listen to your Content (defined in section 4.1 (Content) below) through both automated and manual methods, but only in limited ways, and only as permitted by law. For example, in order to provide the Services and Software, we may need to access, view, or listen to your Content to (A) respond to Feedback or support requests; (B) detect, prevent, or otherwise address fraud, security, legal, or technical issues; and (C) enforce the Terms, as further set forth in Section 4.1 below. Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 (Creative Cloud Customer Fonts) below) using techniques such as machine learning in order to improve our Services and Software and the user experience. Information on how Adobe uses machine learning can be found here: http://www.adobe.com/go/machine_learning.

Naturally, creators didn't take the new rules well. Sam Santala, founder of Songhorn Studios, posted on X, lambasting Adobe for locking him out of Photoshop unless he agreed to give the company full access to his work.

Director Duncan Jones was equally irate: His post called out Adobe for interfering with his movie, and the ridiculous nature of demanding access to creators' work for the sole reason that they are using the company's software to produce that work. Jones has since deleted this post.

duncan jones post about adobe tos
Credit: Jake Peterson

Adobe's pop-up blocks creators from using Photoshop until they agree to the terms of service changes. Santala says he can't even uninstall Photoshop without agreeing to the changes first, which effectively binds creators: Either allow Adobe unlimited access to your work, or let Photoshop turn into a digital paperweight on your computer.

X is full of creators lobbing similar complaints towards Adobe, though the company has yet to comment on the situation. As of this article, the terms of service still reflect these changes.

Save Your Google Maps Timeline Data Before Google Deletes It

5 June 2024 at 18:30

Once known as Location History, Google Maps' Timeline feature is a fun way to view a chronicle of the places you've visited, and the routes you took to get there. You don't need to do anything to keep tabs on your adventures, as Google Maps simply logs your precise location at random times—even when you aren't using a Google app.

Your Timeline data has traditionally been tied to your Google Account. Any time you use a connected device to go on a trip, you're able to access that data anywhere you've signed into your Google Account. That's convenient, but not the most privacy-conscious option—especially for a feature that continuously logs your precise location in the background.

However, as reported by Android Police, Google is transitioning away from Timeline, at least on the web. Going forward, the plan is to generate and keep Timeline data on your device itself. While you can still preserve your previous Timeline escapades and link them to your current device, going forward, each device will generate its own data that will never be sent to the cloud. That means if you use your iPhone as your main Google Maps GPS device but take an Android phone on an individual road trip, neither device will know about the other's adventures. As a byproduct of this change, Google will no longer display any Timeline data on the web, as all data will be locked to users' devices.

Here's where things get a bit dicey: Google may also delete your past Timeline data unless you take action. The company said in an email sent to affected users that it will "try" to save the last 90 days of Timeline data from the first device you use with your Google Account after Dec. 1, 2024. Older data will simply be erased.

How to preserve your Google Maps Timeline data

Luckily, it's easy enough to keep the Timeline data you already have. According to Google's email to users, you first need to update the Google Maps app on the device you'd like to save the data to. (Remember, this data saves to specific devices now, so you'll only be able to access it from this one device.) From there, you should receive a prompt via push notification, email, or in-app notification. Follow the instructions, tap Done, and you'll be all set.

As noted, the deadline is Dec. 1, 2024, so you have nearly six months to get this done. Still, if you really care about Timeline, it might be better to set it up on your preferred device at your earliest convenience. Also consider backing up your Timeline data. That way, if you move devices, you'll be able to restore it.

That said, if you don't have Timeline enabled, you will need to enable it from your general Google Account settings first. (I didn't have the feature turned on myself. When I try to access Timeline from the web after enabling it, I get a pop-up that tells me the feature isn't available on the web anymore, and to use the app instead. Oh well.)

google maps settings
Credit: Lifehacker

Google Is an Even Bigger Privacy Nightmare Than You Think

3 June 2024 at 17:00

Saying "Google is a privacy nightmare" in 2024 probably isn't telling you anything you don't already know. It's an open secret that one of the biggest tech companies in the world gobbles up our data, with and without our consent, and uses it in a bunch of different ways, some of which you might find unscrupulous.

But Google still has the capacity to shock: 404 Media has revealed details from six years' worth of privacy and security reports contained within an internal Google database. These previously unreported privacy incidents number into the thousands, and were disclosed by Google employees to the company.

The incidents run the gamut in terms of severity, and it's worth noting that some affected only a limited pool of users, or were rapidly addressed by Google. However, as a whole, the collection of incidents 404 Media shared today is as fascinating as it is concerning.

Privacy issues affecting children, YouTube users

Many of these incidents affected children. One claim suggests Google exposed over one million email addresses from Socratic.org users following an acquisition of the company, including those belonging to minors—and it's possible those users' IP addresses and geolocation data were also exposed. Another claim says a "Google speech service" logged all audio for an hour, and the recordings included speech information for around 1,000 children; a filter that was set to block data collection when it detects children's voices failed to work. And during the launch of the YouTube Kids app, children that pressed the microphone button on an Android keyboard had their audio logged.

Other incidents also involved YouTube. Most notably, Nintendo's YouTube account was mildly compromised after a Google employee was able to access its private videos. That employee then leaked news that Nintendo was preparing to reveal in an upcoming announcement, although Google says the incident was "non-intentional." YouTube also suggested videos to users based on videos those people had deleted from their watch histories, which goes against YouTube's internal policies. It's not clear why it happened. YouTube's blurring feature also left uncensored versions of pictures available to view, and videos uploaded as "Unlisted" or "Private" had a short window during which they were publicly viewable.

Waze leaked addresses and Google Docs links were made public

It doesn't stop there. Other general privacy and security issues include problems with the carpool feature in Waze, which reportedly leaked both trip information and the addresses of users. Someone reportedly manipulated affiliate tracking codes through AdWords (Google's ad platform at the time) by modifying customer accounts; a raid of Google's Jakarta office was leaked through a warning from Google's security service; and for a time, Google Drive and Google Docs on iOS treated the “Anyone with the link" setting as a "Public" link.

The most egregious incident, in my view, impacted people who weren't actively using a Google service in the first place. The report alleges that Google's Street View feature transcribed and saved license plate numbers alongside geolocation information. That's a pretty big mistake, Google. Not that any of us actually consented to Google taking photographs of nearly every street in the world, but the company is supposed to censor identifying information, like faces, license plates, and, of course, where in the world you happened to be when that Street View photo was taken—not log it away.

To Google's credit, the company told 404 Media that these reports were all addressed, and are from over six years ago. Google says it is all part of the company's process for reporting product issues: If an employee detects a problem, such as a privacy or security violation, they can flag it and send along to the appropriate department for triage. The company also said some of these flags ended up not being issues at all, or stemmed from problems impacting third-party services.

Too big to avoid

Admittedly, all products and services, especially at the scale at which Google operates, are going to have issues from time to time. No company makes the perfect system, and when issues happen, what's important is how the company responds, and what it changes to ensure the issue doesn't occur again. It's tough to be so understanding, though, when you're talking about a company as gargantuan as Google. The search giant owns a piece of all of our data in some way, shape, or form, so when one of their products has an issue, whether it involves revealing censored images, logging audio from users, or storing private data with geolocation tags, it's going to affect an outsized number of people.

It doesn't even matter if you pledge to swear off using Google products for good: You could abstain from internet-connected devices entirely, and still have your license plate scraped and stored by Street View. There's no getting away from it: Google is now everywhere, and we can only hope they are being as responsive and thorough as they claim in safeguarding our data.

You Can Finally Play 3DS Games on Your iPhone With This App

3 June 2024 at 15:30

Since Apple changed its App Store rules back in April, we've seen quite a few retro game emulators hit the market for the iPhone. You can use an emulator like Delta to play games from the NES to the DS, and Retroarch to relive some OG PlayStation titles. But, until now, there hasn't been a way to play Nintendo 3DS games on your iPhone. If you wanted to play something like Ocarina of Time 3D or Super Mario 3D Land, you'd need to choose another emulator platform—or get a 3DS itself.

That was the case, at least, until Folium. Folium is the first emulator to hit Apple's App Store that supports playing 3DS games. Plus, it plays DS and Game Boy Advance games, so it's kind of perfect for anyone solely interested in Nintendo's final three eras of dedicated handhelds. That said, if you want earlier Game Boy games, or retro consoles like the SNES or N64, you'll need to download another emulator.

Unlike Delta, Folium isn't free: The developer, Jarrod Norwell, is charging $4.99 for the app. If you want to play DS games for free, you may want to try another app, but if you're dead set on playing 3DS games on your iPhone, you'll need to pay for it. Seeing as $5 is roughly an eighth of what a single 3DS game used to sell for, it's not a bad deal.

[Screenshot: Folium on the App Store. Credit: Lifehacker]

Like other retro emulators on iOS, Folium doesn't actually provide you with games to play. That would go against Apple's App Review guidelines, which allow emulators but not the distribution of copyrighted material. After paying for and installing Folium on your iPhone, you'll need to add your own ROMs, or game files, to the app. While the legality here is murky, you'll find that emulator fans insist it's legal, so long as you own the game you're trying to play through the emulation software.

Once you have your 3DS ROMs, you can run Folium on iPhones running iOS 15 or newer, iPads running iPadOS 15 or newer, Macs running macOS 12 or newer, and even the Vision Pro. The app supports different upscaling filters, like HQx and xBRZ, so you can control how these older games look on your modern smartphone display. For DS and 3DS games, you can also choose whether to boot into the Home Menu or start the game right away.

Folium supports a range of controller options. Sure, you can play on your iPhone's touchscreen, but you may find pairing the app with a dedicated controller a more pleasant and accurate experience.

Apple's AI-Powered Siri Could Make Other AI Devices (Even More) Useless

31 May 2024 at 13:00

Thus far, AI devices like the Rabbit R1 and the Humane Ai Pin have been all hype, no substance. The gadgets have largely failed to deliver on their promise as true AI companions, and even setting aside the constant glitches that come with a rushed-to-market strategy, they share a fundamental flaw: Why do I need a separate device for AI when I can do basically everything they advertise with a smartphone?

It's a tough sell, and it's made me quite skeptical of AI hardware taking off in any meaningful way. I imagine anyone interested in AI is more likely to download the ChatGPT app and ask it about the world around them rather than drop hundreds of dollars on a standalone device. If you have an iPhone, however, you may soon be forgetting about an AI app altogether.

Siri might be the AI assistant we've been promised

Although Apple has been totally late to the AI party, it might be working on something that actually succeeds where Rabbit and Humane failed. According to Bloomberg's Mark Gurman, Apple is planning a big overhaul of Siri for a later version of iOS 18. While rumors previously suggested Apple was working on making interactions with Siri more natural, the latest leaks suggest the company is giving Siri the power to control "hundreds" of features within Apple's apps: You say what you want the assistant to do (e.g. crop this photo), and it will. If true, it's a huge leap from using Siri to set alarms and check the weather.

Gurman says Apple had to essentially rewire Siri for this feature, integrating the assistant with LLMs for all its AI processing. He says Apple is planning on making Siri a major showcase at WWDC, demoing how the new AI assistant can open documents, move notes to specific folders, manage your email, and create a summary for an article you're reading. At this point, AI Siri reportedly handles one command at a time, but Apple wants to roll out an update that lets you stack commands as well. Theoretically, you could eventually ask Siri to perform multiple functions across apps. Apple also plans to start with its own apps, so Siri wouldn't be able to interact this way within Instagram or YouTube—at least not yet.

It also won't be ready for some time: Although iOS 18 is likely to drop in the fall, Gurman thinks AI Siri won't be here until at least next year. Other than that, though, we don't know much else about this change at this time. But the idea that you can ask Siri to do anything on your smartphone is intriguing: In Messages, you could say "Hey Siri, react with a heart on David's last message." In Notes, you could say "Hey Siri, invite Sarah and Michael to collaborate on this note." If Apple has found a way to make virtually every feature in iOS Siri-friendly, that could be a game changer.

In fact, it could turn Siri (and, to a greater extent, your iPhone) into the AI assistant companies are struggling to sell the public on. Imagine a future when you can point your iPhone at a subject and ask Siri to tell you more about it. Then, maybe you ask Siri to take a photo of the subject, crop it, and email it to a friend, complete with the summary you just learned about. Maybe you're scrolling through a complex article, and you ask Siri to summarize it for you. In this ideal version of AI Siri, you don't need a Rabbit R1 or a Humane Ai Pin: You just need Apple's latest and greatest iPhone. Not only will Siri do everything these AI devices say they can, it'll also do everything else you normally do on your iPhone. Win-win.

The iPhone is the other side of the coin, though: These features are power intensive, so Apple is rumored to be figuring out which features can be run on-device, and which need to be run in the cloud. The more features Apple outsources to the cloud, the greater the security risk, although some rumors say the company is working on making even cloud-based AI features secure as well. But Apple will likely keep AI-powered Siri features running on-device, which means you might need at least an iPhone 15 Pro to run it.

The truth is, we won't know exactly what AI features Apple is cooking up until they hit the stage in June. If Gurman's sources are to be believed, however, Apple's delayed AI strategy might just work out in its favor.

You Can Now Talk to Copilot In Telegram

30 May 2024 at 18:00

Generative AI applications like ChatGPT, Gemini, and Copilot are known as chatbots, since you're meant to talk to them. So, I guess it's only natural that chat apps would want to add chatbots to their platforms—whether or not users actually, you know, use them.

Telegram is the latest such app to add a chatbot to its array of features. Its chatbot of choice? Copilot. While Copilot has landed on other Microsoft-owned platforms before, Telegram is among the first third-party apps to offer Copilot functionality directly, although it certainly isn't obvious if you open the app today.

When I first learned about Telegram's Copilot integration, I fired up the app and was met with a whole lot of nothing. That isn't totally unusual for new features, as they usually roll out gradually to users over time. However, as it turns out, accessing Copilot in Telegram is a little convoluted. You actually need to search for Copilot by its Telegram username, @CopilotOfficialBot. Don't just search for "Copilot," as you'll find an assortment of unauthorized options. I don't advise chatting with any random bot you find on Telegram, certainly not any masquerading as the real deal.

You can also access it from Microsoft's "Copilot for Telegram" site. You'll want to open the link on the device you use Telegram on, as when you select "Try now," it'll redirect to Telegram.

Whichever way you pull up the Copilot bot, you'll end up in a new chat with Copilot. A splash screen informs you that Copilot in Telegram is in beta, and invites you to hit "Start" to use the bot. Once you do, you're warned about the risks of using AI. (Hallucinations happen all the time, after all.) In order to proceed, hit "I Accept." You can start sending messages without accepting, but the bot will just respond with the original prompt to accept, so if you want to get anywhere you will need to agree to the terms.

[Screenshot: Copilot running in Telegram. Credit: Lifehacker]

From here, you'll need to verify the phone number you use with Telegram. Hit "Send my mobile number," then hit "OK" on the pop-up to share your phone number with Copilot. You don't need to wait for a verification text: Once you share your number, you're good to go.

From here, it's Copilot, but in Telegram. You can ask the bot questions on a variety of subjects and tasks, and it will respond in kind. This version of the bot is connected to the internet, so it can look up real-time information for you, but you can't use Copilot's image generator here. If you try, the bot will redirect you to the main Copilot site, the iOS app, or the Android app.

There isn't much here that's particularly Telegram-specific, other than a function that shares an invite with your friends to try Copilot. You also only get 30 "turns" per day, so keep that in mind before you get too carried away with chatting.

At the end of the day, this seems to be a play by Microsoft to get Copilot in the hands of more users. Maybe you wouldn't download the Copilot app yourself, but if you're an avid Telegram user, you may be curious enough to try using the bot in between conversations. I suspect this won't be the last Copilot integration we see from Microsoft, as the company continues to expand its AI strategy.

Here's How Apple Is Planning to Secure Your AI Data

30 May 2024 at 15:00

It's no secret that Apple is working on AI features that will roll out with iOS 18 and macOS 15. When you update your iPhone, iPad, and Mac later this year, you may find a more natural-sounding Siri, or be able to generate emojis based on whatever you're talking about in Messages. Pretty cool—but how will Apple protect your data while the AI processes all these nifty new features?

While reports suggest Apple will be running many of these features on-device, at least with its newer products, rumors also say the company is planning on outsourcing much of the processing to the cloud. That's not atypical for the rest of the industry: Most AI processing right now happens in the cloud, simply because AI processing is intense. It's why companies continue to push the capabilities of their NPUs (neural processing units), which are specialized processors that exclusively handle AI functions. Apple has been using NPUs for years, but made a big show of touting the new M4 chip's beefy NPU earlier this year, while Microsoft started a new AI-PC standard with its Copilot+ PC line.

Running AI on-device is more secure

Of course, whether or not your AI features are running on your phone or in the cloud probably doesn't matter to you, so long as the feature is working as it should. The issue, however, is that running these features on-device provides an inherently more secure experience. By pushing the processing to the cloud, companies risk exposing user data to anyone with access, especially when the service doing the processing needs to decrypt user data first. Exposure risks include the employees of the company in question, but also bad actors that may try to break into the company's cloud servers and scrape whatever customer information they can find.

This is already an issue with services like ChatGPT, and it's why I advise against sharing any personal information with most cloud-based AI services: Your conversations are not private, and are all being fed to these servers, both for storage and to train the AI model. Companies with an investment in user privacy, like Apple, prefer to use on-device solutions whenever possible, since keeping user data isolated to your phone, tablet, or computer keeps it out of anyone else's hands.

How Apple will use 'Secure Enclave' to protect AI data

While newer Apple hardware should be powerful enough to run the AI features the company is cooking up, older devices, or features that are too power intensive, may force Apple to turn to cloud-based servers in order to offer those features at all. However, if a report from The Information, cited by Android Authority, is accurate, the company may have found a solution: the Secure Enclave.

The Secure Enclave is already part of the hardware of most Apple products in use today. It's a part of the SoC (System on a Chip) that is kept separate from the processor, and its job is to store your most sensitive information, like your encryption keys and biometric data. That way, if the main processor is ever compromised, the Secure Enclave ensures bad actors can't access its data.

According to The Information, Apple is working on an AI-cloud solution that would send all AI user data to the Secure Enclaves of M2 Ultra and M4 Macs running in its server farms. There, those server Macs could process the request while preserving encryption, then send the results back to the user. In theory, this process would keep user data safe while also giving older devices access to Apple's latest AI features.
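To make that idea a little more concrete, here's a rough Python sketch of the general pattern being described: a request that is encrypted on the device, decrypted only inside an isolated environment, and sent back encrypted. To be clear, this is not Apple's actual protocol (nothing below has been confirmed), and every name in it is made up for illustration.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assume the device and the server-side enclave have already agreed on a
# session key; in reality this would involve attestation and key exchange.
session_key = AESGCM.generate_key(bit_length=256)

def device_send(prompt: str) -> tuple[bytes, bytes]:
    """Encrypt the user's request before it ever leaves the device."""
    nonce = os.urandom(12)
    return nonce, AESGCM(session_key).encrypt(nonce, prompt.encode(), None)

def enclave_process(nonce: bytes, ciphertext: bytes) -> tuple[bytes, bytes]:
    """Decrypt, run the (stubbed) model, and re-encrypt the answer.
    The plaintext only ever exists inside this function, the 'enclave boundary.'"""
    prompt = AESGCM(session_key).decrypt(nonce, ciphertext, None).decode()
    answer = "summary of: " + prompt  # stand-in for the real AI model
    out_nonce = os.urandom(12)
    return out_nonce, AESGCM(session_key).encrypt(out_nonce, answer.encode(), None)

def device_receive(nonce: bytes, ciphertext: bytes) -> str:
    """Only the requesting device can read the result."""
    return AESGCM(session_key).decrypt(nonce, ciphertext, None).decode()

print(device_receive(*enclave_process(*device_send("summarize my notes"))))

The point of the pattern is simply that, even on someone else's hardware, the request and the response are never readable outside the isolated process.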

We won't know for sure whether this is Apple's plan until the company reveals what it's working on at WWDC, if it does at all. If Apple stays hush-hush about how it will protect AI user data, we may never know for certain. But seeing as Apple touts itself as a company that cares about user privacy, this approach (or any approach that ensures cloud-based data is end-to-end encrypted) would make a lot of sense.

Google Just Announced Eight New Features for Android

30 May 2024 at 12:00

New features are the best part of any software update, but surprise new features are even better. Google just announced a new feature drop today, complete with eight new features to try on your Android device. Surprisingly, these features don't have too much to do with AI, Google's big focus right now. Seeing as its AI Overviews project is going quite poorly, it's almost refreshing to see a handful of traditionally useful features coming to Android.

You can now edit your sent messages

Google is finally rolling out the ability to edit your RCS messages after you've sent them. You have 15 minutes after sending a message to make any changes; to find the option, long-press on the message. Google didn't clarify whether there's a limit to the number of times you can edit a message within that 15-minute window, but the change puts the company in line with other messaging platforms like iMessage and WhatsApp.

New Emoji Kitchen combinations

Emoji Kitchen is a feature that lets you combine compatible emojis to create something brand new. (For example, a winking emoji and a ghost emoji become a winking ghost.) Google is now releasing new combinations for the feature, but it hasn't listed all the possible combos just yet. In the press release, Google highlights only one combination, headphones and disco ball, as a way to "get ready for festival season." Presumably, there are more to discover.

Switch between devices during a Google Meet call

Going forward, you'll be able to jump between your connected devices while on a Google Meet call. To do so, tap the Cast button and swap from, say, your web browser to your Android phone or tablet. This is a great feature for those of us who need to leave our desktops during a meeting, but want to keep up with the call. It's also great for the opposite: If someone calls you on your phone while you're out and about, but you're still chatting when you get back home, you can switch to your computer and wrap up the call from your desk.

Join your hotspot without the password

Google is rolling out "instant hotspot," which will let you connect your Android tablet or Chromebook to your phone's hotspot without needing to punch in the password each time. It's a small but welcome change that should make connecting to your hotspot feel a bit more like connecting to a known wifi network. (Even if you still have to choose to connect to your hotspot each time.)

Google Home Favorites widget

The Google Home Favorites widget is now available on the home screen for those who sign up for Public Preview. With it, you can control smart devices from your phone's home screen without needing to open the Google Home app first. I can see this being particularly convenient for quick actions, like turning smart lights on and off, or checking in on stats for devices like smart thermostats.

Google Home Favorites on Wear OS

In addition, Google is making a Google Home Favorites tile and complication (essentially a feature on the watch face) for your Wear OS smartwatch. So, same deal as above, just on your watch, if you'd prefer to adjust your smart home devices from your wrist.

PayPal is now on Google Wallet on Wear OS

In an update to Google Wallet, PayPal is now an option when paying for something with your Wear OS smartwatch, at least if you're in the U.S. or Germany.

Digital car keys

Google is taking this moment to roll out digital car keys on Android, starting with "select MINI models," and extending to select Mercedes-Benz and Polestar models at a later date. When you have a car that supports the feature, you'll be able to lock, unlock, or start your car with your phone, as well as share digital car keys with trusted contacts. Digital car keys, like those on iOS, are a slow-growing technology for a myriad of reasons, including cybersecurity and a lack of standardization. The more companies like Google embrace the tech, the likelier it is auto manufacturers will want to add the feature to their cars.


If you're looking for a new Android phone to try out these new features (as well as the rest Android has to offer), check out the recommendations from our sister site PCMag.

560 Million Ticketmaster Customers Allegedly Had Their Data Stolen

29 May 2024 at 12:30

Ticketmaster just had a massive data breach, if the hackers behind the attack are to be believed. According to HackRead, the ShinyHunters hacking group is claiming it hacked Ticketmaster, stealing 1.3TB of data from 560 million users. The hacking group posted the data on Breach Forums (a site ShinyHunters owns), offering the entire trove for $500,000 to any buyer willing to pay.

The reported data set includes personal information such as first and last names, home addresses, email addresses, phone numbers, the last four digits of credit and debit cards, card expiration dates, and customer fraud data. It also includes Ticketmaster account information, such as ticket sales, event information, and order details.

ShinyHunters says it has reached out to Ticketmaster regarding the hack, but that the company has not yet commented.

What to know about the lawsuit against Ticketmaster and Live Nation

This Ticketmaster hack comes just days after the Justice Department filed a lawsuit against Ticketmaster and Live Nation, accusing parent company Live Nation Entertainment of engaging in monopolistic practices and behaviors. Live Nation Entertainment stems from a 2010 merger between Ticketmaster and Live Nation, and since then, the DOJ claims the company has blocked venues from using other ticket companies through anti-competitive means.

The Justice Department believes Live Nation Entertainment's stronghold on the events industry has resulted in both inflated ticket prices and a worse experience for consumers. Ticketmaster could not handle demand for Taylor Swift's Eras Tour ticket sales back in 2022, for example, which led some to wonder whether more competition would have incentivized a better customer experience, even amid historic demand.

This legal battle will likely extend for some time, but it's an interesting backdrop for these breach allegations. If ShinyHunters really did breach Ticketmaster and steal the data of over 500 million users, it's a bad look at a bad time for the company. Plenty of companies are targets of data breaches these days, but when the government claims you're engaging in monopolistic activities, and you lose the data for over half a billion users, it doesn't put you in the best light.

What Ticketmaster users should do now

Unfortunately, there's not much to do at this point, as Ticketmaster has not yet publicly commented on the breach. We don't even know for certain that it happened, so there are no official steps to take yet.

That said, there are some things you can do to keep yourself protected in general. First, it might not be a bad time to change your Ticketmaster password. It's not clear whether passwords were part of the stolen data, but resetting yours is a good way to keep bad actors out of your account.

You may also want to start using a credit monitoring service, such as Equifax or Experian, to make sure you aren't the victim of fraud. If ShinyHunters really did steal this information, it may make it possible for other bad actors to steal your identity. These services can alert you to any fraudulent behavior, and walk you through how to respond.

All the AI Features Apple Is Planning to Roll Out This Year

30 May 2024 at 18:30

As the rest of the tech world scrambles to add as many AI features as possible to every product and service imaginable, Apple has kept quiet. In the eighteen months since OpenAI changed the game with the launch of ChatGPT, Apple has yet to roll out any substantial AI features to the iPhone, iPad, or Mac, even as Microsoft, Google, and Meta have seemed focused on nothing else.

But if the rumors are to be believed, that's changing this year, as Apple is expected to release new AI features for iOS 18 and macOS 15 at WWDC in June. There have been murmurs about the trillion-dollar company's AI plans for months now, and the hints keep on coming. According to Bloomberg's Mark Gurman, who has a reputable track record of reporting on Apple rumors, the company is planning an AI approach that isn't quite as flashy as some of the competition's efforts. Instead, Apple will roll out AI features that integrate with the apps iPhone and Mac users already know and use, such as Photos, Notes, and Safari. The initiative is known as "Project Greymatter." AppleInsider also confirms as much, and reports on a handful of additional AI features.

As a bit of an AI skeptic, I like this plan. It allows Apple to enter into the AI space while offering AI-powered features people might actually use, rather than wasting resources on "revolutionary" AI capabilities that most users will ignore once the novelty wears off.

How Apple plans to power its AI features

Whatever Apple ends up announcing this year, its AI features will need to be powered by...something. Your iPhone or Mac likely already has an NPU (neural processing unit), which is a part of the hardware specifically designed for running AI tasks. (Apple already does have some AI features, including Live Text, which uses the NPU to handle processing.) The company made a big deal about the NPU in the M4 chip in the new iPad Pros, which, once added to the new line of Macs, will likely power many of Apple's upcoming AI features.

However, not all features are ideal for on-device processing, especially for older iPhones and Macs. Gurman predicts that most of Apple's features will be able to run locally on devices made in roughly the last year. If your iPhone, iPad, or Mac is pretty new, the hardware should keep up. However, for older devices, or any features that are particularly power hungry, Apple is planning to outsource that processing power to the cloud.

Apple has reportedly been in talks with both OpenAI and Google to lease those companies' cloud-based AI processing to run some of its new features, but it's not clear whether (or when) those deals will materialize. Gurman says Apple plans to run some cloud-based features from server farms with M2 Ultra chips. If Apple can manage to handle the cloud processing on its own, I'm sure that's preferable to signing a deal with a competitor. We'll likely see how Apple's grand AI plan is coming together at WWDC.

Speaking of plans, here are the AI features Apple is rumored to be revealing in June:

Generative AI emojis

Emojis are a huge part of any iOS or macOS update (we may have already seen a handful of new emojis heading for a future version of iOS 18 and macOS 15). However, Gurman suggests Apple is working on a feature that will create a unique emoji with generative AI based on what you're currently typing. That sounds genuinely fun and useful, if done well. While there are a ton of emoji to choose from already, if nothing fits your particular mood, perhaps an icon created from whatever you're actively talking about with a friend will be a better choice. Over on Android, users have had "Emoji Kitchen," which lets you combine certain emoji to create something brand new, for a few years now. Apple seems poised to offer an effective iteration on that idea.

Building off this idea, AppleInsider says Apple is working on something called "Generative Playground," which allows users to create and edit AI-generated images. Perhaps the emojis are just one part of this experience.

Siri, powered by AI

I don't know about you, but I've never been too thrilled with Siri. The smart assistant frequently fails to follow through on my requests, whether because it misunderstands my query, or just ignores me altogether. For some reason, it's especially bad on macOS, to the point where I don't bother trying to use Siri at all on my MacBook. If Apple can find a way to supercharge Siri with AI, or at least make it reliable, that sounds great to me.

We learned about the possibility of Apple integrating AI with Siri earlier this month, based on info leaked to The New York Times by an unnamed source. Gurman's recent report suggests Apple is planning on making interactions with Siri "more natural-sounding," and while it's possible the company will outsource that work to Google or OpenAI, Gurman says the company wants to use its own in-house large language models (LLMs). There may even be a specialized AI Siri in the works for the Apple Watch.

That doesn't mean Apple is turning Siri into a chatbot—at least not according to Gurman, who reports that Apple wants to find a partner that can provide a chatbot for the company's platforms in time for WWDC, without Apple having to build one itself. Right now, it appears the company has sided with OpenAI over Google, so we may see ChatGPT on the iPhone this year.

According to AppleInsider, Apple plans to make other AI changes to Siri as well, including the ability to ask Siri on one device to control media playback on another device. (For example, asking Siri from your iPhone to pause a show on your Apple TV.)

Intelligent Search in Safari

Earlier this month, we learned Apple has at least one AI feature planned for Safari: Intelligent Search. This feature reportedly scans any given web page and highlights keywords and phrases in order to build a generative AI summary of the site. While Gurman says Apple is working on improved Safari web search, we don't know much more about how Intelligent Search will work just yet.

AI in Spotlight search

Speaking of better search, Apple may use AI to make on-device search in Spotlight more useful. The feature always gives me mixed results, so I'd love it if generative AI could not only speed up my searches, but also return more relevant results.

AI-powered accessibility features

Apple recently announced a handful of new accessibility features coming "later this year," which is almost certainly code for "coming in iOS 18 and macOS 15." While not all of these features are powered by AI, at least two of the AI-driven ones seem really interesting. First, Apple is using AI to allow users to control their iPhone and iPad with just their eyes, without the need for external hardware. There's also "Listen for Atypical Speech," which uses on-device AI to learn and identify your particular speech patterns.

Automatic replies to messages

As of last year, iOS and macOS will suggest words and phrases as you type, to help you finish your sentences faster. (This feature is not to be confused with the three predictive text options in Messages that have been around for years now.) With iOS 18 and macOS 15, Apple may roll out automatic replies as well. This means that when you receive an email or a text message, the app may suggest a full reply based on whatever you're responding to. Soon, we may all be communicating with single button taps. (Hmm, do I want to tap the response that says "Sounds good: I'll meet you there," or "Sorry, I can't. Raincheck!")

Notes gets an AI overhaul

AppleInsider's sources say Notes will be quite changed by AI: In iOS 18 and macOS 15, you'll be able to record audio directly in a note, and the AI will transcribe that recording as well. Apple already transcribes voice messages sent in the Messages app (quite quickly, in my experience), so utilizing AI to transcribe recordings in Notes only makes sense. Gurman suggests Voice Memos will also get this transcription feature.

In addition, AppleInsider says the Notes app will also get something called Math Notes, which can recognize math notation and answer math-related questions. As you write out your equations, "Keyboard Math Predictions" can suggest ways to autocomplete the notation. I don't have much use for complex math these days, but I imagine these changes could make the Notes app popular with students.

Smart recaps

Gurman says one of Apple's big focuses is "smart recaps," or AI-generated summaries of information you may have missed while not looking at your iPhone, iPad, or Mac, including notifications, messages, web pages, articles, notes, and more. iOS already has a notification summary feature, but these smart recaps sound more involved.

AppleInsider backs this claim up, and in fact goes so far as to suggest a working name: Greymatter Catch Up. According to the outlet, you'll be able to pull up these AI-generated summaries from Siri as well.

Photo retouching

AppleInsider also confirms another Gurman leak: AI is coming to the Photos app. While Gurman simply says Apple plans to use its AI tools for "photo retouching," AppleInsider says the feature will allow you to select objects to remove from photos, similar to other AI photo editing tools on the market now.

It's not clear if there are more AI editing tools in the pipeline, though. The company has already built an AI image editor that takes in natural language prompts to perform edits; it's possible it will incorporate some of those features into an AI photo "enhancer" for the Photos app on iOS and macOS.

Microsoft's New 'Recall' Feature Is Equal Parts Cool and Dangerous

7 June 2024 at 19:30

We take the search function for granted—when it goes well. If you search for a particular email, photo, or document on your PC, and it pops right up, you don't think twice about it. But if you spend 10 minutes scouring your hard drive looking for that one file, you lose your mind. That's where Microsoft hopes its new Recall feature can help—even if it comes with some major security risks.

What is Recall?

Recall, at its core, is simple: The feature quietly takes screenshots of what you're doing on your PC throughout your session. Whenever you perform a search with Recall, it pulls from all these screenshots to find relevant moments in your PC activity history that might be what you're looking for, stitching them together into a scrollable timeline. For example, if you're looking for a slideshow you were crafting for work, searching for it may pull up the times you were working on it in PowerPoint, as well as the presentation you gave with it. The same goes for an image: If you're looking for the photo of your dog at the park, you may see it from the time you opened it in your photos library, but also in the messaging app you used to send the photos to friends and family.

Recall associates these screenshots with the active app as well: As you scroll through your timeline, you can see not only the window you had open, but also which app was running and when. So if you know you want the PowerPoint session itself from February, you can skip over any screenshots from Teams.
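If you're curious what that kind of timeline might look like under the hood, here's a toy Python sketch of the general idea: store each capture with a timestamp, the active app, and the text recognized on screen, then search over that text. This is purely illustrative, not Microsoft's implementation; every name in it is invented.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Snapshot:
    taken_at: datetime
    active_app: str    # e.g. "PowerPoint", "Teams"
    screen_text: str   # text extracted (via OCR) from the screenshot
    image_path: str    # where the screenshot itself lives on disk

timeline: list[Snapshot] = []

def capture(app: str, text: str, path: str) -> None:
    timeline.append(Snapshot(datetime.now(), app, text, path))

def search(query: str, app_filter: str | None = None) -> list[Snapshot]:
    """Return matching snapshots, newest first, optionally limited to one app."""
    hits = [s for s in timeline
            if query.lower() in s.screen_text.lower()
            and (app_filter is None or s.active_app == app_filter)]
    return sorted(hits, key=lambda s: s.taken_at, reverse=True)

capture("PowerPoint", "Q3 sales deck draft", "shots/001.png")
capture("Teams", "presenting the Q3 sales deck", "shots/002.png")
for hit in search("sales deck", app_filter="PowerPoint"):
    print(hit.taken_at, hit.active_app, hit.image_path)

The real feature is obviously far more sophisticated, but the core moving parts are the same: periodic captures, text extraction, and a searchable index tied to time and app.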

While it's certainly a novel feature, Microsoft wasn't the first to launch a feature like this. Rewind offers a similar experience over on macOS, recording all your activity (including transcribing your audio) in order to make everything you do on your Mac searchable. Of course, the big difference here is Recall is a Microsoft-built feature, while Rewind is only offered by a third-party developer on macOS.

You also won't be able to use Recall on just any PC, even if it's running Windows 11. Instead, this is a Copilot+ PC exclusive, Microsoft's new AI-powered PC standard. These machines are equipped with the Snapdragon X Plus and Snapdragon X Elite chips, which have a dedicated neural processing unit (NPU) for handling local AI processes. Unless you have one of these new machines, like the new Surface Pro or Surface Laptop, you won't be able to try Recall when it launches. (At least, not officially.)

Is Recall safe to use?

Look, there's no getting around it: Recall takes screenshots of almost everything you do on your PC (assuming you haven't adjusted these settings yourself). That means it won't stop taking screenshots when you enter or access sensitive information like passwords, your Social Security number, or banking data: If you can see it on-screen, chances are Recall is recording it. To use Recall is to accept this practice happening on your PC at all times.

That said, from Microsoft's perspective, Recall is totally safe to use. Because it only runs on Copilot+ PCs, Recall is handled entirely on-device, with no processing outsourced to the cloud. That means everything, from the AI processing to the screenshots themselves, happens on your PC.

Plus, you have control over which apps and websites Recall takes screenshots for. If you don't want Recall to take screenshots when you use WhatsApp, you can tell it not to. You can choose to pause Recall for periods of time as well, and delete either recently taken screenshots, or all screenshots stored on your device. Private browsing sessions in certain browsers, like Microsoft Edge and Chrome, as well as DRM content, like Netflix shows and movies, will also not be recorded. (Your secrets really are safe with private browsing, I guess.)

Recall will also now be opt-in only, according to The Verge: Previously, the feature was slated to be on by default, something you would have to disable after setting up your PC. Now, however, you'll choose whether to enable it during the initial setup process. On top of that, Microsoft will now require you to authenticate yourself with Windows Hello before both setting up Recall and accessing its data. Microsoft will also encrypt the screenshots Recall takes as well as the search index database: To access any of the data, you need to authenticate with Windows Hello.

This is excellent news, as Recall did not originally have these protections. Instead, the Recall database was decrypted when you logged into your PC, which opened a huge security hole for bad actors to take advantage of. This was apparent just from Microsoft's original demo of the feature, but we saw the true security risks this week thanks to security researcher Kevin Beaumont. Beaumont tested the feature out for himself on a PC without an NPU, and concluded that hackers would have no problem scraping your Recall information once you unlock your PC.

Beaumont discovered that when Windows saved these screenshots to your machine, it also extracted all of the text from the images and stored that data as plain text. That's everything you might do on your PC, including accessing banking information, private websites, and messages, saved as plain text, minus the aforementioned exceptions, of course. This information wouldn't be deleted when you deleted the associated data or app, either: If you deleted a message in Teams, for example, it would live on in your Recall database forever. It wouldn't matter if the messages were set to auto-delete: If it came on screen, it would likely be saved to the database.
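To see why plain-text storage is the crux of the problem, consider a hypothetical example: if captured screen text sits unencrypted in a local database, anything running as the logged-in user, malware included, can read all of it with a few lines of code. The file path and table layout below are invented for the sake of illustration; this is not Recall's actual schema.

import sqlite3

# Hypothetical location and schema of an unencrypted capture database.
DB_PATH = r"C:\Users\me\AppData\Local\ExampleRecallStore\capture.db"

with sqlite3.connect(DB_PATH) as conn:
    rows = conn.execute(
        "SELECT captured_at, app_name, screen_text FROM captures "
        "WHERE screen_text LIKE ?",
        ("%password%",),  # anyone with file access can search for whatever they like
    )
    for captured_at, app_name, screen_text in rows:
        print(captured_at, app_name, screen_text[:80])

Encrypting that database at rest, and gating decryption behind Windows Hello, is exactly the kind of change that closes this particular door.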

Further, Beaumont found hackers could employ readily available infostealers to scrape your entire Recall database in seconds, and they wouldn't even need physical access to your computer. All they would need is for you to log in and decrypt your drive; from there, remote hacking software could steal your Recall data. Beaumont actually did it to his own PC: Windows' built-in security tool, Microsoft Defender, did identify the infostealer he was using, but it took over 10 minutes to block it, and by then the infostealer had already scraped all of his Recall data.

The changes Microsoft has implemented to Recall following these findings are definitely positive: Beaumont would likely not have been able to break in and scrape his Recall data quite so easily if the database was still encrypted following a login. But there are still questions here: Recall will likely still save data from apps and files you delete yourself, for example. Will there be any way to easily erase this data if you want it gone completely from your computer? Plus, there's no avoiding the issue that Recall saves all uncensored private information. It's a big risk should somebody figure out your Windows Hello PIN.

Microsoft is slowly figuring out how to implement Recall in a way that protects users, but it doesn't seem quite there yet. I no longer believe you should avoid this feature at all costs, but I also don't necessarily endorse it, either. My advice? Keep an eye on how Microsoft continues to evolve the security surrounding Recall. Perhaps it'll find the right combination of protections to ensure a feature like this can't be abused.


If you do want to try Recall, and any other Copilot+ PC-exclusive features, you can preorder one of Microsoft's new Surface devices.
