
Now Adobe Is Getting Sued by the U.S. Government

17 June 2024 at 15:00

Adobe just can’t catch a break. After raising eyebrows earlier this month with new terms of service that had users worried the company would be poking through their files and potentially training AI on their work, the Photoshop maker is now coming under fire from the FTC, this time over allegedly dishonest pricing. The agency is suing Adobe over hidden fees and hard-to-cancel subscriptions.

In a complaint filed on Monday, the FTC said, “Adobe has harmed consumers by enrolling them in its default, most lucrative subscription plan without clearly disclosing important plan terms.” In a related blog post, the regulator dinged Adobe for not making it clear that the subscription is a one-year commitment that charges 50% of any remaining payments when canceled, which can amount to “hundreds of dollars.”
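
To put that “hundreds of dollars” figure in context, here is a minimal back-of-the-envelope sketch of how a 50% early-termination fee adds up. The monthly price and cancellation point below are hypothetical assumptions for illustration; only the 50%-of-remaining-payments structure comes from the FTC’s complaint.

```python
# Hypothetical example of the early-termination fee described in the FTC complaint.
# The plan price and cancellation point are assumptions, not Adobe's actual terms.
MONTHLY_PRICE = 59.99   # assumed price of an "annual, paid monthly" plan (USD)
TERM_MONTHS = 12        # one-year commitment
MONTHS_PAID = 4         # assume the subscriber cancels after four months

remaining_payments = (TERM_MONTHS - MONTHS_PAID) * MONTHLY_PRICE
early_termination_fee = 0.5 * remaining_payments  # 50% of what's left on the contract

print(f"Remaining payments: ${remaining_payments:.2f}")        # $479.92
print(f"Early-termination fee: ${early_termination_fee:.2f}")  # $239.96
```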

The FTC also complained about Adobe’s poor treatment of customers who are trying to cancel. “Subscribers have had their calls or chats either dropped or disconnected and have had to re-explain their reason for calling when they re-connect,” reads the complaint.

In a statement, FTC Bureau of Consumer Protection director Samuel Levine said, “Americans are tired of companies hiding the ball during subscription signup and then putting up roadblocks when they try to cancel.”

The lawsuit targets Adobe executives Maninder Sawhney and David Wadhwani directly, implicating them for their control and authority in implementing such practices.

The complaint follows an investigation that began in 2022. Despite being aware of the increased scrutiny, “Adobe has nevertheless persisted in its violative practices to the present day,” the FTC says.

A screenshot of Adobe's pricing for its default plan
Credit: Michelle Ehrhardt

Currently, there are 20 different plans for individual subscribers listed on Adobe’s website, with more options available as you click through the listed cards. The Creative Cloud All Apps plan, which is highlighted with a “Best Value” banner, does say that a “fee applies if you cancel after 14 days” for its “Annual, paid monthly” tier, although it does not provide specifics on the amount, even when you hover over an info button. Customers can go as far as entering payment information without seeing the final figure.

In a statement posted to the company's newsroom, Adobe General Counsel and Chief Trust Officer Dana Rao said, "We are transparent with the terms and conditions of our subscription agreements and have a simple cancellation process. We will refute the FTC's claims in court." 


Microsoft Is Pulling Recall From Copilot+ at Launch

14 June 2024 at 14:30

It’s been a tough few weeks for Microsoft’s headlining Copilot+ feature, and it hasn't even launched yet. After being called out over security concerns and then switched from on-by-default to opt-in, Recall is now being delayed outright.

In a blog post on the Windows website on Thursday, Windows + Devices corporate vice president Pavan Davuluri wrote that Recall will no longer launch with Copilot+ AI laptops on June 18, and is instead being relegated to a Windows Insider preview “in the coming weeks.”

“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider Community to ensure the experience meets our high standards for quality and security,” Davuluri explained.

The AI feature was plagued by security concerns

That’s a big blow for Microsoft, as Recall was supposed to be the star feature for its big push into AI laptops. The idea was for it to act like a sort of rewind button for your PC, taking constant screenshots and allowing you to search through previous activity to get caught up on anything you did in the past, from reviewing your browsing habits to tracking down old school notes. But the feature also raised concerns over who has access to that data.

Davuluri explains in his post that screenshots are stored locally and that Recall does not send snapshots to Microsoft. He also says that snapshots have “per-user encryption” that keeps administrators and others logged into the same device from viewing them.

At the same time, security researchers have been able to uncover and extract the text file that a pre-release version of Recall uses for storage, which they claimed was unencrypted. This puts things like passwords and financial information at risk of being stolen by hackers, or even just a nosy roommate.

Davuluri wasn’t clear about when exactly Windows Insiders would get their hands on Recall, but thanked the community for giving a “clear signal” that Microsoft needed to do more. Specifically, he credited that feedback for the earlier decisions to disable Recall by default and to require Windows Hello (which uses either biometric identification or a PIN) before users can access Recall.

To be generous, limiting access to the Windows Insider program, which anyone can join for free, gives Microsoft more time to collect and weigh this kind of feedback. But it also takes the wind out of Copilot+’s sails just a week before launch, leaving the base experience nearly identical to current versions of Windows (outside of a few creative apps).

It also puts Qualcomm, which will be providing the chips for Microsoft’s first Copilot+ PCs, on a more even playing field with AMD and Intel, which won’t get Copilot+ features until later this year.

Samsung Just Announced Its Answer to the Apple Watch SE

13 June 2024 at 19:00

This week’s tech announcements might have led with big Apple news at WWDC, but they’re closing out with Samsung’s humblest smart product yet: the Galaxy Watch FE.

Starting at $199, the Galaxy Watch FE is the Android smartwatch leader’s first attempt to capture the success of the simple Apple Watch SE, rather than the more glitzy Apple Watch Ultra 2 or Pixel Watch 2. That means no rotating bezel, a smaller battery, only one size option, a smaller display, and a last-gen Exynos W920 chip. But with almost the same sensor loadout (minus temperature tracking) as the standard Galaxy Watch6, plus a $100 discount from the company’s current cheapest option, it finally gives Samsung’s wearables an entry-level pick.

It also has IP68 dust resistance and 5ATM water resistance, plus the same NFC (for contactless payments), GPS, Bluetooth, and wifi connectivity as Samsung’s more expensive watches. LTE connectivity can be added on for an extra $50, and you’ll boot up the watch to the familiar Wear OS 4 experience.

It’s unusual for Samsung to announce this now rather than wait for one of its Unpacked events, although it does give the company something to announce during its biggest mobile competitor’s biggest week. 

If you’re like me and only use your smartwatch very casually, it’s worth waiting until the June 24 release date for this one, since it seems like it will provide an almost identical moment-to-moment experience to the now mid-range Galaxy Watch6, even with its compromises (you’ll have to wait until later this year if you want LTE, though).

Granted, even with its clear comparisons to the Apple Watch SE, it looks like Samsung is taking a slightly different approach here. The tradeoffs: being based on Samsung’s mid-range Galaxy Watch6 rather than its more premium Galaxy Watch6 Classic means the FE doesn’t have any kind of physical dial, whereas the Apple Watch SE has the same digital crown as the rest of Apple’s watch lineup. At the same time, the FE does have an always-on display, something Apple Watch owners can’t get without spending $400. It’s also $50 cheaper than the cheapest Apple Watch. 

Google’s AI Is Still Recommending Putting Glue in Your Pizza, and This Article Is Part of the Problem

12 June 2024 at 16:30

Despite explaining away issues with its AI Overviews while promising to make them better, Google is still apparently telling people to put glue in their pizza. And in fact, articles like this are only making the situation worse.

When they launched to everyone in the U.S. shortly after Google I/O, AI Overviews immediately became the laughing stock of search, telling people to eat rocks, use butt plugs while squatting, and, perhaps most famously, to add glue to their homemade pizza.

Most of these offending answers were quickly scrubbed from the web, and Google issued a somewhat defensive apology. Unfortunately, if you use the right phrasing, you can reportedly still get these blatantly incorrect "answers" to pop up.

In a post on June 11, Bluesky user Colin McMillen said he was still able to get AI Overviews to tell him to add “1/8 cup, or 2 tablespoons, of white, nontoxic glue to pizza sauce” when asking “how much glue to add to pizza.”

The question seems purposefully designed to mess with AI Overviews, sure—although given the recent discourse, a well-meaning person who’s not so terminally online might legitimately be curious what all the hubbub is about. At any rate, Google did promise to address even leading questions like these (as it probably doesn’t want its AI to appear to be endorsing anything that could make people sick), and it clearly hasn’t.

Perhaps more frustrating is the fact that Google’s AI Overview sourced the recent pizza claim to Katie Notopoulos of Business Insider, who most certainly did not tell people to put glue in their pizza. Rather, Notopoulos was reporting on AI Overviews’ initial mistake; Google’s AI simply attributed that mistake to her because she wrote about it.

“Google’s AI is eating itself already,” McMillen said, in response to the situation.

I wasn’t able to reproduce the response myself, but The Verge did, though with different wording: The AI Overview still cited Business Insider, but rightly attributed the initial advice to Google’s own AI. Which means Google AI’s source for its ongoing hallucination is...itself.

What’s likely going on here is that Google stopped its AI from using sarcastic Reddit posts as sources, but it’s now turning to news articles reporting on its mistakes to fill in the gaps. In other words, as Google messes up, and as people report on it, Google will then use that reporting to back its initial claims. The Verge compared it to Google bombing, an old tactic where people would link the words “miserable failure” to a photo of George W. Bush so often that Google Images would return a photo of the president when you searched for the phrase.

Google is likely to fix this latest AI hiccup soon, but it’s all a bit of a “laying the train tracks as you go” situation, and certainly not likely to do anything to improve AI search's reputation.

Anyway, just in case Google attaches my name to a future AI Overview as a source, I want to make it clear: Do not put glue in your pizza (and leave out the pineapple while you’re at it).

Your X Likes Are Now Private, You Horny Little Devils

12 June 2024 at 15:30

In another move away from the old days of Twitter, X has started hiding the posts you’ve liked from other users. The news follows a previous promise from owner Elon Musk that X would start hiding public retweet and like counts until you click through to a post, although this change doesn’t go quite that far.

In a post on @XEng, the official account for X engineering updates, the site announced that while like counts still show up under your notifications, and you will still be able to see which posts you have liked, others will no longer be able to see that information. Essentially, the “Likes” tab on your profile page is now private, working more like the current Bookmarks system.

The one exception is a post’s author, who will still be able to see which accounts have liked their posts. (So you’ll still know when your mom likes that post about her grandkids.)

The change seems to already be live, with Musk posting “Important change: your likes are now private” to his personal account. As of writing, I can see my own likes page, but nobody else’s. Overall like count is still showing as normal for me on both my home feed and when I click through to posts.

Previously, public likes allowed users to find posts they might be interested in by looking through what their friends have liked. X competitor Threads uses a mix between the old and new system: Individual profiles don’t show a Likes tab, but you can click the amount of likes on a post to see who has liked it.

Hidden likes were previously restricted to paid users, although X director of engineering Haofei Wang explained in May the reasoning behind making them private for everyone. “Public likes are incentivizing the wrong behavior,” Wang said. “For example, many people feel discouraged from liking content that might be edgy in fear of retaliation.” (J.K. Rowling fans are no doubt breathing a sigh of relief.)

Suspiciously, X’s move to hide likes also comes shortly after its full-blown endorsement of porn on the platform.


So like all the nudes you want, you thirsty little devils, without fear of being called onto the carpet for your proclivities.

Everything Apple Announced at WWDC 2024

11 June 2024 at 19:00

Follow Lifehacker's ongoing coverage of WWDC 2024.

Apple's WWDC 2024 keynote spanned nearly two hours, covering everything from new iOS updates to a deep dive into Apple's grand plans for AI. If you missed the event, you don't have to sit through the livestream on YouTube: Here's everything Apple announced at WWDC this year.

Apple TV+

severance s2
Credit: Apple/YouTube

Apple kicked off its presentation with a preview of upcoming Apple TV+ shows and movies, which has quite the lineup on the horizon. The company announced Severance's second season in addition to fresh content like Dark Matter, Presumed Innocent, Fly Me to the Moon, Pachinko, Silo, Slow Horses, Lady in the Lake, The Instigators, Bad Monkey, Shrinking, and Wolfs.

VisionOS 2

visionOS 2
Credit: Apple

Apple next took the time to show off some new features for visionOS 2. While a modest update, Vision Pro users can look forward to a larger and higher-res Mac Virtual Display, mouse support, and new gestures for Control Center as well as options like seeing the current time, battery level, and volume adjustments.

visionOS 2 also lets you create spatial 3D images from 2D images in Photos, and it will support using the headset on trains. Right now, Vision Pro has a plane mode, but in other moving vehicles, the device struggles to orient itself correctly.

iOS 18

ios 18 icons
Credit: Apple

We all thought this would be the moment of the Apple event, when the company would roll out its set of brand-new AI features. That... did not happen. Instead, the company highlighted some small but interesting new changes to iOS, and saved the big AI announcements for later.

First up, Apple is opening up customization on iOS. The company now lets you put your app icons wherever you want, change app icon colors to your preference, and adjust the tint of the icons to match your iPhone's dark theme. You can also completely customize Control Center: Apple lets you choose what functions you want where. It honestly doesn't feel 100% Apple. iOS 18 will also allow you to lock your apps behind Face ID, Touch ID, or your PIN, as well as bury any apps in a new Hidden folder.

Messages gets a big upgrade: Tapback icons (thumbs up, heart, "Ha Ha," etc.) now have a more colorful redesign, but you can also choose any emoji from your phone to react with instead. You can now schedule messages, as well, so you don't need to set yourself a reminder to send an important text down the line. We're also getting text effects and formatting: Now, you can add effects to individual words, and choose from rich formatting options like bold, italics, underline, and strikethrough. In addition, you'll be able to message other phones via satellite when you don't have a cellular connection, and RCS is on its way.

Mail has a new look, too: You now have quick tabs at the top of the screen (Primary, Transactions, Updates, and Promotions) to sort through your emails. Mail also intelligently sorts through similar emails to deliver relevant information in one view, so you shouldn't have to scroll through multiple emails from your airline to find different flight data. Maps now has a ton of new hiking data, including new topographical maps and the ability to create hikes of your own. Tap to Cash lets you pay people by tapping phones together, while Game Mode reduces background activity to boost game performance and response time.

Photos has a big redesign: Rather than splitting content into multiple sections as the Photos app currently does, the new app is all one view: Your photos grid lives at the top, while the albums, memories, and other sorted material live below. It's a different approach to Photos than Apple has tried before, and it remains to be seen how legacy iOS users will fare with the new UI.

AirPods and TV

man wearing airpods in crowded elevator
Credit: Apple/YouTube

AirPods are getting some new features with iOS 18, as well: You can now interact with Siri by nodding your head for "Yes," or shaking your head for "No." Voice Isolation is also coming to AirPods Pro. It's not clear how this is different from the Voice Isolation already available on iOS and macOS, but Apple is claiming the feature can block out city noises entirely while on a call—at least if Apple's demo is to be believed. Personalized Spatial Audio is also coming to gaming, so developers can integrate the feature into their games.

Apple is also rolling out "InSight," which essentially copies Prime Video's X-Ray: When you pause a show or movie, you'll see, well, insights into the actors as well as what music might be playing during a scene. Enhanced Dialogue is rolling out to more devices, and will use machine learning (aka AI) for an improved experience. 21:9 projectors will also be supported in the latest version of tvOS, and Apple is rolling out new screensavers, like a new Snoopy animation.

watchOS 11

watchos 11 on apple watch
Credit: Apple

Like iOS 18, watchOS 11 is more of a modest update rather than an overhaul. But there are some interesting features to note: The update introduces a new app called Vitals. Here, you can review metrics like heart rate and sleep, and track how they've changed over time. There's also a new feature called Training Load, which takes all your metrics and body measurements to estimate a proper training effort (with a score of 1 through 10) you should be following in your exercises. The feature takes your own experiences into account, too, if you feel you're working too hard or too little.

Speaking of working too hard, you can now pause your Activity Rings (thank god), so your watch doesn't need to shame you for missing one day of goals. You can even set goals based on the day, so Monday can have a lower exercise goal than Tuesday. You can also customize the Fitness app with tiles, so you can quickly check the datasets and features that matter most to you.

Custom workouts support pool swims, and pregnant users can keep up with additional health-related information more easily. Your Apple Watch will use AI to power a new live translation feature, but we don't know that much about it at this time. Smart Stack gets some upgrades as well (including new Translate and Precipitation widgets), and watchOS will get iOS' "Check In" feature. That way, you can automatically notify a friend when you return home from an outdoor run. More apps will be able to take advantage of newer watches' Double Tap feature, Tap to Cash will work with Apple Watches, and there are new versions of the Photos watch face.

For more information about Apple's latest update for Apple Watch, check out our article here.

iPadOS 18

ipads running ipados 18
Credit: Apple

iPadOS 18 is getting a lot of the same features as iOS 18, including message scheduling and Game Mode, but by far the biggest update is the new Calculator app.

No, really. After over a decade of refusing to put a calculator app on the iPad, the mad geniuses at Apple have finally done it. iPadOS 18 can do math. And it can do it really well. Specifically, the new Calculator app for iPad will be able to take notes, taking advantage of machine learning to spice up the calculator experience for tablets.

Upon loading up the Calculator app, you’ll be able to use buttons for basic calculations as usual, but a quick tap will now take you to the app’s new Math Notes section. Here, you can write out formulas and other equations, and the page will automatically update with answers in your handwriting as soon as you draw an equals sign or other equivalent symbol. You’ll even be able to change already solved equations in real-time, and Math Notes will make revisions as appropriate.

There’s also support for graphs, which you can automatically create and adjust based on your already written equations. Your notes will appear in a history, like written notes do in the Notes app, so you’ll also be able to reference old problems days or weeks later. Teachers are certain to pull their hair out over this new “show your work” cheat tool, but Apple is positioning Math Notes as great for budgeting or even scientific prototyping.

You’ll also be able to access Math Notes capabilities in the standard Notes app, which is getting its own update with a new Smart Script feature.

Smart Script is a bit of an odd pitch, as the idea here is to correct your writing to look more like… your writing. Essentially, your iPad will now use machine learning to figure out what your handwriting generally looks like, then make small tweaks to any new writing to clean it up so it looks like its model of your handwriting.

If that sounds confusing, imagine the following: you’re in a lecture and you can’t afford to make perfectly pretty text with your notes, so you just scribble down what you hear and hope you can make sense of it later. Smart Script will try to take those scribbles and make them look like what your handwriting would resemble in a more ideal environment.

“It’s still your own writing, but it looks smoother, straighter, and more legible,” said Apple senior vice president of Software Engineering Craig Federighi.

Notes is also getting some new formatting options, including five new highlighter colors, the ability to collapse sections under headings and subheadings, and automatic content resizing. For instance, if you delete a paragraph, Notes will adjust your remaining content to fill the empty space.

For general navigation, a new floating tab bar will now appear at the top of several apps across the system and give easy access to different parts of that app, sort of like the menu bar in macOS. It’ll call out a few key sections by default (the Apple TV version of the tab bar has shortcuts to Apple TV+ and MLS content), but you can also expand it into a sidebar for more detail or add new shortcuts of your own.

Finally, there’s accessibility and SharePlay, which should both help with using the iPad. iPadOS 18 will add eye-tracking for navigation, plus vocal shortcuts that will allow users to map custom sounds to specific tasks and actions. Meanwhile, SharePlay will now be able to show when a user taps or draws on their screen, which should help when walking someone through a task directly. In more extreme cases, helpers could (with permission) take direct control of a device.

M-series powered iPads will also get access to Apple Intelligence features like Image Playground, which we’ll touch on later.

macOS 15 (Sequoia)

mac running sequoia
Credit: Apple/YouTube

macOS 15 is here, and it’s got a new, California-based name: macOS Sequoia. Like iPadOS, it’ll get iOS 18 features where appropriate, like Tapbacks and the Passwords app, and it’ll even have a version of Math Notes specific to typed text. But for exclusives, the focus this year is on window layout, device syncing, and gaming.

Perhaps the biggest of these is a new continuity feature: iPhone mirroring. Now, so long as your iPhone is nearby, you can pull up its screen on a window in your Mac, where you’ll be able to interact with apps and notifications. Your iPhone will remain locked while you do this, and you’ll even be able to share files directly from your Mac to your mirrored screen in supported apps like Unfold.

For when you’re just working on your Mac, you’ll also finally be able to pin windows. Third-party apps such as Magnet already offer this, but this is the first time it will be a native feature. Now, when you drag a window (or tile, as Apple calls them) to the edge of your screen, you’ll be able to slot it into a suggested position on your desktop. There are also keyboard and menu shortcuts for more advanced customization.

For video calls, a new presenter preview will also allow you to double check which of your windows you’re about to share before going live. There are also custom backgrounds to choose from, or you can use a photo.

Finally, there’s gaming. The updates here are more on the developer side, but they show a promising future for gaming on Mac. Basically, Apple is releasing a new game porting toolkit to make games originally built for Windows run on Mac. This should mean a wider library of titles available on Apple’s computers, with confirmed ports already on the way for blockbuster titles like Control and Assassin’s Creed Shadows.

Like on iPad, any Mac running an M-series chip will also get access to Apple Intelligence.

Safari

safari running on macos15
Credit: Apple/YouTube

Safari is getting a few updates specific for macOS Sequoia. These include a new video mode, article summaries, and extra bonuses powered by machine learning.

Now, when Safari detects a video on the page, a new Viewer mode will kick in. You’ll be able to click a button next to your address bar to hide everything on the page but the video, and when you click away from your browser, Safari will go into picture-in-picture mode to play your video in a corner of your screen.

For text-based content, the new Reader mode “instantly removes distractions from articles,” plus gives you a sidebar with what looks to be an AI-generated table of contents as well as a summary (Apple hasn’t been fully clear on how these work yet).

Finally, Highlights uses machine learning to add all sorts of context to your browsing. For instance, if you’re reading an article about a singer, Highlights will pull up a link to one of their songs in Apple Music. Similarly, if you’re looking at a hotel, Highlights will show you its location.

Apple Intelligence

iphone using apple intelligence to check flight info
Credit: Apple/YouTube

Apple’s big showstopper for WWDC this year was its entry into AI. Titled Apple Intelligence (pun intended), the company’s AI does a little old and a little new.

First up, what we don’t know. We don’t know what training data Apple is using for its AI, we don’t know how detailed its image generation can get, and we don’t know specific release dates (just release windows). But as for everything else, Apple was surprisingly forthcoming, possibly even more than Google was about Gemini during Google I/O.

Maybe the most exciting reveal was a revamped Siri. While Apple was the first to market with a digital assistant, Apple Intelligence is finally giving Siri the update it needs to keep up with competitors like Alexa. With Apple’s AI, Siri can now understand context, answering questions based on what it sees on your screen or intuitively understanding which contact you mean when you say “pull up my texts with Mike.”

This opens up the bot to new possibilities when it comes to search and actions, allowing you to search your photos and videos for things like “my daughter wearing a red shirt,” or ask Siri to add an on-screen address to a specific contact card for you.

Siri will also be able to answer questions about Apple products, coming preloaded with tutorials on things like “how to turn on dark mode.” Answers will now display in a box right on your screen, rather than in a linked help page.

You’ll also be able to give Siri prompts to create custom Memory Collages from your photos, and the bot will naturally decipher references to specific contacts, activities, places, or musical styles to stitch them together.

Finally, Siri will be able to take on a more traditional AI chatbot role by leaning on ChatGPT. When you ask Siri a question it thinks ChatGPT could help with, the bot will ask permission to send the question to ChatGPT for a response. Apple’s promised that any requests to ChatGPT will hide your IP address, and that ChatGPT will not log requests made through Siri. You won’t need an account to ask ChatGPT questions, but usage limits will apply, and ChatGPT subscribers can link their accounts to access paid features.

Outside of the realm of Siri, Apple Intelligence is also giving iPhones a Clean Up mode that works a lot like Google’s Magic Eraser. In the Photos app, just tap the Clean Up button, then circle or tap a specific subject to have your phone intelligently cut them out of a photo.

In another Pixel-like feature, Apple Intelligence also allows the Notes app to record and transcribe audio, even from phone calls. Participants on your call will be notified when you turn recording on, so nobody is surprised.

There are a couple of other organization goodies here, too. Both notifications and the Mail app can use AI to prioritize and summarize what you see, and the Mail app will even allow you to use AI to compose smart replies.

Then there are the more traditional features. Apple Intelligence will be able to rewrite or proofread text “nearly everywhere” you write, based either on custom prompts or pre-selected tones.

It’ll also be able to use ChatGPT to generate whole new text, using similar rules as Siri.

Image Playground will be your image generator, and will be able to incorporate AI art into your Notes, Messages, and more. You’ll have access to pre-selected subjects and art styles as well as a prompt box, although it’s not clear how much freedom you’ll have with the tool. Apple’s language emphasizes that users will “choose from a range of concepts” rather than type anything they want. What is clear is that Image Playground will have the same contextual approach as Siri, meaning it’ll be able to generate caricatures of people in your Contacts list with just a name.

Finally, there’s Genmoji, which are similar to Image Playground but are specifically custom, AI-generated emojis. Like regular emojis, they can be added inline to messages and shared as a sticker react.

That all sounds a little cool and a little scary, which is why Apple is emphasizing privacy with its AI. Apple wants to have most AI processing happen on-device, but for content that needs to touch the cloud, Apple is promising that data will never be stored and will only be used for requests. It’s also making its servers’ code accessible for third-party review.

The catch to all this? It relies on Apple Silicon neural engines. That means it’ll only come to devices with an A17 Pro chip or an M-series chip. This limits which iPads and Macs you can use, plus makes it so the only iPhones with Apple Intelligence (at least at the start) will be the iPhone 15 Pro and Pro Max.

Apple Intelligence will be available to try out in U.S. English in the summer and will come out in a beta form in the fall.

Apple Is Finally Bringing AI to iPhone, iPad, and Mac

10 June 2024 at 17:30

Follow Lifehacker's ongoing coverage of WWDC 2024.

Apple is finally ready for its AI moment, after years of speculation and four generations of devices with almost unused neural engines inside. At its 2024 WWDC conference, the company formally announced Apple Intelligence (yes, pun intended), set to release in beta for iPhone, iPad and Mac this fall.

It’ll be interesting to see how Apple handles such a nascent technology. As with Apple Vision Pro, the company usually prefers to wait on trends until it can release a refined, largely frictionless take on them. AI, meanwhile, still frequently “gets it wrong,” as Google learned with its AI Overviews feature earlier this month.

Nevertheless, Apple is going full steam ahead with AI in messages, mail, notifications, writing, images, and perhaps most intriguingly, Siri. The company promised it can maintain its reputation for polish, too, with a greater focus on privacy and on-device processing than the competition.

Details on how exactly Apple’s AI works are light, but overall, the company’s promising to do more than Google, Rabbit, or pretty much any competition has done so far. Let’s break it down.

AI in Siri

AI in Siri
Credit: Apple

Perhaps the most innovative Apple Intelligence feature is Siri, which is getting a full makeover complete with a new logo.

It’s a moment that’s been a long time coming: Since Siri introduced the world to the digital assistant in 2011, it’s been overtaken by competitors like Google Assistant and Alexa in many respects. Now, Apple is doubling down on Siri, fully revamping it with AI even as Google inches towards replacing Google Assistant with Gemini. The result? A much more natural AI assistant than on Android.

Right now, on Android, replacing Assistant with Gemini will just take you to a shortcut for the web app. Unlike its “dumber” predecessor, Gemini can’t set reminders, adjust phone settings, or open apps, meaning its promises of more functionality actually come with less functionality.

That’s not supposed to be the case with the new Siri, which will maintain all its “dumb” features, but come with new contextual awareness. Now, when you open Siri, it’ll take a look at what’s on your screen, and will be able to offer advice based on what’s displayed. You could be looking at the Wikipedia page for Mount Rushmore, for instance, and ask, “What’s the weather here?” to get Siri to tell you a forecast for your trip.

Contextual awareness isn’t limited to what you have pulled up in the moment, either. Apple says Siri will also be able to search your libraries and apps to take “hundreds of new actions,” even in third-party programs. Say you save this article to your reading list right now. When Apple Intelligence comes to your iPhone, you could ask Siri to, “Bring up the Lifehacker article about WWDC from my reading list” to access it again.

AI in Siri with contextual awareness
Credit: Apple

Or, more personally, say you’re texting a friend about a podcast. With the new Siri, you could just ask, “Play that podcast Dave recommended this weekend,” and Siri will know what you’re talking about and pull it up.

The implications here are big, both for usefulness and privacy. Overall, promised contextual features include:

  • Contextual answers for questions

  • Contextual search in photos and videos (for example, you could ask Siri to bring up all photos of you wearing a red shirt)

  • Ability to take contextual actions for you, like adding an on-screen address to a contact card or applying auto-enhance touch-ups to photos for you

But Siri’s also hoping to bring an Apple Genius into your home, as Siri is coming preloaded with tutorials on how to use your iPhone, iPad, or Mac. Just ask the assistant, “How to turn on dark mode” or, “How to schedule an email,” and Siri will reference its training material and feed you an answer through an on-screen notification, rather than sending you to a help page. (We'll still be here for all your tech advice needs.)

Siri how to for Apple device
Credit: Apple

One of Siri’s more traditional, prompt-based features is the ability to create custom, AI-powered video montages. Right now, Apple’s Memory Collages just automatically generate in the background, algorithmically tying together photos the OS thinks are related and setting them to background music the software thinks will fit. Soon, you’ll be able to give Siri specific directions, referencing contacts, an activity or place, and a style of music. Siri will then contextually generate a fitting montage, with music pulled from Apple Music.

There are also typical AI chatbot features, like the ability to ask questions. Oddly, Apple wasn’t clear on whether Siri will be able to answer questions directly (at least those not related to Apple devices), but the company has a back-up: Through Siri, you can ask ChatGPT your questions.

Because Apple’s privacy settings differ from ChatGPT’s (more on that later), Siri will prompt you to give it permission for ChatGPT each time you use it. Then, the assistant will ask your question for you, no account required. Like DuckDuckGo, Apple will also hide your IP address when using ChatGPT for you, and the company promises OpenAI will not log your requests. ChatGPT subscribers can also link their accounts to Siri for access to paid features, although Apple does warn that free users will face the typical data use limitations.

Siri’s AI features will be usable across iPhone, iPad, and Mac, and present what looks like a more natural AI-powered assistant than Google’s approach of starting over with Gemini. That said, if it seems like Siri is still limited compared to what other LLM chatbots can do, that’s because Apple Intelligence is much bigger than Siri.

Apple Intelligence is going above and beyond the Google Pixel

A big part of Apple’s AI presentation this year seemed targeted at Pixel, specifically its AI “feature drops.” Until now, Pixel transcriptions and Magic Editor have been big exclusives for Google, but Apple Intelligence is finally giving its biggest competitor a shot in the same arena.

First, iOS, iPadOS, and macOS devices are getting their own takes on Magic Eraser and Live Transcription. In the Photos app, users can tap a new Clean Up icon to circle or tap subjects they want cut out of a picture. Photos will remove the offending subject, then use generative AI to fill in where they were. It’s not quite on the level of Magic Editor, which allows you to move subjects around once selected, but Google is firmly on notice.

Clean Up in iPhone
Credit: Apple

Similarly, the Notes app will be able to summarize and transcribe audio recordings for you, a boon for journalists like me. I’ve had colleagues choose the Pixel for its transcribing feature alone, and now I’ll finally be able to keep up on my iPhone. Even better—Notes will also be able to transcribe phone calls live.

That does present a legal issue, so actual usage is likely going to differ from state to state and from country to country, as recording laws differ depending on where you are. For now, Apple says the Phone app will warn you when a recording is about to start.

But beyond features that are similar to those on Google’s flagship, Apple’s also developing unique draws of its own. Here, the company is making it easier to manage your notifications and mail.

The standout features here are Priority Messages and Priority Notifications. With Priority Messages, Apple AI attempts to find “the most urgent emails” and push them to the top of your inbox. Priority Notifications takes a similar approach, but with lock-screen notifications from texts and apps.

Priority Notifications
Credit: Apple

With both of these, you’ll have the option to have the AI write you a summary of the mail or notification rather than previewing its content, helping you quickly browse your feed. In Mail, you’ll actually be able to get summaries across your whole inbox.

Apple pitches this as a great way to stay up to date with time-sensitive information like boarding passes. Additionally, in Mail, you’ll be able to use Smart Reply to have AI quickly type out a reply for you based on the context of your email. You’ll also be able to get summaries across an entire conversation, not just for the first email.

With these updates, Apple is finally coming for Google’s software, hoping to dethrone the Pixel from being “the smartest smartphone.” But these innovations aren’t without risk. Take the Reduce Interruptions Focus mode, which will use AI to show “only the notifications that might need immediate attention, like a text about an early pickup from daycare.” Relying on Apple Intelligence for what to show you is putting a lot of trust in an untested model, although it’s promising for Apple that it has the confidence to push that kind of feature out at launch.

Apple can help you write and generate images

Speaking of risk, it’s time to talk about the bread and butter of AI: image and text generation.

Even as Google is telling people to use “squat plugs,” Apple apparently feels confident enough in its models that it’s trusting them to help you be creative. Enter Rewrite, Image Playground, and Genmoji. Across compatible first-party and even third-party apps, these will allow you to create content using both Apple’s own models, and in some cases, ChatGPT.

Rewrite is the most familiar of these. Here, Apple is promising system-level AI help with text “nearly everywhere” you write, including in Notes, Safari, Pages and more via developer SDKs. From a right-click style menu on highlighted text, users will be able to give Apple Intelligence a custom prompt, or select from a number of pre-selected tones, and the AI will then rewrite the text accordingly.

Not into having AI change your text? It’ll also be able to proofread it to point out errors, summarize it (useful if you’re reading rather than writing), or reformat it into a table or list.

It’s similar to Chrome’s new ability to rewrite text on a right-click, but with way more options and supposedly available across many more apps. It’s also more accessible than Copilot, which lives in a separate menu, siloed away from the rest of Windows.

You'll also be able to generate text from scratch, although Apple will lean on ChatGPT for this.

ChatGPT text generation on Mac
Credit: Apple

Image Playground and Genmoji are where things get more novel. Instead of having to go to a specific website like Dall-E or Gemini, Apple devices will now have image generation baked right into the operating system.

Available as its own app, baked into Messages, or integrated into other compatible apps via an SDK, Image Playground looks like your typical AI art generator, but powered by the same type of contextual analysis as Siri. For instance, you could give it a prompt, tell Image Playground to incorporate someone from your contacts list into it, and get art with a caricature of that person.

Again, Apple’s putting a lot of faith in its AI here. Say I send someone an image made with Image Playground and it’s not exactly flattering in how it depicts them: yikes.

Image Playground
Credit: Apple

That said, it seems like there might be guardrails on the experience. Apple’s marketing language is a bit vague as to what the limits are here, but even with a prompt box prominently displayed in its examples, Apple is consistently telling us that we’ll have to “choose from a range of concepts” including “themes, costumes, accessories, and places.” It’s possible Apple won’t let users generate controversial images, an issue Bing and Meta have previously contended with.

But let’s say you don’t want a full image with lots of detail anyway. Apple’s also introducing Genmoji, which are similar to Meta’s AI stickers. Here, you’ll be able to give Apple’s AI a prompt and get back custom emoji done up in a similar style to Unicode’s official options. Again, these can include cartoon representations of people from your contacts list, but like emoji, they can also be added inline to messages or shared as a sticker react. Again, we don’t know the limits of what Apple will allow here.

Genmoji on iPhone
Credit: Apple

We’ll have to wait until Apple’s AI images drop to properly see how well they compete against existing options, but perhaps the most interesting thing here is the ability to naturally generate images in existing apps. While Apple promises this will extend beyond Notes, one example the company showed had it selecting a sketch in Notes and generating a full piece of art based on it. Another had the AI simply generate a brand new image in Notes based on surrounding text.

That convenience, especially as AI remains split across dozens of sites and services, is sure to be a big selling point here.

Apple is promising private, on-device AI

Apple hasn’t been entirely forthcoming about the training materials for its AI, but something the company did push was its privacy.

Recently, moves from Meta and Adobe have raised concerns about AI’s access to users’ data. Apple wants to put any such worries about its own AI to bed right away.

According to Apple, any data accessed by its AI is never stored and is used only for requests. Further, Apple is making its servers’ code accessible to “independent experts” for review. But at the same time, the company is looking to reduce the number of times you have to touch the cloud as much as possible.

Enter the A17 Pro chip (introduced in the iPhone 15 Pro and Pro Max) and the M-series of chips (used in Macs since 2020 and iPads since 2021). Devices with these chips all have access to neural engines that Apple says will allow them to complete “many” requests on-device, without your information ever leaving your phone.

How exactly the split between on-device and on-cloud tasks will be handled is still up in the air, but Apple says that Apple Intelligence itself will be able to determine which requests your device is powerful enough to handle on its own and which will need help from servers before it decides where to send them.
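
Apple hasn’t said how that decision actually gets made, so here is a purely hypothetical sketch of the kind of routing logic the paragraph describes; the complexity score, the threshold, and every name in it are assumptions for illustration, not Apple’s implementation.

```python
# Purely hypothetical sketch of on-device vs. cloud routing -- Apple has not
# published how Apple Intelligence actually makes this call.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    estimated_complexity: float  # assumed 0.0-1.0 score from some cost model

ON_DEVICE_BUDGET = 0.6  # assumed capability threshold for the local model

def route(request: Request) -> str:
    """Keep lightweight requests on the device; send heavier ones to the cloud."""
    return "on-device" if request.estimated_complexity <= ON_DEVICE_BUDGET else "cloud"

print(route(Request("Summarize this note", 0.3)))        # on-device
print(route(Request("Rewrite this long chapter", 0.9)))  # cloud
```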

While that’s still a promise, this would be a huge win for Apple, with competing features like Magic Editor and Gemini still requiring constant internet connections.

When can I try Apple Intelligence?

Apple didn’t give any specific dates on when Apple Intelligence will go live, instead giving viewers two windows to look forward to.

First, the company said Apple Intelligence will be “available to try out in U.S. English this summer,” although given what it said next, that’s likely to be a limited demo.

That’s because the full beta of Apple Intelligence is set for this fall, meaning it’ll likely come after the full releases of iOS 18, iPadOS 18, and macOS 15 via an update.

Perhaps the biggest hurdle Apple has to overcome with its AI, beyond making good on its security promises and ensuring its content generation doesn’t ruffle any feathers, is availability. While the promise of having most AI on-device is great for privacy and even for situations where internet connectivity is limited, it does have a caveat: Apple’s announcement for its AI only mentions it coming to the iPhone 15 Pro, iPhone 15 Pro Max, and iPads or Macs with an M1 chip or later. Similarly, Siri and device language must be set to U.S. English to begin.

Xbox Is Releasing Three New Models This Year

10 June 2024 at 11:30

Summer is the season for new game announcements, but alongside a new Doom and the next Call of Duty, Sunday’s Xbox Games Showcase actually debuted some new Xbox consoles as well.

This isn’t an entirely new console generation, or even a mid-generation upgrade like the rumored PS5 Pro. Instead, Xbox is making good on its own rumors, specifically the ones about a discless Xbox Series X.

Coming “holiday 2024,” the now-confirmed All-Digital Xbox Series X comes in an all-white finish (as opposed to the original model's black) and will have all the same specs as the original Xbox Series X, including the 1TB storage drive. It will cost $449, which is a $50 discount from the current model and the same price as the competing all-digital PS5.

For those who want to beef up their Xbox Series X instead, there’s also a new “special edition” Xbox Series X in the works. It won’t be getting a performance upgrade, but storage will see a bump to 2TB and the console will be decorated with a “Galaxy Black” pattern, which essentially means it’ll have dozens of little stars decorating it. The system will launch this holiday in unspecified “limited quantities” for $599.

The 1TB Xbox Series S, which costs $349 ($50 more than the 512GB version), is also getting a new white color option. Previously, it was only available in black. It’s also slated for holiday 2024.

And that’s it. For folks who already have an Xbox, there’s little reason to swap out to one of these new models, but for those just now getting into the ecosystem, you’ll soon have plenty of additional color and capacity options.

Current Xbox Models

Until now, Xbox has used its white color scheme exclusively on the Xbox Series S, while black was reserved for the Series X and the 1TB Series S. Below is a list of all current Xbox Series S and Series X models:

  • Xbox Series S (512GB, white): $299

  • Xbox Series S (1TB, black): $349

  • Xbox Series X (1TB with disc drive, black): $499

Not All Copilot+ Laptops Are Getting Their AI Features Right Away

7 June 2024 at 14:30

Days before Apple’s WWDC event, Microsoft’s own AI laptop initiative just got some bad news. While the first Copilot+ devices are set to launch on June 18, it turns out only a portion of new laptops being sold under the Copilot+ name are going to actually have any kind of advanced AI features built-in at launch.

This comes as a bit of a surprise, as processor companies AMD and Intel both announced at Computex that they would be releasing Copilot+ compatible laptop chips later this fall. Prior to that, Microsoft had only ever talked about Copilot+ coming to Windows on Arm devices, which use chips from Qualcomm.

That’s a lot of branding to throw at you, but what this essentially means is that Copilot+ laptops are now set to be split between three different chipsets—and unfortunately, it looks like two of those chipsets are going to lag behind the other.

In statements to The Verge, representatives from Microsoft, Intel, AMD, and Nvidia (which is working with AMD) confirmed the non-Qualcomm laptops are going to have to wait for their Copilot+ features. While Arm-based laptops will arrive later this month with access to Recall and the gaming-focused Auto Super Resolution mode, laptops with AMD and Intel chips will have to wait for “free updates” to get those features, Microsoft marketing manager James Howell told The Verge.

More specifically, AMD PR manager Matthew Hurwitz indicated these “Copilot+ experiences” might not arrive until “the end of 2024.”

Perhaps some of this was to be expected. Intel’s own press release for its AI laptop chips mentions that Copilot+ features will come via free updates “when available.” But language and potential timetables weren't entirely clear until now.

While Microsoft does not directly manufacture AMD or Intel devices, with the exception of certain Surface models, this does put the Windows maker in a tricky spot as Apple prepares its own AI features to be rolled into macOS 15. Generally, an Apple user will be able to buy a new Mac and assume it can do all Mac things; Windows users looking to get into AI will be split across three options, and will have to do a lot more research about what they can and can’t do with their new machines right out of the box.

Apple Will Introduce a New Passwords App at WWDC

7 June 2024 at 13:30

Password managers are a great way to up your security online, and more and more companies are starting to bundle them into their software. Even Apple already has free password management features tucked away in iOS, Mac, and Safari, but they can be a bit hard to get to. Now, Bloomberg’s Mark Gurman says the company is about to introduce its own Passwords app to rival competitors like LastPass or 1Password.

Simply called Passwords, Gurman’s anonymous sources say, the app will come with iOS 18, iPadOS 18, and macOS 15. It will use the same iCloud Keychain technology powering Apple’s current password management systems, but users won’t have to dig through their Settings to get to it anymore.

As with iCloud Keychain, users will be able to generate and store login information, share passwords, and store passkeys in addition to passwords.

They’ll also be able to store passwords by category, meaning streaming service logins or social media logins could all be grouped together. Currently, iCloud Keychain stores passwords alphabetically.

Outside of the realm of passwords, the app will also be able to handle two-factor verification codes, allowing it to fulfill a similar function as Google Authenticator or Microsoft Authenticator.

Whether there will be other new features, such as Proton Pass’ dark web protection, is unclear. Basic use promises to be strong, however, as iCloud Keychain already allows users to store unlimited passwords.

In addition to the usual suspects, the app will also work on the Vision Pro headset, and surprisingly, Windows computers. The move could push Microsoft to update its Credential Manager, the free password manager built into Windows that, like iCloud Keychain, is currently buried away in settings apps.

Gurman says the new app will be announced as part of Apple’s WWDC keynote on Monday, although I expect AI to be the main focus of the event.

DuckDuckGo Now Lets You Talk to AI Anonymously

6 June 2024 at 08:00

Chatting with AI has proven to be a bit of a privacy risk as of late, with Meta admitting that it trains its AI by using your conversations with it (among other material posted to its services). Now, privacy-focused search engine and browser company DuckDuckGo is officially giving users a way to claw back some anonymity, but the solution does have its limits.

DuckDuckGo AI Chat, an intermediary for talking to AI chatbots while keeping them from seeing some of your information, is now officially available for everyone. You’ll be able to use the service to talk to OpenAI’s GPT 3.5 Turbo, Anthropic’s Claude 3 Haiku, and now Meta’s Llama 3 and Mistral’s Mixtral 8x7B. You won’t even need DuckDuckGo’s browser to access it. Simply navigate to either duck.ai or duckduckgo.com/chat. Searches made on DuckDuckGo’s search engine will also now feature a Chat tab that will open up an AI conversation on the searched topic (or you could begin your search with !ai or !chat).

Once on the page, pick your chat model and discuss as usual. This is where DuckDuckGo’s privacy features come into play.

According to DuckDuckGo, “all chats are completely anonymous: they cannot be traced back to any one individual.” This means that when you talk to an AI model via DuckDuckGo, the service will ask your questions on your behalf, preventing the chatbot’s owners from seeing your IP address. Users also have a “Fire” button, which can instantly clear the chat to start over.
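
DuckDuckGo hasn’t published how its service works under the hood, but what it describes is a classic anonymizing relay: your prompt goes to DuckDuckGo, which forwards it to the model provider from its own servers, so the provider sees DuckDuckGo’s IP address instead of yours. Here is a minimal conceptual sketch of that pattern; the endpoint URL and response format are made up for illustration, and this is not DuckDuckGo’s actual code or API.

```python
# Conceptual sketch of an anonymizing relay, NOT DuckDuckGo's actual implementation.
# The provider endpoint and response shape below are hypothetical.
import requests

PROVIDER_URL = "https://model-provider.example/v1/chat"  # hypothetical endpoint

def relay_chat(prompt: str) -> str:
    """Forward a user's prompt to the model provider on their behalf.

    The provider only ever sees the relay's IP address; the relay stores
    nothing about who asked, so the chat can't be traced back to a person.
    """
    response = requests.post(
        PROVIDER_URL,
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["reply"]  # hypothetical response field
```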

DuckDuckGo also promises that chats made using its service “are not used for any AI model training,” which might have you raising an eyebrow. Even if DuckDuckGo is hiding your IP address, your conversations still have to reach the chatbot provider’s servers, right?

Well, according to DuckDuckGo, the company has “agreements in place with all model providers to ensure that any saved chats are deleted by the providers within 30 days, and that none of the chats made on [its] platform can be used to train or improve the models.”

This is where the service hits its limits. For instance, you won’t always get access to totally up-to-date models here, like GPT-4o. The nature of these agreements means DuckDuckGo has to negotiate to add new chatbots to its service, although the company promises “more to come.”

There’s also the fact that partner companies can keep your chats on their servers for up to 30 days, which can be concerning even if DuckDuckGo says “all metadata is removed.”

Still, it’s a step in the right direction when it comes to interacting with AI, especially as Meta is forcing users to submit essays to protect their data on its own platforms (and not even giving some users the option). DuckDuckGo’s AI Chat is free within an unspecified daily limit, although DuckDuckGo says it is exploring a paid plan with higher limits and access to more advanced models.

Users who would rather not see any AI in DuckDuckGo can disable the new feature from the Search Settings menu.

How to Keep Porn Off Your X Feed Now That It's Officially Allowed

4 June 2024 at 17:30

Following a recent uptick in what seems to be bots making sexually suggestive posts on X, Elon Musk’s social media website appears to be giving up on policing them. The website formerly known as Twitter updated its policies over the weekend to allow “consensually produced and distributed adult nudity or sexual behavior” on the app, albeit with certain guardrails in place to at least nominally protect underage users.

The move does follow X’s past treatment of suggestive content, in that the site has never been known to crack down on porn. But with “pussy in bio” showing up uninvited under so many otherwise tame posts, the move to officially allow nudes could be seen as a bad sign for people who might want to use X in SFW environments.

Technically, these posts should also count as spam, and this author would argue that consent should also take the viewer into account, not just the people involved in making the content. With that in mind, here are the best ways we’ve found to clear your X timeline of porn.

Try reporting it

Again, even if porn is allowed, a bot that replies to your post with a nude and a link to an outside site should still count as spam. Your first port of call in this case should be to report it.

First, click on the three-dot menu in the top-right of the post, then click Report post next to the flag icon. There, choose Spam. Click the Next button to submit your report, where you’ll be prompted to Mute or Block the poster.

Instead of Spam, you could alternatively report the post for Abuse & Harassment or Sensitive or disturbing media. X only allows you to choose one reason for reporting, so pick the one that fits best. Abuse & Harassment is best reserved for suspected illegal content, like that involving child sexual abuse material, while Sensitive or disturbing media might be better applied to nudity not clearly placed behind a content warning.

Your mileage will vary on reports. I report pretty much any porn bot that wanders its way into my comments, and it has yet to stop them. Regardless, it’s good practice to help protect your fellow users.

Enable the Sensitive Content filter

X has its own built-in filter for sensitive content, although it has its limitations and can be a bit draconian, since it applies to anything the site considers “sensitive content,” which can bundle violence and gore in with porn.

If you’re OK with putting a general clean filter on your profile, open X in a browser (I could not see this option in the app), click on your profile icon, click on Settings and privacy, and open the Privacy and safety menu. From there, click Content you see, then make sure “Display media that may contain sensitive content” is unchecked.

This won’t stop accounts from posting nudes in your replies, but it will force you to click through before you’re able to view them.

While here, you can also click Search settings and toggle on Hide sensitive content to keep flagged posts from showing up in your searches.

This isn’t a perfect solution, since it affects your eyes only and doesn’t keep spam from interacting with you. It also relies on users either flagging their own posts as sensitive or Twitter’s system detecting sensitive posts before they get to your screen, but every little bit helps.

Try setting your account to under 18

This one is best used by parents on their kids’ accounts. Essentially, it works the same as the above method, but prevents you from clicking through to see the sensitive material.

It’s also super easy. When setting up an account, either don’t enter a birth date or ensure your birth date puts you under 18 years of age. X will consider you a minor, and will do its best to keep you from viewing adult content.

Mute suggestive words

To more selectively keep porn off your X feed, try instead blocking posts containing specific suggestive words.

Once again, navigate to your settings, open Settings and support, click on Privacy and safety, and open Mute and block. Here, you’ll be able to manage your Muted words, where you can try adding terms like the infamous “pussy in bio.”

It’s not a perfect solution, for a few reasons. First, you can only mute these words in your timeline, rather than in replies to your posts. Second, bots are clever and it can be hard to predict every term they’ll use. Instead of saying “pussy in bio,” for instance, they might say “pu$$y in bi0.”
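To see why exact matching falls short, here’s a rough Python sketch. It’s entirely hypothetical (X exposes nothing like this to users); it just shows how a looser pattern catches a few common character swaps that a literal mute for “pussy in bio” would miss.

```python
# Hypothetical illustration of why literal mute lists miss obfuscated spam:
# a loose regex tolerates common character swaps ($ or 5 for "s", 0 for "o",
# 1 or ! for "i"), while an exact-match mute only catches the literal phrase.
import re

SPAM_PATTERN = re.compile(r"pu[s5$]{2}y\s+in\s+b[i1!][o0]", re.IGNORECASE)

posts = [
    "pussy in bio",          # literal phrase: an exact mute catches this
    "pu$$y in bi0, click!",  # obfuscated variant: slips past an exact mute
    "cute dog in bio",       # harmless post: should not be flagged
]

for post in posts:
    flagged = bool(SPAM_PATTERN.search(post))
    print(f"{'FLAGGED' if flagged else 'ok':7}  {post}")
```

Even with fuzzier matching, bots only have to invent a spelling the pattern doesn’t anticipate, which is why muted words are a mitigation rather than a cure.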

Still, it doesn’t hurt to toss a few terms in here. You’ll be able to choose whether to apply the mute to anyone or just to people you don’t follow, and for how long you want the mute to be active.

Use a parental control or porn blocking app

While X once supported block lists, which would allow communities to create and share files that would mass block certain users, the feature is no longer active. The best alternative now lives in external apps, such as Canopy.

These apps run on your device and all work in different ways, so be careful when signing up to make sure your choice doesn’t infringe on your privacy too much.

Canopy is one of the more general-purpose options out there. When active, it monitors your screen with an AI algorithm and censors offending images in real time. Note that this requires giving the app some high-level permissions, although Canopy’s FAQ promises the company will “never sell your data” and only uses what it considers necessary “to provide you with a superior product.”

Canopy is a paid service starting at $7.99 per month for individual users, but it’s a good choice if you still want to use X for SFW content, since an app that relies on blocklists might keep you from visiting the site entirely.

Finally, Someone Is Making an AI Desktop

3 June 2024 at 18:30

A strange quirk of AI computers, from Chromebook Plus to Copilot+, is that most of them so far have been laptops. There’s no reason desktops couldn’t run AI features like Recall, and that’s something Qualcomm addressed during its Computex keynote on Sunday, where it hinted it might be working on a more stationary Copilot+ PC.

While advertising the company’s AI-forward Snapdragon X chips, Qualcomm CEO Cristiano Amon told the crowd Snapdragon X and Copilot+ are “coming to all PC form factors,” while presenting a slide that appeared to show Copilot+ running on a laptop, a few tablets, and most notably, a desktop computer.

During its own keynote, AMD showed that Qualcomm won’t be the only chip company powering Copilot+ machines, but even taking competing CPU manufacturers into account, this is the first time anyone has hinted at a Copilot+ desktop (outside of a dev kit, which can be difficult to get your hands on and isn’t really meant for the average user).

That’s about all the company was willing to share for now. The rest of Qualcomm’s conference focused on pushing Copilot+ in general, as well as the performance of the Snapdragon X Elite chip.

Snapdragon X and Copilot+ on desktop could come in one of two forms. First is the Apple iMac approach, which would see Microsoft and its partner manufacturers putting Qualcomm’s laptop AI chips in a desktop and calling it a day. Alternatively, Qualcomm could design a more powerful Snapdragon X chip meant to take better advantage of the additional space a full PC tower would provide. The latter would certainly take more work, as Qualcomm would have to make sure its chip would play nice with GPUs and other components from fellow PC parts manufacturers.

The first Copilot+ laptops are set to launch on June 18, with models coming from Dell, Asus, Microsoft, and more, so it will probably take some time until another wave launches.

AMD's New AI Laptop Chips Will Work With Copilot+

3 June 2024 at 16:00

AMD CEO Lisa Su took the stage at Computex 2024 yesterday to reveal the company’s long-awaited Zen 5 CPUs, which include powerful new desktop processors and, in a surprise move, its first-ever Copilot+ ready AI chips.

The CPUs will launch in July and include models for both desktop and laptop. Desktop models will focus on continued gains in more traditional tasks like gaming, while laptop models will now more directly compete with Qualcomm’s Snapdragon X chips.

Overall, new desktop models include Ryzen 5 9600X, Ryzen 7 9700X, Ryzen 9 9900X, and Ryzen 9 9950X, while new laptop models will start with Ryzen AI 9 HX 370 and Ryzen AI 9 365. This means desktop PCs will have almost a full lineup of new chips this summer (although AMD is sure to release additional, more specialized processors later on), but laptop users will have to wait for more affordable versions.

While manufacturer numbers should always be taken with a grain of salt, AMD promises that its top-of-the-line Ryzen 9 9950X chip will boast up to 56% faster productivity (in Blender) and up to 23% faster gaming (in Horizon Zero Dawn) over Intel’s competing Core i9-14900K chip. 

In a press briefing with The Verge, AMD senior technical marketing manager Donny Woligroski also said Zen 5 in general will boast “up to twice the AI performance of the last gen.”

While desktop PCs can certainly run AI software such as Photoshop, that AI boost will perhaps be most prominent in new AMD-powered laptops. In a rebranding of its Ryzen 9 laptop chips to Ryzen AI, the company is emphasizing its 50 TOPS neural processing unit, up from 16 TOPS in the last generation. On paper, this should allow for more computations per second than either the 38 TOPS unit in Apple’s M4 chip or the 45 TOPS unit in Qualcomm’s Snapdragon X chips.

The details are a bit technical, but they show that traditional x86 chips are just as ready to take on AI as Arm-based processors like Apple’s and Qualcomm’s. While the latter can boast strong battery life and have made recent gains in performance, they can struggle with app compatibility, and AMD is promising that it can still reign when it comes to raw power.

It’s also great news for anyone wanting to move to Copilot+ who doesn’t want to deal with emulating x86 programs on Arm, since AMD’s press conference confirmed that its new AI chips will power Copilot+ laptops from partners including MSI, Asus, Lenovo, and more. Previously, Qualcomm’s Arm chips seemed to have a grip on Microsoft’s new AI laptops.

Pricing for these chips has yet to be revealed, though PC builders can rest assured that they will all work with AMD’s existing AM5 socket, so there’s no need to upgrade your motherboard quite yet. Similarly, those with old-gen AM4 sockets are getting some additional support as well, with updated Ryzen 9 5900XT and Ryzen 7 5800XT chips also set to launch this July.

Google Finally Explained What Went Wrong With AI Overviews

31 May 2024 at 15:30

Google is finally explaining what the heck happened with its AI Overviews.

For those who aren’t caught up, AI Overviews were introduced to Google’s search engine on May 14, taking the beta Search Generative Experience and making it live for everyone in the U.S. The feature was supposed to give an AI-powered answer at the top of almost every search, but it wasn’t long before it started suggesting that people put glue in their pizzas or follow potentially fatal health advice. While they’re technically still active, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an answer from Google’s robots.

In a blog post yesterday, Google Search VP Liz Reid clarified that while the feature underwent testing, “there’s nothing quite like having millions of people using the feature with many novel searches.” The company acknowledged that AI Overviews hasn’t had the most stellar reputation (the blog is titled “About last week”), but it also said it discovered where the breakdowns happened and is working to fix them.

“AI Overviews work very differently than chatbots and other LLM products,” Reid said. “They’re not simply generating an output based on training data,” but instead running “traditional ‘search’ tasks” and providing information from “top web results.” Therefore, she doesn’t connect errors to hallucinations so much as the model misreading what’s already on the web.

“We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she continued. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice.” In other words, because the robot can’t distinguish between sarcasm and actual help, it can sometimes present the former for the latter.

Similarly, when there are “data voids” on certain topics, meaning not a lot has been written seriously about them, Reid said Overviews was accidentally pulling from satirical sources instead of legitimate ones. To combat these errors, the company has now supposedly made improvements to AI Overviews, saying:

  • We built better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.

  • We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.

  • We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.

  • For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.

All these changes mean AI Overviews probably aren’t going anywhere soon, even as people keep finding new ways to remove Google AI from search. Despite social media buzz, the company said “user feedback shows that with AI Overviews, people have higher satisfaction with their search results,” going on to talk about how dedicated Google is to “strengthening [its] protections, including for edge cases."

That said, it looks like there’s still some disconnect between Google and users. Elsewhere in its post, Google called out users for “nonsensical new searches, seemingly aimed at producing erroneous results.”

Specifically, the company questioned why someone would search for “How many rocks should I eat?” The idea was to break down where data voids might pop up, and while Google said these questions “highlighted some specific areas that we needed to improve,” the implication seems to be that problems mostly appear when people go looking for them.

Similarly, Google denied responsibility for several AI Overview answers, saying that “dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression” were faked.

There’s certainly a tone of defensiveness to the post, even as Google spends billions on AI engineers who are presumably paid to find these kinds of mistakes before they go live. Google says AI Overviews only “misinterpret language” in “a small number of cases,” but we do feel bad for anyone sincerely trying to up their workout routine who might have followed its "squat plug" advice.

The Apple Watch SE Is (Probably) All the Smartwatch You Need

31 May 2024 at 10:30

Sometimes, the most expensive option isn’t the best one, and smartwatches are no exception. For the past two weeks, I’ve swapped out my typical Apple Watch SE for the Apple Watch Ultra 2, and for the most part, I much prefer the $249 pick to the $799 one.

From an always-on display to an Action button, there’s a lot to love about the Apple Watch Ultra 2. But it’s a highly specialized device, with a lot of features included that most people won’t need. Even if cost wasn't a factor, I’d bet a good amount of folks would still prefer either an Apple Watch SE or Apple Watch Series 9.

Let’s break down the pros and cons of Apple’s most expensive and least expensive watches to help you find which one is right for you.

Why get a smartwatch?

I didn’t start wearing a smartwatch until just after the pandemic, when I started commuting to work more often. I’m not much of an athlete, and I thought the whole thing was kind of silly, like wearing a “please mug me” sign. I suppose people once said the same thing about flip phones.

The Apple Watch SE is meant for someone like me. It’s small, lightweight, and does just enough to win me over. I like to think of it like an updated iPod Nano. It’s a tiny box I can use to control my media, use tap-to-pay, and occasionally track walks. That’s all I need, but it was convenient enough that I’ve grown to love the thing and how it lets me keep my phone in my bag while on the train.

Others, however, have bought into smartwatches since the beginning. They love the detailed sensors higher-end models come with, the luxury looks available with an upgrade, and feeling their hefty cases on their wrists. For these people, the Apple Watch Ultra 2 is a great choice.

What does each Apple Watch model come with?

Apple currently sells three Apple Watch models, and I’ve tested two. None are bad choices, but they each cater to a different audience, so there’s a lot to take into account even if money is no object.

The cheapest Apple Watch is the Apple Watch SE, which comes with the smallest size option and bare-minimum specs. For $249, its aluminum body packs a 1,000-nit display, a battery that’s advertised to last up to 18 hours per charge, the S8 chip (powering features like Siri and Find My iPhone), and an optical heart rate monitor. You’ll get water resistance down to a depth of 164 feet, and for a $50 upcharge, you can add the ability to connect a cellular plan.

The Apple Watch Series 9 is one step up and starts at $399 (cellular adds $100 to the price, and you can pay even more for a stainless steel case). For that extra money, you’ll get a 2,000-nit always-on display, the S9 chip (unlocking the double tap feature and upgrading Siri as well as Find My iPhone), an ECG sensor, temperature sensing, fast charging, and a low power mode that purportedly stretches battery life to 36 hours.

Finally, the Apple Watch Ultra 2 starts at $799 and only comes in one model, with additional upcharges reserved for accessories. All Ultra 2 models have a titanium case, a 3,000-nit always-on display, the S9 chip, 328 feet of water resistance, an upgraded GPS, cellular compatibility, and a purported battery life of up to 36 hours in normal use and 72 hours in low power mode. There’s also an orange Action button on the side and additional sensors, including a gyroscope and a depth gauge.

All Apple Watch models come with OLED screens, a digital crown, speakers (although they’re upgraded on the Ultra), and a menu button, but if we were to discuss everything that’s different about them, we’d be here all day. For more details, check Apple’s site, but even with everything I’ve already laid out, I’ve yet to touch on the most important difference.

Apple Watch Ultra 2 in a gym
Credit: Michelle Ehrhardt

The Apple Watch Ultra 2 is too big and heavy for my wrist

Remember how I called the Apple Watch SE an updated iPod Nano? That wouldn’t fly with the Apple Watch Ultra 2.

The SE comes in 40mm and 44mm sizes (I have the 40), and weighs a max of 33g even on the larger model with cellular included. The Apple Watch Ultra 2 only comes in 49mm, and weighs 61.4g.

This thing is chonky, especially for a smaller wrist like mine, and feels less like wearing a control center for your iPhone and more like wearing a whole separate iPhone entirely. It’s cool for bragging rights, but less so for the type of everyday use that sold me on smartwatches in the first place.

Outside of its specialty use cases, the always-on display is about the only upgrade I actually enjoyed for most of my time using it. Everything else was just a burden.

If you’re like me, the Ultra 2 is just a bad pick, even if you can afford it and usually opt for top-of-the-line options like the M3 Max MacBook Pro. You’ll end up getting a lot you don’t use, and a worse experience with what you do, so do yourself a favor and cheap out.

When is the Apple Watch Ultra 2 worth it?

But that doesn’t mean the additions to the Ultra 2 are just back-of-the-box selling points that Apple is using to jack up the price. They’re genuinely useful for people who need them, i.e. outdoorsy folks.

The whole reason I started this comparison was to test out the updated Golfshot app, a golf course assistant for Android, iOS, and Apple Watch. Earlier this month, it got an update for Apple Watch Ultra and Ultra 2 that was a genuine game changer, and could totally make the upgrade worth it for me if I were a big golfer.

On top of adding driving ranges to the app’s lineup of courses, the update makes use of the Apple Watch Ultra line’s extra sensors to track your swing in detail every time. SwingID allows the app to track factors like tempo, rhythm, backswing, and the like, and while it’s available on Apple Watch Series 9, Apple Watch Ultra can track your swing at 800Hz, allowing it to detect exactly when you hit the ball.

In just a short few hours of play, I managed to use this data to see what was causing my shots to veer off to the right so frequently, and ended the session straightening them out.

It’s cases like this where the Apple Watch Ultra shines. For instance, the extra waterproofing and Depth app make it a diving companion, while the detailed watch face options, extra-large battery, loud speakers, and cellular connectivity make it useful for keeping hikers both informed and safe.

I’m not likely to use these features anytime soon, but given that competing activity watches like Garmin’s Descent Mk3 dive computer can reach into the thousands of dollars, the Apple Watch Ultra could be a fair replacement for more specialized equipment.

The large size also puts the Apple Watch Ultra in greater competition with luxury watches. I tend not to pick my outfit for bragging rights, but there’s no denying the Ultra looks slick, especially if you add one of Apple’s official Hermès bands.

Do I need Apple Watch Ultra if I’m just going to the gym?

I think the Apple Watch Ultra is best viewed as a specialty activity companion, and while I did try wearing it to the gym, I didn’t get much out of it. My typical day at the gym involves about a half-hour on the elliptical and ten minutes of weight lifting, and for this, the Ultra only really gave me one benefit: the Action button.

On the side of both Apple Watch Ultras is an orange Action button that can be set to trigger anything from a stopwatch to a flashlight (which turns the watch’s screen white and sets it to max brightness). Most available Action button functions are also available as features on the Apple Watch SE and Series 9, but require digging through menus, so being able to turn them on with a single button press is convenient. It's a similar experience to the Action button on the iPhone 15 Pro and 15 Pro Max.

I set the Action button to start my workouts, and I could press it again to pause them. I didn’t notice much deviation in the recorded data between the two watches, but because I usually just wait for my SE to detect that I’m working out before it starts tracking, the Action button let me time my workouts more accurately on the Ultra 2. The SE, for all its lightweight convenience, can be a little slow to notice when I’m in the gym.

If you work out outside, there’s also the larger battery to take note of. I usually have to charge my SE every night, but I was able to get away with charging it every other night on the Ultra 2. You’ll still be set for hours either way, but you’re less likely to accidentally wear a dead watch with the more expensive model.

Aside from in-exercise tracking, there is also something to be said for the ECG and Cycle Tracking apps. While these are also available on the more modestly priced Series 9, the closest the SE offers is the ability to manually log cycles.

Apple Watch SE worn on a wrist
Credit: Michelle Ehrhardt

Which Apple Watch should I get?

The best Apple Watch for you may not be the one with the most bells and whistles. I prefer a small, lightweight device with a minimal interface that I can mostly use as a companion while commuting, so the SE isn’t just a budget compromise to me: It’s my favorite option.

The Ultra 2, meanwhile, is great for people who regularly dive, golf, hike, or engage in some other more intense outdoor hobby. Its extra sensors and more rugged design allow it to keep up with more expensive specialty equipment, and despite its bulk, it’s still capable of everyday use cases like tap-to-pay.

The Series 9 is a great compromise. Its smallest option is only a touch larger than the SE’s, it comes in more colors, and it has a few extra sensors without getting as big as the Ultra 2. It’s a good splurge pick, but be sure to consider whether you’ll actually use its extra sensors before paying the extra $150 for it.

'AI Overviews' Is a Mess, and It Seems Like Google Knows It

29 May 2024 at 10:00

At its Google I/O keynote earlier this month, Google made big promises about AI in Search, saying that users would soon be able to “Let Google do the Googling for you.” That feature, called AI Overviews, launched earlier this month. The result? The search giant spent Memorial Day weekend scrubbing AI answers from the web.

Since Google AI search went live for everyone in the U.S. on May 14, AI Overviews have suggested users put glue in their pizza sauce, eat rocks, and use a “squat plug” while exercising (you can guess what that last one is referring to).

While some examples circulating on social media have clearly been photoshopped for a joke, others were confirmed by the Lifehacker team—Google suggested I specifically use Elmer’s glue in my pizza. Unfortunately, if you try to search for these answers now, you’re likely to see the “an AI overview is not available for this search” disclaimer instead.

Why are Google’s AI Overviews like that?

This isn’t the first time Google’s AI searches have led users astray. When the beta for AI Overviews, known as Search Generative Experience, went live in March, users reported that the AI was sending them to sites known to spread malware and spam.

What’s causing these issues? Well, for some answers, it seems like Google’s AI can’t take a joke. Specifically, the AI isn’t capable of discerning a sarcastic post from a genuine one, and it seems to love scanning Reddit for answers. If you’ve ever spent any time on Reddit, you can see what a bad combination that makes.

After some digging, users discovered the source of the AI’s “glue in pizza” advice was an 11-year-old post from a Reddit user who goes by the name “fucksmith.” Similarly, the use of “squat plugs” is an old joke on Reddit’s exercise forums (Lifehacker Senior Health Editor Beth Skwarecki breaks down that particular bit of unintentional misinformation here.)

These are just a few examples of problems with AI Overviews, and another one illustrates the issue particularly well: the AI’s tendency to cite satirical articles from The Onion as gospel (no, geologists actually don’t recommend eating one small rock per day). The internet is littered with jokes that would make for extremely bad advice when repeated deadpan, and that’s just what AI Overviews is doing.

Google's AI search results do at least explicitly source most of their claims (though discovering the origin of the glue-in-pizza advice took some digging). But unless you click through to read the complete article, you’ll have to take the AI’s word on their accuracy—which can be problematic if these claims are the first thing you see in Search, at the top of the results page and in big bold text. As you’ll notice in Beth’s examples, like with a bad middle school paper, the words “some say” are doing a lot of heavy lifting in these responses.

Is Google pulling back on AI Overviews?

When AI Overviews get something wrong, they are, for the most part, worth a laugh, and nothing more. But when referring to recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially fatal mushroom identification tips that the search engine also served to Beth.

Dangerous mushroom advice in AI Overviews
Credit: Beth Skwarecki

Google has attempted to avoid responsibility for any inaccuracies by tagging the end of its AI Overviews with “Generative AI is experimental” (in noticeably smaller text), although it’s unclear if that will hold up in court should anyone get hurt thanks to an AI Overview suggestion.

There are plenty more examples of AI Overview messing up circulating around the internet, from Air Bud being confused for a true story to Barack Obama being referred to as Muslim, but suffice it to say that the first thing you see in Google Search is now even less reliable than it was when all you had to worry about was sponsored ads.

Assuming you even see it: Anecdotally, and perhaps in response to the backlash, AI Overviews currently seem to be far less prominent in search results than they were last week. While writing this article, I tried searching for common advice and facts like “how to make banana pudding” or “name the last three U.S. presidents”—things AI Overviews had confidently answered for me on prior searches without error. For about two dozen queries, I saw no overviews, which struck me as suspicious given the email Google representative Meghann Farnsworth sent to The Verge that indicated the company is “taking swift action” to remove certain offending AI answers.

Google AI Overviews is broken in Search Labs

Perhaps Google is simply showing an abundance of caution, or perhaps the company is paying attention to how popular anti-AI hacks like clicking on Search’s new web filter or appending udm=14 to the end of the search URL have become.

Whatever the case, it does seem like something has changed. In the top-left (on mobile) or top-right (on desktop) corner of Search in your browser, you should now see what looks like a beaker. Click on it, and you’ll be taken to the Search Labs page, where you’ll see a prominent card advertising AI Overviews (if you don’t see the beaker, sign up for Search Labs at the above link). You can click on that card to see a toggle that can be swapped off, but since the toggle doesn’t actually affect search at large, what we care about is what’s underneath it.

Here, you’ll find a demo for AI Overviews with a big bright “Try an example” button that will display a few low-stakes answers that show the feature in its best light. Below that button are three more “try” buttons, except two of them now no longer lead to AI Overviews. I simply saw a normal page of search results when I clicked on them, with the example prompts added to my search bar but not answered by Gemini.

If even Google itself isn’t confident in its hand-picked AI Overview examples, that’s probably a good indication that they are, at the very least, not the first thing users should see when they ask Google a question. 

Detractors might say that AI Overviews are simply the logical next step from the knowledge panels the company already uses, where Search directly quotes media without needing to take users to the sourced webpage—but knowledge panels are not without controversy themselves.

Is AI Feeling Lucky?

On May 14, the same day AI Overviews went live, Google Liaison Danny Sullivan proudly declared his advocacy for the web filter, another new feature that debuted alongside AI Overviews, to much less fanfare. The web filter disables both AI and knowledge panels, and is at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.
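The hack itself is nothing exotic; it’s just one extra URL parameter. Here’s a minimal sketch in Python of what appending it looks like (plain URL construction, not any documented Google API):

```python
# A minimal sketch of the udm=14 trick: build a Google search URL that opens
# the plain "Web" results tab, skipping AI Overviews and knowledge panels.
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # "q" carries the search terms; "udm=14" selects the Web-only filter.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("how to make banana pudding"))
# -> https://www.google.com/search?q=how+to+make+banana+pudding&udm=14
```

Browser tweaks and custom search-engine shortcuts that implement the hack generally just rewrite the search URL this way.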

It’s all reminiscent of a debate from a little over a decade ago, when Google drastically reduced the presence of the “I’m feeling lucky” button. The quirky feature worked like a prototype for AI Overviews and knowledge panels, trusting so deeply in the algorithm’s first Google search result being correct that it would simply send users right to it, rather than letting them check the results themselves.

The opportunities for a search to be coopted by malware or misinformation were just as prevalent then, but the real factor behind I’m Feeling Lucky’s death was that nobody used it. Accounting for just 1% of searches, the button just wasn’t worth the millions of dollars in advertising revenue it was losing Google by directing users away from the search results page before they had a chance to see any ads. (You can still use “I’m Feeling Lucky,” but only on desktop, and only if you scroll down past your autocompleted search suggestions.)

It’s unlikely AI Overviews will go the way of I’m Feeling Lucky any time soon—the company has spent a lot of money on AI, and “I’m Feeling Lucky” took until 2010 to die. But at least for now, it seems to have about as much prominence on the site as Google’s most forgotten feature. That users aren’t responding to these AI-generated options suggests that you don't really want Google to do the Googling for you.
