
Apple's AI Plans Include 'Black Box' For Cloud Data

How will Apple protect user data while requests are being processed by AI in applications like Siri? Long-time Slashdot reader AmiMoJo shared this report from Apple Insider: According to The Information's sources [four former Apple employees who worked on the project], Apple intends to process data from AI applications inside a virtual black box. The concept, known internally as "Apple Chips in Data Centers," would involve only Apple's own hardware performing AI processing in the cloud. The idea is that Apple will control both the hardware and the software on its servers, enabling it to design more secure systems. While on-device AI processing is highly private, the initiative could make cloud processing for Apple customers similarly secure... By taking control of how data is processed in the cloud, Apple could more easily implement safeguards that make a breach far harder to pull off. Furthermore, the black box approach would also prevent Apple itself from seeing the data. As a byproduct, this would make it difficult for Apple to hand over any personal data in response to government or law enforcement requests. Processed data from the servers would be stored in Apple's "Secure Enclave" (where the iPhone stores biometric data, encryption keys and passwords), according to the article. "Doing so means the data can't be seen by other elements of the system, nor Apple itself."


Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic

Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work." The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...]

News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an expose about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit." Journalists also reacted to news of the deals through the publications themselves. On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI.

Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled "This article is OpenAI training data," in which he expressed apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and Bostrom's classic AI thought experiment, the "paperclip maximizer," and cautioning that a single-minded focus on market share and profits could ultimately destroy the very ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.


Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

A man covered in newspaper. (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."


At the whim of 'brain one'

given the current discussions around ai and its impact on artistry and authorship, creating a film reliant on the technology is a controversial but inevitable move. however, the software that hustwit and dawes have built may just hit the sweet spot where human meets machine, where the algorithm works to respect the material and facilitate an artistic vision: enter 'brain one' (B–1) and the first generative feature film.

'eno' is the first documentary about the pioneering artist brian eno, and the first generative feature film. the narrative is structured at the whim of 'brain one', the proprietary generative software created by hustwit and digital artist brendan dawes. using an algorithm trained on footage from eno's extensive archive and hustwit's interviews with eno, it pieces together a film that is unique at each viewing. as the order of scenes perpetually changes and what's included is never certain, the version you see is the only time that iteration will exist. "in some ways, the film is kind of like exploring the insides of his brain... it's different memories and ideas and experiences over the 50-year plus time frame."

related links:

  • ENO Teaser: Australian Premiere of Brian Eno Film @ Vivid Sydney Opera House

  • Sundance 2024: Generative AI Changes Brian Eno Documentary With Every View [Forbes]

  • 'Eno' Review: A Compelling Portrait of Music Visionary Brian Eno Is Different Each Time You Watch It [Variety]

  • 17-track Brian Eno compilation to accompany new doc [Uncut]

Google’s AI Overview is flawed by design, and a new company blog post hints at why

The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by publishing a follow-up blog post titled "AI Overviews: About last week." In the post, attributed to Liz Reid, Google VP and head of Google Search, the firm formally acknowledged issues with the feature and outlined the steps it has taken to improve a system that appears flawed by design, even if the company doesn't seem to realize it is admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
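Google hasn't published how AI Overviews works internally, but the description above matches the familiar retrieve-then-summarize pattern. The sketch below is purely illustrative and not Google's implementation; the tiny corpus, the ranking scores, and the summarize_with_llm stub are all hypothetical stand-ins.

```python
# Illustrative retrieve-then-summarize sketch; NOT Google's actual system.
# The corpus, rank scores, and the LLM stub are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str
    rank_score: float  # stand-in for a web-ranking signal

CORPUS = [
    Page("https://example.com/sauce-guide", "Simmer tomato sauce with garlic and basil.", 0.92),
    Page("https://example.com/forum-joke", "Add glue to pizza sauce for extra tackiness.", 0.35),
    Page("https://example.com/pizza-tips", "Let pizza dough rest before stretching it.", 0.88),
]

def retrieve(query: str, corpus: list[Page], k: int = 2) -> list[Page]:
    """Return the k highest-ranked pages sharing at least one word with the query."""
    terms = set(query.lower().split())
    hits = [p for p in corpus if terms & set(p.text.lower().split())]
    return sorted(hits, key=lambda p: p.rank_score, reverse=True)[:k]

def summarize_with_llm(query: str, pages: list[Page]) -> str:
    """Stub for the generative step; a real system would prompt a language model here."""
    joined = " ".join(p.text for p in pages)
    return f"AI Overview for '{query}': {joined}"

if __name__ == "__main__":
    query = "how to make pizza sauce"
    print(summarize_with_llm(query, retrieve(query, CORPUS)))
```

In this toy version the low-ranked forum joke never reaches the summarizer, which is exactly the filtering that, per the rest of this section, sometimes fails in practice.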

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.


Google Finally Explained What Went Wrong With AI Overviews

Google is finally explaining what the heck happened with its AI Overviews.

For those who aren’t caught up, AI Overviews were introduced to Google’s search engine on May 14, taking the beta Search Generative Experience and making it live for everyone in the U.S. The feature was supposed to give an AI-powered answer at the top of almost every search, but it wasn’t long before it started suggesting that people put glue in their pizzas or follow potentially fatal health advice. While they’re technically still active, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an answer from Google’s robots.

In a blog post yesterday, Google Search VP Liz Reid clarified that while the feature underwent testing, "there’s nothing quite like having millions of people using the feature with many novel searches.” The company acknowledged that AI Overviews hasn’t had the most stellar reputation (the blog is titled “About last week”), but it also said it discovered where the breakdowns happened and is working to fix them.

“AI Overviews work very differently than chatbots and other LLM products,” Reid said. “They’re not simply generating an output based on training data,” but instead running “traditional ‘search’ tasks” and providing information from “top web results.” Therefore, she doesn’t connect errors to hallucinations so much as the model misreading what’s already on the web.

“We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she continued. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice.” In other words, because the robot can’t distinguish between sarcasm and genuine help, it can sometimes present the former as the latter.

Similarly, when there are “data voids” on certain topics, meaning not a lot has been written seriously about them, Reid said Overviews was accidentally pulling from satirical sources instead of legitimate ones. To combat these errors, the company has now supposedly made improvements to AI Overviews, saying:

  • We built better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.

  • We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.

  • We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.

  • For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.

All these changes mean AI Overviews probably aren’t going anywhere soon, even as people keep finding new ways to remove Google AI from search. Despite social media buzz, the company said “user feedback shows that with AI Overviews, people have higher satisfaction with their search results,” going on to talk about how dedicated Google is to “strengthening [its] protections, including for edge cases."

That said, it looks like there’s still some disconnect between Google and users. Elsewhere in its post, Google called out users for “nonsensical new searches, seemingly aimed at producing erroneous results.”

Specifically, the company questioned why someone would search for “How many rocks should I eat?” The idea was to break down where data voids might pop up, and while Google said these questions “highlighted some specific areas that we needed to improve,” the implication seems to be that problems mostly appear when people go looking for them.

Similarly, Google denied responsibility for several AI Overview answers, saying that “dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression” were faked.

There’s certainly a tone of defensiveness to the post, even as Google spends billions on AI engineers who are presumably paid to find these kinds of mistakes before they go live. Google says AI Overviews only “misinterpret language” in “a small number of cases,” but we do feel bad for anyone sincerely trying to up their workout routine who might have followed its "squat plug" advice.

Apple's AI-Powered Siri Could Make Other AI Devices (Even More) Useless

Thus far, AI devices like the Rabbit R1 and the Humane Ai pin have been all hype, no substance. The gadgets largely failed on their promises as true AI companions, but even if they didn't suffer consistent glitches from a rushed-to-market strategy, they still have a fundamental flaw: Why do I need a separate device for AI when I can do basically everything advertised with a smartphone?

It's a tough sell, and it's made me quite skeptical of AI hardware taking off in any meaningful way. I imagine anyone interested in AI is more likely to download the ChatGPT app and ask it about the world around them rather than drop hundreds of dollars on a standalone device. If you have an iPhone, however, you may soon be forgetting about an AI app altogether.

Siri might be the AI assistant we've been promised

Although Apple has been totally late to the AI party, it might be working on something that actually succeeds where Rabbit and Humane failed. According to Bloomberg's Mark Gurman, Apple is planning a big overhaul of Siri for a later version of iOS 18. While rumors previously suggested Apple was working on making interactions with Siri more natural, the latest leaks suggest the company is giving Siri the power to control "hundreds" of features within Apple apps: you say what you want the assistant to do (e.g., crop this photo) and it will. If true, it's a huge leap from using Siri to set alarms and check the weather.

Gurman says Apple had to essentially rewire Siri for this feature, integrating the assistant with LLMs for all its AI processing. He says Apple is planning on making Siri a major showcase at WWDC, demoing how the new AI assistant can open documents, move notes to specific folders, manage your email, and create a summary for an article you're reading. At this point, AI Siri reportedly handles one command at a time, but Apple wants to roll out an update that lets you stack commands as well. Theoretically, you could eventually ask Siri to perform multiple functions across apps. Apple also plans to start with its own apps, so Siri wouldn't be able to interact this way within Instagram or YouTube—at least not yet.

It also won't be ready for some time: although iOS 18 is likely to drop in the fall, Gurman thinks AI Siri won't arrive until at least next year. Beyond that, we don't know much about this change at this time. But the idea that you can ask Siri to do anything on your smartphone is intriguing: In Messages, you could say "Hey Siri, react with a heart on David's last message." In Notes, you could say "Hey Siri, invite Sarah and Michael to collaborate on this note." If Apple has found a way to make virtually every feature in iOS Siri-friendly, that could be a game changer.

In fact, it could turn Siri (and, to a greater extent, your iPhone) into the AI assistant companies are struggling to sell the public on. Imagine a future when you can point your iPhone at a subject and ask Siri to tell you more about it. Then, maybe you ask Siri to take a photo of the subject, crop it, and email it to a friend, complete with the summary you just learned about. Maybe you're scrolling through a complex article, and you ask Siri to summarize it for you. In this ideal version of AI Siri, you don't need a Rabbit R1 or a Humane Ai Pin: You just need Apple's latest and greatest iPhone. Not only will Siri do everything these AI devices say they can, it'll also do everything else you normally do on your iPhone. Win-win.

The iPhone is the other side of the coin, though: these features are power intensive, so Apple is rumored to be figuring out which can run on-device and which need to run in the cloud. The more features Apple outsources to the cloud, the greater the security risk, although some rumors say the company is working on making even its cloud-based AI features secure. But Apple will likely keep AI-powered Siri features running on-device, which means you might need at least an iPhone 15 Pro to run it.

The truth is, we won't know exactly what AI features Apple is cooking up until they hit the stage in June. If Gurman's sources are to be believed, however, Apple's delayed AI strategy might just work out in its favor.

Russia and China are using OpenAI tools to spread disinformation

OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.


OpenAI says Russian and Israeli groups used its tools to spread disinformation

Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences

OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.


© Photograph: Dado Ruvić/Reuters

You Can Now Talk to Copilot In Telegram

Generative AI applications like ChatGPT, Gemini, and Copilot are known as chatbots, since you're meant to talk to them. So, I guess it's only natural that chat apps would want to add chatbots to their platforms—whether or not users actually, you know, use them.

Telegram is the latest such app to add a chatbot to its array of features. Its chatbot of choice? Copilot. While Copilot has landed on other Microsoft-owned platforms before, Telegram is among the first third-party apps to offer Copilot functionality directly, although it certainly isn't obvious if you open the app today.

When I first learned about Telegram's Copilot integration, I fired up the app and was met with a whole lot of nothing. That isn't totally unusual for new features, as they usually roll out gradually to users over time. However, as it turns out, accessing Copilot in Telegram is a little convoluted. You actually need to search for Copilot by its Telegram username, @CopilotOfficialBot. Don't just search for "Copilot," as you'll find an assortment of unauthorized options. I don't advise chatting with any random bot you find on Telegram, certainly not any masquerading as the real deal.

You can also access it from Microsoft's "Copilot for Telegram" site. You'll want to open the link on the device you use Telegram on, as when you select "Try now," it'll redirect to Telegram.

Whichever way you pull up the Copilot bot, you'll end up in a new chat with Copilot. A splash screen informs you that Copilot in Telegram is in beta, and invites you to hit "Start" to use the bot. Once you do, you're warned about the risks of using AI. (Hallucinations happen all the time, after all.) In order to proceed, hit "I Accept." You can start sending messages without accepting, but the bot will just respond with the original prompt to accept, so if you want to get anywhere you will need to agree to the terms.

copilot in telegram
Credit: Lifehacker

From here, you'll need to verify the phone number you use with Telegram. Hit "Send my mobile number," then hit "OK" on the pop-up to share your phone number with Copilot. You don't need to wait for a verification text: Once you share your number, you're good to go.

From here, it's Copilot, but in Telegram. You can ask the bot questions on a variety of subjects and tasks, and it will respond in kind. This version of the bot is connected to the internet, so it can look up real-time information for you, but you can't use Copilot's image generator here. If you try, the bot will redirect you to the main Copilot site, the iOS app, or the Android app.

There isn't much here that's particularly Telegram-specific, other than a function that shares an invite with your friends to try Copilot. You also only get 30 "turns" per day, so keep that in mind before you get too carried away with chatting.

At the end of the day, this seems to be a play by Microsoft to get Copilot in the hands of more users. Maybe you wouldn't download the Copilot app yourself, but if you're an avid Telegram user, you may be curious enough to try using the bot in between conversations. I suspect this won't be the last Copilot integration we see from Microsoft, as the company continues to expand its AI strategy.

Report: Apple and OpenAI have signed a deal to partner on AI

OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

It was previously reported by Bloomberg that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.


Tech giants form AI group to counter Nvidia with new interconnect standard

Abstract image of data center with flowchart. (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architecture—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
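To make that concrete, here is a minimal NumPy sketch of a single dense layer's forward pass; the shapes are arbitrary examples, and the point is only that the core operation is one large matrix multiplication, the workload GPUs (and the interconnects that link them) are built to parallelize.

```python
# One dense-layer forward pass is essentially a single large matrix multiplication.
# Shapes are arbitrary examples; real models chain thousands of such products.
import numpy as np

batch, d_in, d_out = 64, 4096, 4096   # 64 inputs, 4096 features in and out
x = np.random.randn(batch, d_in)      # input activations
W = np.random.randn(d_in, d_out)      # layer weights
b = np.zeros(d_out)                   # bias

y = x @ W + b                         # about 64 * 4096 * 4096 ≈ 1.1 billion multiply-adds
print(y.shape)                        # (64, 4096)
```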

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.


OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity'

An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The AI firm said that over the last three months the threat actors used its models to generate short comments and longer articles in a range of languages, as well as made-up names and bios for social media accounts. These campaigns, which included threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others. The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns did not see increased audience engagement or reach as a result of the AI firm's services, OpenAI said in the statement. OpenAI said these operations did not solely use AI-generated material but included manually written texts or memes copied from across the internet. In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.


You Can Use Pretty Much All of ChatGPT for Free Now

OpenAI continues to expand the options available to free ChatGPT users. The company started by making its newest model, GPT-4o, generally free to all users—though there are limitations unless you pay—and now it has expanded the accessibility of major 4o features by removing the paywalls on file uploads, vision (which can use your camera for input), and GPTs (or custom chatbots). Browse, data analysis, and memory, also formerly paywalled features, were already available to free users in a similarly limited capacity.

OpenAI has been clear about its plans to expand the offerings that its free users can take advantage of since it first revealed GPT-4o a few weeks back, and it has made good on those promises so far. With these changes, it makes paying for ChatGPT Plus even less important for many, which is surprisingly a good thing for OpenAI. More users means more usage testing—something that will only help improve the models running ChatGPT.

There will, of course, still be usage limits on the free version of ChatGPT. Once you reach those limits, you’ll be kicked back to GPT-3.5, as OpenAI hasn’t made GPT-4 or GPT-4 Turbo accessible in the free tier. Despite that, some paid users are not exactly happy with the change, with many wondering what the point of ChatGPT Plus is supposed to be now.

Paying users still get up to five times more messages with GPT-4o than free users do, but that hasn't stopped some from taking to social media to ask questions like “what about the paid users?” and “what do paid users get? False hopes of GPT5.”

ChatGPT Plus subscribers still get access to the ability to make their own GPTs, and based on everything we know so far, Plus users are the only ones who will get 4o's upcoming voice-activated mode, though that could certainly change in the future.

Giving more people access to ChatGPT’s best features brings the chatbot in line with one of its biggest competitors, Claude, which allows free users access to the latest version of its AI model (albeit through a less powerful variant of that model).

US Slows Plans To Retire Coal-Fired Plants as Power Demand From AI Surges

The staggering electricity demand needed to power next-generation technology is forcing the US to rely on yesterday's fuel source: coal. From a report: Retirement dates for the country's ageing fleet of coal-fired power plants are being pushed back as concerns over grid reliability and expectations of soaring electricity demand force operators to keep capacity online. The shift in phasing out these facilities underscores a growing dilemma facing the Biden administration as the US race to lead in artificial intelligence and manufacturing drives an unprecedented growth in power demand that clashes with its decarbonisation targets. The International Energy Agency estimates the AI application ChatGPT uses nearly 10 times as much electricity as Google Search. An estimated 54 gigawatts of US coal-fired generation assets, about 4 per cent of the country's total electricity capacity, is expected to be retired by the end of the decade, a 40 per cent downward revision from last year, according to S&P Global Commodity Insights, citing reliability concerns. "You can't replace the fossil plants fast enough to meet the demand," said Joe Craft, chief executive of Alliance Resource Partners, one of the largest US coal producers. "In order to be a first mover on AI, we're going to need to embrace maintaining what we have." Operators slowing down retirements include Alliant Energy, which last week delayed plans to convert its Wisconsin coal-fired plant to gas from 2025 to 2028. Earlier this year, FirstEnergy announced it was scrapping its 2030 target to phase out coal, citing "resource adequacy concerns." Further reading: Data Centers Could Use 9% of US Electricity By 2030, Research Institute Says.
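For a rough sense of scale, the arithmetic below takes the article's "nearly 10 times" ratio at face value; the 0.3 Wh-per-search baseline and the daily query volume are outside ballpark figures, not numbers from the report.

```python
# Back-of-the-envelope arithmetic for the "nearly 10 times" claim above.
# The per-search baseline and query volume are rough outside estimates, not from the article.
google_search_wh = 0.3                 # assumed energy per traditional search (watt-hours)
ai_query_wh = google_search_wh * 10    # "nearly 10 times as much" per the IEA estimate
queries_per_day = 9_000_000_000        # assumed Google-scale daily query volume

extra_gwh_per_day = (ai_query_wh - google_search_wh) * queries_per_day / 1e9
print(f"Extra demand if every search became an AI query: ~{extra_gwh_per_day:.0f} GWh per day")
# ~24 GWh/day, roughly the daily output of a 1 GW power plant running around the clock.
```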


Very Few People Are Using 'Much Hyped' AI Products Like ChatGPT, Survey Finds

A survey of 12,000 people in six countries -- Argentina, Denmark, France, Japan, the UK, and the USA -- found that very few people are regularly using AI products like ChatGPT. Unsurprisingly, the group bucking the trend are young people ages 18 to 24. The BBC reports: Dr Richard Fletcher, the report's lead author, told the BBC there was a "mismatch" between the "hype" around AI and the "public interest" in it. The study examined views on generative AI tools -- the new generation of products that can respond to simple text prompts with human-sounding answers as well as images, audio and video. "Large parts of the public are not particularly interested in generative AI, and 30% of people in the UK say they have not heard of any of the most prominent products, including ChatGPT," Dr Fletcher said.

This research attempted to gauge what the public thinks, finding:

  • The majority expect generative AI to have a large impact on society in the next five years, particularly for news, media and science.

  • Most said they think generative AI will make their own lives better.

  • When asked whether generative AI will make society as a whole better or worse, people were generally more pessimistic.

In more detail, the study found:

  • While there is widespread awareness of generative AI overall, a sizable minority of the public -- between 20% and 30% of the online population in the six countries surveyed -- have not heard of any of the most popular AI tools.

  • In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.

  • Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18-24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.

  • Roughly equal proportions across the six countries say that they have used generative AI for getting information (24%) as for creating various kinds of media, including text but also audio, code, images, and video (28%).

  • Just 5% across the six countries covered say that they have used generative AI to get the latest news.


'AI Overviews' Is a Mess, and It Seems Like Google Knows It

At its Google I/O keynote earlier this month, Google made big promises about AI in Search, saying that users would soon be able to “Let Google do the Googling for you.” That feature, called AI Overviews, launched earlier this month. The result? The search giant spent Memorial Day weekend scrubbing AI answers from the web.

Since Google AI search went live for everyone in the U.S. on May 14, AI Overviews have suggested users put glue in their pizza sauce, eat rocks, and use a “squat plug” while exercising (you can guess what that last one is referring to).

While some examples circulating on social media have clearly been photoshopped for a joke, others were confirmed by the Lifehacker team—Google suggested I specifically use Elmer’s glue in my pizza. Unfortunately, if you try to search for these answers now, you’re likely to see the “an AI overview is not available for this search” disclaimer instead.

Why are Google’s AI Overviews like that?

This isn’t the first time Google’s AI searches have led users astray. When the beta for AI Overviews, known as Search Generative Experience, went live in March, users reported that the AI was sending them to sites known to spread malware and spam.

What's causing these issues? Well, for some answers, it seems like Google’s AI can’t take a joke. Specifically, the AI isn’t capable of discerning a sarcastic post from a genuine one, and it seems to love scanning Reddit for answers. If you’ve ever spent any time on Reddit, you can see what a bad combination that makes.

After some digging, users discovered the source of the AI’s “glue in pizza” advice was an 11-year-old post from a Reddit user who goes by the name “fucksmith.” Similarly, the use of “squat plugs” is an old joke on Reddit’s exercise forums (Lifehacker Senior Health Editor Beth Skwarecki breaks down that particular bit of unintentional misinformation here.)

These are just a few examples of problems with AI Overviews, and another one illustrates the problem particularly well: the AI's tendency to cite satirical articles from The Onion as gospel (no, geologists don't actually recommend eating one small rock per day). The internet is littered with jokes that would make for extremely bad advice when repeated deadpan, and that's just what AI Overviews is doing.

Google's AI search results do at least explicitly source most of their claims (though discovering the origin of the glue-in-pizza advice took some digging). But unless you click through to read the complete article, you’ll have to take the AI’s word on their accuracy—which can be problematic if these claims are the first thing you see in Search, at the top of the results page and in big bold text. As you’ll notice in Beth’s examples, like with a bad middle school paper, the words “some say” are doing a lot of heavy lifting in these responses.

Is Google pulling back on AI Overviews?

When AI Overviews get something wrong, they are, for the most part, worth a laugh, and nothing more. But when referring to recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially fatal mushroom identification tips that the search engine also served to Beth.

Dangerous mushroom advice in AI Overviews
Credit: Beth Skwarecki

Google has attempted to avoid responsibility for any inaccuracies by tagging the end of its AI Overviews with “Generative AI is experimental” (in noticeably smaller text), although it’s unclear if that will hold up in court should anyone get hurt thanks to an AI Overview suggestion.

There are plenty more examples of AI Overview messing up circulating around the internet, from Air Bud being confused for a true story to Barack Obama being referred to as Muslim, but suffice it to say that the first thing you see in Google Search is now even less reliable than it was when all you had to worry about was sponsored ads.

Assuming you even see it: Anecdotally, and perhaps in response to the backlash, AI Overviews currently seem to be far less prominent in search results than they were last week. While writing this article, I tried searching for common advice and facts like “how to make banana pudding” or “name the last three U.S. presidents”—things AI Overviews had confidently answered for me on prior searches without error. For about two dozen queries, I saw no overviews, which struck me as suspicious given the email Google representative Meghann Farnsworth sent to The Verge that indicated the company is “taking swift action” to remove certain offending AI answers.

Google AI Overviews is broken in Search Labs

Perhaps Google is simply showing an abundance of caution, or perhaps the company is paying attention to how popular anti-AI hacks like clicking on Search’s new web filter or appending udm=14 to the end of the search URL have become.
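For reference, the udm=14 trick mentioned above is nothing more than an extra query parameter on the results URL. A minimal sketch of constructing such a URL (the search term is an arbitrary example):

```python
# Build a Google results URL with the "web filter" parameter discussed above.
# udm=14 is the parameter the article refers to; the query itself is an arbitrary example.
from urllib.parse import urlencode

query = "how to make banana pudding"
url = "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})
print(url)  # https://www.google.com/search?q=how+to+make+banana+pudding&udm=14
```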

Whatever the case, it does seem like something has changed. In the top-left (on mobile) or top-right (on desktop) corner of Search in your browser, you should now see what looks like a beaker. Click on it, and you’ll be taken to the Search Labs page, where you’ll see a prominent card advertising AI Overviews (if you don’t see the beaker, sign up for Search Labs at the above link). You can click on that card to see a toggle that can be switched off, but since the toggle doesn’t actually affect search at large, what we care about is what’s underneath it.

Here, you’ll find a demo for AI Overviews with a big bright “Try an example” button that will display a few low-stakes answers that show the feature in its best light. Below that button are three more “try” buttons, except two of them now no longer lead to AI Overviews. I simply saw a normal page of search results when I clicked on them, with the example prompts added to my search bar but not answered by Gemini.

If even Google itself isn’t confident in its hand-picked AI Overview examples, that’s probably a good indication that they are, at the very least, not the first thing users should see when they ask Google a question. 

Detractors might say that AI Overviews are simply the logical next step from the knowledge panels the company already uses, where Search directly quotes media without needing to take users to the sourced webpage—but knowledge panels are not without controversy themselves.

Is AI Feeling Lucky?

On May 14, the same day AI Overviews went live, Google Liaison Danny Sullivan proudly declared his advocacy for the web filter, another new feature that debuted alongside AI Overviews, to much less fanfare. The web filter disables both AI and knowledge panels, and is at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.

It’s all reminiscent of a debate from a little over a decade ago, when Google drastically reduced the presence of the “I’m feeling lucky” button. The quirky feature worked like a prototype for AI Overviews and knowledge panels, trusting so deeply in the algorithm’s first Google search result being correct that it would simply send users right to it, rather than letting them check the results themselves.

The opportunities for a search to be coopted by malware or misinformation were just as prevalent then, but the real factor behind I’m Feeling Lucky’s death was that nobody used it. Accounting for just 1% of searches, the button just wasn’t worth the millions of dollars in advertising revenue it was losing Google by directing users away from the search results page before they had a chance to see any ads. (You can still use “I’m Feeling Lucky,” but only on desktop, and only if you scroll down past your autocompleted search suggestions.)

It’s unlikely AI Overviews will go the way of I’m Feeling Lucky any time soon—the company has spent a lot of money on AI, and “I’m Feeling Lucky” took until 2010 to die. But at least for now, it seems to have about as much prominence on the site as Google’s most forgotten feature. That users aren’t responding to these AI-generated options suggests that you don't really want Google to do the Googling for you.

OpenAI board first learned about ChatGPT from Twitter, according to former member

Helen Toner, former OpenAI board member, speaks during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023. (credit: Getty Images)

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.


OpenAI Forms Another Safety Committee After Dismantling Prior Team – Source: www.darkreading.com


Source: www.darkreading.com – Author: Dark Reading Staff. OpenAI is forming a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee is being formed to make recommendations to the full board on safety […]

The post OpenAI Forms Another Safety Committee After Dismantling Prior Team – Source: www.darkreading.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Anthropic Hires Former OpenAI Safety Lead To Head Up New Team

Jan Leike, one of OpenAI's "superalignment" leaders, who resigned last week due to AI safety concerns, has joined Anthropic to continue the mission. According to Leike, the new team "will work on scalable oversight, weak-to-strong generalization, and automated alignment research." TechCrunch reports: A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic's chief science officer, and that Anthropic researchers currently working on scalable oversight -- techniques to control large-scale AI's behavior in predictable and desirable ways -- will move to report to Leike as Leike's team spins up. In many ways, Leike's team sounds similar in mission to OpenAI's recently-dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI in the next four years, but often found itself hamstrung by OpenAI's leadership. Anthropic has often attempted to position itself as more safety-focused than OpenAI.


Use This App on Mac, iPhone, and iPad for Free AI Transcription

Transcribing isn't fun at all. Good thing it's something AI is actually good at. Aiko is an app for Mac, iPad, and iPhone that uses Whisper—open-source technology created by OpenAI—to transcribe audio files. Aiko does not upload the file to the cloud to make the transcription; everything happens on your device. And it works fairly quickly, too: I was able to transcribe a half-hour radio drama in just a few minutes.

The application works best on devices with Apple Silicon processors (Intel Macs are technically supported but are extremely slow at transcribing); my 2022 iPhone SE was significantly faster than my 2018 Intel MacBook Pro, which took around three minutes to transcribe 10 seconds of talking. If you have the right hardware, though, this application is just about perfect.

A screenshot of Aiko. The interface is clear—it just says "Drop Audio or Video File" and there are two buttons: "Open" and "Record".
Credit: Justin Pot

To get started, you need to either point the application toward a file or start recording what you want to transcribe. You can add any audio or video file to the application, which will immediately get started on creating a transcription for you. The recording feature is mostly there for quick notes—the software advises you to record things using another application first if at all possible. The mobile version can grab audio from the Voice Memos app, which is a nice touch.

Three screenshots from the iPhone version of Aiko. The left shows a quick transcription; the center, the recording feature, which isn't much more than a microphone icon; the right, a transcription of the first episode of the classic Douglas Adams radio play "The Hitchhiker's Guide to the Galaxy" (the book was based on the play).
Credit: Justin Pot

The application will show you the text as the transcription happens, meaning you can start reading before the complete transcription is done. The application automatically detects the language being spoken, though you can set a different language in the settings if you prefer. You can even set the application to automatically translate non-English conversation into English, if you want.
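Aiko's own source isn't public, but the behavior described here (on-device processing, automatic language detection, optional translation into English) lines up with what the open-source Whisper package exposes. A minimal sketch using that package directly, with a hypothetical file name:

```python
# Local transcription with the open-source Whisper package (pip install openai-whisper).
# "interview.m4a" is a hypothetical file; processing happens on-device, nothing is uploaded.
import whisper

model = whisper.load_model("base")             # larger models are slower but more accurate
result = model.transcribe("interview.m4a")     # the spoken language is auto-detected
print(result["language"])                      # e.g. "en"
print(result["text"])                          # the full transcript

# Optionally translate non-English speech into English instead of transcribing verbatim.
english = model.transcribe("interview.m4a", task="translate")
print(english["text"])
```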

It's not a perfect application—there's no way to indicate who is speaking when in the text, for example. It works quickly, though, and is completely free, so it's hard to complain too much. This is going to be a go-to tool for me from now on.

Nvidia denies pirate e-book sites are “shadow libraries” to shut down lawsuit


Some of the most infamous so-called shadow libraries have increasingly faced legal pressure to either stop pirating books or risk being shut down or driven to the dark web. Among the biggest targets are Z-Library, which the US Department of Justice has charged with criminal copyright infringement, and Library Genesis (Libgen), which was sued by textbook publishers last fall for allegedly distributing digital copies of copyrighted works "on a massive scale in willful violation" of copyright laws.

But now these shadow libraries and others accused of spurning copyrights have seemingly found an unlikely defender in Nvidia, the AI chipmaker among those profiting most from the recent AI boom.

Nvidia seemed to defend the shadow libraries as a valid source of information online when responding to a lawsuit from book authors over the list of data repositories that were scraped to create the Books3 dataset used to train Nvidia's AI platform NeMo.


Klarna Using GenAI To Cut Marketing Costs By $10 Million Annually

Fintech firm Klarna, one of the early adopters of generative AI, said on Tuesday it is using AI for purposes such as running marketing campaigns and generating images, saving about $10 million in costs annually. From a report: The company cut its sales and marketing budget by 11% in the first quarter, with AI responsible for 37% of the cost savings, while increasing the number of campaigns, the company said. Using GenAI tools like Midjourney, DALL-E, and Firefly for image generation, Klarna said it has reduced image production costs by $6 million.


OpenAI Says It Has Begun Training a New Flagship AI Model

OpenAI said on Tuesday that it has begun training a new flagship AI model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT. From a report: The San Francisco start-up, which is one of the world's leading A.I. companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it strives to build "artificial general intelligence," or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple's Siri, search engines and image generators. OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company said. OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.


Argentinian president to meet Silicon Valley CEOs in bid to court tech titans

Javier Milei to hold private talks with Sundar Pichai and Sam Altman as Argentina faces worst economic crisis in decades

Javier Milei, Argentina’s president, is set to meet with the leaders of some of the world’s largest tech companies in Silicon Valley this week. The far-right libertarian leader will hold private talks with Sundar Pichai of Google, Sam Altman of OpenAI, Mark Zuckerberg of Meta and Tim Cook of Apple.

Milei also met last month with Elon Musk, who has become one of the South American president’s most prominent cheerleaders and repeatedly shared his pro-deregulation, anti-social justice message on Twitter. Peter Thiel, the tech billionaire, has also twice visited Milei, flying down to Buenos Aires to speak with him in February and May of this year.


© Photograph: Leandro Bustamante Gomez/Reuters

OpenAI Announces Safety and Security Committee Amid New AI Model Development

OpenAI Announces Safety and Security Committee

OpenAI announced a new safety and security committee as it begins training a new AI model intended to replace the GPT-4 system that currently powers its ChatGPT chatbot. The San Francisco-based startup announced the formation of the committee in a blog post on Tuesday, highlighting its role in advising the board on crucial safety and security decisions related to OpenAI’s projects and operations. The creation of the committee comes amid ongoing debates about AI safety at OpenAI. The company faced scrutiny after Jan Leike, a researcher, resigned, criticizing OpenAI for prioritizing product development over safety. Following this, co-founder and chief scientist Ilya Sutskever also resigned, leading to the disbandment of the "superalignment" team that he and Leike co-led, which was focused on addressing AI risks. Despite these controversies, OpenAI emphasized that its AI models are industry leaders in both capability and safety. The company expressed openness to robust debate during this critical period.

OpenAI's Safety and Security Committee Composition and Responsibilities

The safety committee comprises company insiders, including OpenAI CEO Sam Altman, Chairman Bret Taylor, and four OpenAI technical and policy experts. It also features board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, a former general counsel for Sony.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days." 
The committee's initial task is to evaluate and further develop OpenAI’s existing processes and safeguards. They are expected to make recommendations to the board within 90 days. OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations. The establishment of the safety and security committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.

Development of the New AI Model

OpenAI also announced that it has recently started training a new AI model, described as a "frontier model." These frontier models represent the most advanced AI systems, capable of generating text, images, video, and human-like conversations based on extensive datasets. The company also recently launched its newest flagship model, GPT-4o ('o' stands for omni), a multilingual, multimodal generative pre-trained transformer. It was announced by OpenAI CTO Mira Murati during a live-streamed demo on May 13 and released the same day. GPT-4o is free, but with a usage limit that is five times higher for ChatGPT Plus subscribers. GPT-4o has a context window supporting up to 128,000 tokens, which helps it maintain coherence over longer conversations or documents, making it suitable for detailed analysis.

Google’s Chromebook Plus Wants to Be a Cheaper AI Laptop

Hot on the heels of Microsoft’s Copilot+ PC announcements last week, Google is refreshing Chromebooks with new AI features to match. These include the ability to summon Gemini with a right click, generate AI backgrounds for video calls, and use the same Magic Editor as on Pixel phones.

There’s new non-AI features as well, like a GIF recorder and a new Game Dashboard. These are available on standard Chromebooks, while most of the new AI features will be limited to Chromebook Plus models. 

Taken together, all of these new features see Google fulfilling some of the promises it made alongside its first Chromebook Plus rollout in October of last year. But Google still seems to be deferring some rollouts to later in the year, as the company only previewed a selection of its more exciting AI developments—among them, a Microsoft Recall-like “Where Was I” screen that pops up every time you open your Chromebook.

There isn’t any brand new chip technology here, like there is with Copilot+ laptops or M-series MacBooks. But since competing devices can cost well above $1,000, Google’s promise to sell Chromebook Plus laptops starting at $349 provides a great look at what a low-cost AI computer might look like in 2024, and at whether that idea lives up to the hype.

What is a Chromebook Plus?

A photo of a Chromebook Plus laptop
Credit: Michelle Ehrhardt

In October, Google announced a new certification program for Chromebooks called Chromebook Plus. While Google doesn’t make its own Chromebook hardware, Chromebook Plus guarantees a minimum spec loadout, and comes with some handy extra features.

For a device to be considered a Chromebook Plus, it must have at least an Intel Core i3 12th Generation or AMD Ryzen 3 5000 CPU, 8GB of RAM or more, 128GB of storage or more, a 1080p IPS display or above, and a 1080p or above webcam with temporal noise reduction (which makes videos appear clearer).

This guarantees a certain level of performance, which Google says enables it to turn on features like Magic Eraser, which debuted on Pixel phones. Chromebook Plus users can also blur their backgrounds in video calls or use audio noise cancellation on an OS-level, allowing them to tune up their video even in apps that don’t support it. These were the only AI features on Chromebook Plus devices at launch, which left a lot of promises to fulfill.

The minimum requirements for Chromebook Plus devices haven’t changed, which means today’s update is mostly a feature drop. But there are also several new or updated devices on the way, including convertibles (laptops that become tablets). Some of these go above and beyond Google’s minimums, but perhaps the biggest news here is that the cheapest option is now $349, which drops the starting price for Chromebook Plus devices down from $399.

I’ll be focusing on ChromeOS updates for most of this article, but all of my testing was done on the new HP Chromebook Plus x360, a $429 convertible laptop with 8GB of RAM, 128GB of storage, an Intel Core i3 processor, and a 14-inch 1080p touchscreen.

Gemini on Chromebook Plus

A screenshot of a Chromebook Plus desktop
Credit: Michelle Ehrhardt

The most prominent addition to Chromebook Plus is Gemini integration, both in the app shelf (Google’s name for the taskbar) and when you right click. Unfortunately, like with Gemini on the Pixel 8a, it’s somewhat of a parlor trick. Clicking the Gemini icon in the app shelf simply opens a Chrome tab for Gemini’s web app, and won’t work without an internet connection. Once in the web app, Gemini will function as usual, meaning it won’t be able to help you adjust your Chromebook’s settings, like Microsoft Copilot can with Windows.

To help alleviate any disappointment, and probably to sell future subscriptions, Google is giving all new Chromebook Plus owners a year of the Google One AI Premium plan free with their purchases, meaning they’ll be able to use Gemini Advanced to access the chatbot’s latest large language models.

There is one substantial feature here that genuinely changes how you use Gemini, but it’s pretty limited for now. “Help me write” allows users to select text, right-click it, and choose to have Gemini shorten, elaborate on, insert emojis into, or rewrite it using a specific prompt. It’s nothing the chatbot couldn’t do before, but the convenience of putting these options on a right-click makes it feel like the next evolution of copy-paste. The catch is that it only works on social media sites for now. While I was able to get writing help on X (formerly Twitter) and LinkedIn, the option wouldn’t show up in Gmail or Google Docs. It’s unclear whether that will change in the future, but Google says that “websites that offer a separate right-click menu” are not compatible with Help me write.

None of the AI here works on-device, so you’ll need to be connected to the internet to try it out.

Magic Editor on Chromebook Plus

A photo of a car before and after being edited with Magic Editor
An unedited photo of a car (left) vs. the same photo after being edited with Magic Editor (right) Credit: Michelle Ehrhardt

Less prominent but more useful than Chromebook Plus’ current Gemini integration is full Magic Editor access, something that Google promised would come when it initially launched the Chromebook Plus program. You’ll actually need to install this to use it, but getting it set up is as simple as opening an image in Google Photos and clicking the glowing Magic Editor button.

Installation doesn’t take long, and the resulting process is about as smooth as on a Pixel phone. You’ll back up your image, then be prompted to tap, brush, or circle the parts of the photo you want to edit. Once selected, you can delete, resize, or move your selected element, and generative AI will fill in any gaps you leave in the process.

Unfortunately, the results are about as good as on Pixel phones, too. Backgrounds are blurry and generated elements might blend together with little rhyme or reason. It’s fun for a gag, or maybe if you really hate an ex and want them out of your selfie, but it’s not going to replace Photoshop anytime soon. And while it’s a unique function that isn’t just a shortcut to the web, it also needs an internet connection to work.

Generative AI wallpaper and video call backgrounds

Chromebook Plus AI background generator
Credit: Michelle Ehrhardt

Another promise Google made upon launching Chromebook Plus was the ability to create custom, AI-generated wallpapers and video call backgrounds. This is finally here, but the implementation is seriously limited compared to expectations.

When I demoed a pre-release version of the feature at a Google event last year, I was able to generate imagery using any prompt I wanted. The results weren’t always beautiful, but the freedom was fun, and gave Google’s generative AI a unique edge over just picking something off Google Images.

Now, users can only make prompts by selecting from a list of pre-approved words. For instance, if you want to make a wallpaper with a fruit theme, you could pick a color, a fruit, and a background color from a list, but you couldn’t ask for a background of “three bananas with googly eyes wearing astronaut helmets.”

The results are now more consistent, but also so constrained and typical that there’s little reason to use these backgrounds over more traditional, handcrafted ones. The reason I even suggested a “fruit theme” above is that more imaginative options are off-limits. If you’re planning to use an AI background, I hope you like landscapes, letters, and foodstuff.

Like Magic Editor and Gemini, you’ll need internet access for this as well.

More Chromebook AI to come

Chromebook Plus Where Was I feature
Credit: Google

Again, Google has big plans for Chromebook Plus down the line. The company says it’s working on a “Help me read” feature that will allow Gemini to summarize text from web pages or PDFs on a right-click, and answer follow-up questions. As with Help me write, this is nothing the chatbot can’t do now, but putting it on a right-click could be a great way to get people to actually use the AI, as it’ll be integrated into their current workflows.

There are also accessibility utilities in the works that could prove to be a genuine game changer for those who need them, and possibly even those who don’t. The idea is to bake Project Gameface, which is currently available on Android, directly into ChromeOS. Chromebook users, whether on a Plus or a standard model, could then control their mouse, keyboard, and other input devices by smiling, blinking, or performing other gestures. It all sounds very cool, but it’s a bit disappointing that we’re this far out from the Chromebook Plus launch and most of the promised AI utilities that are meant to help bridge the gap between a Chromebook and a more traditional laptop are still just novelties.

What might help Google is the eventual launch of “Where Was I,” which sounds like a stripped-down version of Microsoft’s new Recall feature. It’d be great to see this go live now, to compete with Microsoft more directly, because it seems like a genuine compromise between Recall’s promises and its security concerns. Like Recall, Where Was I will remind users what they were up to upon returning to their Chromebook Plus, and even give them buttons to resume certain tasks. Unlike Recall, it won’t take a screenshot every few seconds. Instead, the computer will simply take note of which tabs and programs you had open when it goes to sleep, and can even port over suggestions from connected phones, like articles you might have started reading on mobile.

For some users, this will just be another screen to dismiss before getting started on work, but for others, it will provide some useful shortcuts that, while not as powerful as Recall, pose much less of a security risk.

Google says these updates will roll out “in the coming year,” but dedicated users might eventually be able to test them out early via Chrome flags (I couldn’t access them in my testing period).

Non-AI features

A Google Tasks list being accessed on a Chromebook
Credit: Michelle Ehrhardt

Given the limited nature of what’s going live today and the somewhat shaky reputation Google AI has earned since being widely implemented into search, Chromebook’s non-AI upgrades might be the most exciting announcements to come out of today’s news, even if they’re not front-and-center in Google’s messaging. The best part? They’re on all Chromebooks, not just Chromebook Plus models.

Maybe the most convenient of these is the ability to record a GIF when using the screen capture tool. Simply press the screen capture button (or use the Ctrl + Shift + Show Windows or Ctrl + Shift + F5 shortcuts), click the video icon, then select “Record GIF” from the dropdown menu.

Depending on the file size, the compression might not always be great—I tested the feature out on about 10 seconds of anime footage and got plenty of strange artifacts—but for shorter and more casual social media reactions, it should prove more convenient than capturing a video file and converting it to a GIF.

Also convenient is the new Game Dashboard, which gives users access to typical screenshot functions, but also comes with a key mapper for touch-based Android games. This will make it far easier to play games like Genshin Impact on a Chromebook, since you’ll be able to assign the game’s touch controls to keyboard buttons and mouse inputs. Chromebook Plus users will also be able to capture videos of their gameplay with the included face-cam of themselves, although oddly enough, the only way to disable the face-cam is to turn off webcam input altogether.

In a move towards seamlessness, you’ll also now be able to set up your Chromebook using a QR code and an Android phone, which definitely made the process simpler for me, since my Google password is on the long end. Similarly, you can now access your Google Tasks right from the date display in your Chromebook’s bottom-right corner.

Is Chromebook Plus worth it now?

A photo of a Chromebook Plus at a Google event
Credit: Michelle Ehrhardt

With a price drop and a few extra AI conveniences, Google’s updated Chromebook Plus program does a decent job using the cloud to make up for lower hardware performance. But as a proper AI computer, Chromebook Plus is clearly still developing. The AI features here aren’t anything that you couldn’t get elsewhere, largely for free, so there’s not much incentive to upgrade, especially if you already own a regular Chromebook. In fact, it’s pretty disappointing that Google is locking so many features behind its Chromebook Plus banner. With so much being powered by the cloud, any device with an internet connection could conceivably run them. For the most part, other devices still can; they’ll just need to navigate to the Gemini web page first, instead of having AI on a right click.

That AI-on-a-right-click promise is tantalizing, though, which means Chromebook Plus is worth paying attention to as Google develops its Help me write and Help me read features. If AI is to take off, it needs to work its way into regular consumer habits, and making it readily available when you go to copy and paste is a smart move on Google’s part.

New Chromebook Plus models

Alongside Google's feature announcements, a number of updated Chromebook Plus models are now joining the market, including the following:

  • $699: Acer Chromebook Plus Spin 714, with a 14-inch 1,920 x 1,200 convertible touchscreen, an Intel Core Ultra 5 processor, 8GB RAM, 256GB storage

  • $649: Acer Chromebook Plus 516 GE, with a 16-inch 2,560 x 1,600 120Hz screen, an Intel Core 5 processor, 8GB RAM, 256GB storage

  • $499: Asus Chromebook Plus CX24, with a 14-inch 1,920 x 1,080 screen, a 13th Gen Intel Core i5 processor, 8GB RAM, 128GB storage

  • $429: HP Chromebook Plus x360, with a 14-inch 1,920 x 1,080 convertible touchscreen, a 13th Gen Intel Core i3 processor, 8GB RAM, 128GB storage

  • $350: Acer Chromebook Plus 514, with a 14-inch 1,920 x 1,080 screen, a 13th Gen Intel Core i3 processor, 8GB RAM, 512GB storage

OpenAI training its next major AI model, forms new safety committee

A man rolling a boulder up a hill.

Credit: Getty Images

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to two weeks of public setbacks for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

Read 5 remaining paragraphs | Comments

Elon Musk’s xAI raises $6bn in bid to take on OpenAI

Funding round values artificial intelligence startup at $18bn before investment, says multibillionaire

Elon Musk’s artificial intelligence company xAI has closed a $6bn (£4.7bn) investment round that will make it among the best-funded challengers to OpenAI.

The startup is only a year old, but it has rapidly built its own large language model (LLM), the technology underpinning many of the recent advances in generative artificial intelligence capable of creating human-like text, pictures, video, and voices.

Continue reading...

© Photograph: Anadolu Agency/Anadolu/Getty Images

Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy

Episode 331 of the Shared Security Podcast discusses privacy and security concerns related to two major technological developments: the introduction of ‘Recall,’ a new Windows PC feature that is part of Microsoft’s Copilot+ and captures desktop screenshots for AI-powered search tools, and Slack’s policy of using user data to train machine learning features with users opted in by […]

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Shared Security Podcast.

The post Microsoft’s Copilot+ Recall Feature, Slack’s AI Training Controversy appeared first on Security Boulevard.


Mojo, Bend, and the Rise of AI-First Programming Languages

"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development." Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities... At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation... Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations. According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python. Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful. The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. 
Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion... The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance. The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more. The future of AI development and computing itself is being reshaped by the languages and tools we create today. In 2017 Modular AI's founder Chris Lattner (creator of Swift and LLVM) answered questions from Slashdot readers.
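To make the parallelism point concrete, here is a minimal Python sketch of the kind of data-parallel kernel these languages target. It is not Mojo or Bend code; the worker count, chunking scheme, and array sizes are illustrative assumptions. In today's Python the programmer spells out the parallelism by hand, which is exactly the boilerplate Bend claims to eliminate and Mojo claims to expose with Python-compatible syntax.

```python
# Hand-parallelized SAXPY (a * x + y) in plain Python + NumPy, for contrast with
# languages that promise the same speedup without explicit parallel plumbing.
import numpy as np
from multiprocessing import Pool

def saxpy_chunk(args):
    """Compute a * x + y over one chunk of the arrays."""
    a, x_chunk, y_chunk = args
    return a * x_chunk + y_chunk

if __name__ == "__main__":
    a = 2.0
    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)

    # Explicit parallelism: split the arrays, spawn workers, stitch results back.
    chunks = [
        (a, xc, yc)
        for xc, yc in zip(np.array_split(x, 4), np.array_split(y, 4))
    ]
    with Pool(processes=4) as pool:
        result = np.concatenate(pool.map(saxpy_chunk, chunks))

    # Sanity check against the single-threaded computation.
    assert np.allclose(result, a * x + y)
```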

Read more of this story at Slashdot.

How A US Hospital is Using AI to Analyze X-Rays - With Help From Red Hat

This week Red Hat announced one of America's leading pediatric hospitals is using AI to analyze X-rays, "to improve image quality and the speed and accuracy of image interpretation." Red Hat's CTO said the move exemplifies "the positive impact AI can have in the healthcare field". Before Boston Children's Hospital began piloting AI in radiology, quantitative measurements had to be done manually, which was a time-consuming task. Other, more complex image analyses were performed completely offline and outside of the clinical workflow. In a field where time is of the essence, the hospital is piloting Red Hat OpenShift via the ChRIS Research Integration Service, a web-based medical image platform. The AI application running in ChRIS on the Red Hat OpenShift foundation has the potential to automatically examine x-rays, identify the most valuable diagnostic images among the thousands taken and flag any discrepancies for the radiologist. This decreases the interpretation time for radiologists. But it also seems to be a big win for openness: Innovation developed internally is immediately transferable to public research clouds such as the Massachusetts Open Cloud, where large-scale data sharing and additional innovation can be fostered. Boston Children's Hospital aims to extend the reach of advanced healthcare solutions globally through this approach, amplifying their impact on patient well-being worldwide. "Red Hat believes open unlocks the world's potential," the announcement concludes, "including the potential to share knowledge and build upon each other's discoveries. Additionally, Red Hat believes innovation — including AI — should be available everywhere, making any application, anywhere a reality. "With open source, enabling AI-fueled innovation across hybrid IT environments that can lead to faster clinical breakthroughs and better patient outcomes is a reality."

Read more of this story at Slashdot.

Elon Musk Says AI Could Eliminate Our Need to Work at Jobs

In the future, "Probably none of us will have a job," Elon Musk said Thursday, speaking remotely to the VivaTech 2024 conference in Paris. Instead, jobs will be optional — something we'd do like a hobby — "But otherwise, AI and the robots will provide any goods and services that you want." CNN reports that Musk added this would require "universal high income" — and "There would be no shortage of goods or services." In a job-free future, though, Musk questioned whether people would feel emotionally fulfilled. "The question will really be one of meaning — if the computer and robots can do everything better than you, does your life have meaning?" he said. "I do think there's perhaps still a role for humans in this — in that we may give AI meaning." CNN accompanied their article with this counterargument: In January, researchers at MIT's Computer Science and Artificial Intelligence Lab found workplaces are adopting AI much more slowly than some had expected and feared. The report also said the majority of jobs previously identified as vulnerable to AI were not economically beneficial for employers to automate at that time. Experts also largely believe that many jobs that require a high emotional intelligence and human interaction will not need replacing, such as mental health professionals, creatives and teachers. CNN notes that Musk "also used his stage time to urge parents to limit the amount of social media that children can see because 'they're being programmed by a dopamine-maximizing AI'."

Read more of this story at Slashdot.

Robotaxis Face 'Heightened Scrutiny' While the Industry Plans Expansion

Besides investigations into Cruise and Waymo, America's National Highway Traffic Safety Administration (NHTSA) also announced it's examining two rear-end collisions between motorbikes and Amazon's steering wheel-free Zoox vehicles being tested in San Francisco, Seattle, and Las Vegas. This means all three major self-driving vehicle companies "are facing federal investigations over potential flaws linked to dozens of crashes," notes the Washington Post, calling it "a sign of heightened scrutiny as the fledging industry lays plans to expand nationwide." The industry is poised for growth: About 40 companies have permits to test autonomous vehicles in California alone. The companies have drawn billions of dollars in investment, and supporters say they could revolutionize how Americans travel... Dozens of companies are testing self-driving vehicles in at least 10 states, with some offering services to paying passengers, according to the Autonomous Vehicle Industry Association. The deployments are concentrated in a handful of Western states, especially those with good weather and welcoming governors. According to a Washington Post analysis of California data, the companies in test mode in San Francisco collectively report millions of miles on public roads every year, along with hundreds of mostly minor collisions. An industry association says autonomous vehicles have logged a total of 70 million miles, a figure that it compares with 293 trips to the moon and back. But it's a tiny fraction of the almost 9 billion miles that Americans drive every day. The relatively small number of miles the vehicles have driven makes it difficult to draw broad conclusions about their safety. Key quotes from the article: "Together, the three investigations opened in the past year examine more than two dozen collisions potentially linked to defective technology. The bulk of the incidents were minor and did not result in any injuries..." "But robotic cars are still very much in their infancy, and while the bulk of the collisions flagged by NHTSA are relatively minor, they call into question the companies' boasts of being far safer than human drivers..." "The era of unrealistic expectations and hype is over," said Matthew Wansley, a professor at the Cardozo School of Law in New York who specializes in emerging automotive technologies. "These companies are under a microscope, and they should be. Private companies are doing an experiment on public roads." "Innocent people are on the roadways, and they're not being protected as they need to be," said Cathy Chase, the president of Advocates for Highway and Auto Safety.

Read more of this story at Slashdot.

OpenAI Didn't Copy Scarlett Johansson's Voice for ChatGPT, Records Show

The Atlantic argued this week that OpenAI "just gave away the entire game... The Johansson scandal is merely a reminder of AI's manifest-destiny philosophy: This is happening, whether you like it or not." But the Washington Post reports that OpenAI "didn't copy Scarlett Johansson's voice for ChatGPT, records show." [W]hile many hear an eerie resemblance between [ChatGPT voice] "Sky" and Johansson's "Her" character, an actress was hired in June to create the Sky voice, months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress's agent. The agent, who spoke on the condition of anonymity, citing the safety of her client, said the actress confirmed that neither Johansson nor the movie "Her" were ever mentioned by OpenAI. The actress's natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post... [Joanne Jang, who leads AI model behavior for OpenAI], said she "kept a tight tent" around the AI voices project, making Chief Technology Officer Mira Murati the sole decision-maker to preserve the artistic choices of the director and the casting office. Altman was on his world tour during much of the casting process and not intimately involved, she said.... To Jang, who spent countless hours listening to the actress and keeps in touch with the human actors behind the voices, Sky sounds nothing like Johansson, although the two share a breathiness and huskiness. In a statement from the Sky actress provided by her agent, she wrote that at times the backlash "feels personal being that it's just my natural voice and I've never been compared to her by the people who do know me closely." More from Northeastern University's news service: "The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," Altman said in a statement. "We cast the voice actor behind Sky's voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better..." [Alexandra Roberts, a Northeastern University law and media professor] says she believes things will settle down and Johansson will probably not sue OpenAI since the company is no longer using the "Sky" voice. "If they stopped using it, and they promised her they're not going to use it, then she probably doesn't have a case," she says. "She probably doesn't have anything to sue on anymore, and since it was just a demo, and it wasn't a full release to the general public that offers the full range of services they plan to offer, it would be really hard for her to show any damages." Maybe it's analogous to something Sam Altman said earlier this month on the All-In podcast. "Let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that... I was posing that as a thought experiment to musicians, and they were like, 'Well, I can't object to that on any principle basis at that point — and yet there's still something I don't like about it.'" Altman added "Now, that's not a reason not to do it, um, necessarily, but..." and then talked about Apple's "Crush" ad and the importance of preserving human creativity. 
He concluded by saying that OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."

Read more of this story at Slashdot.

If Scarlett Johansson can’t bring the AI firms to heel, what hope for the rest of us? | John Naughton

OpenAI’s unsubtle approximation of the actor’s voice for its new GPT-4o software was a stark illustration of the firm’s high-handed attitude

On Monday 13 May, OpenAI livestreamed an event to launch a fancy new product – a large language model (LLM) dubbed GPT-4o – that the company’s chief technology officer, Mira Murati, claimed to be more user-friendly and faster than boring ol’ ChatGPT. It was also more versatile, and multimodal, which is tech-speak for being able to interact in voice, text and vision. Key features of the new model, we were told, were that you could interrupt it in mid-sentence, that it had very low latency (delay in responding) and that it was sensitive to the user’s emotions.

Viewers were then treated to the customary toe-curling spectacle of “Mark and Barret”, a brace of tech bros straight out of central casting, interacting with the machine. First off, Mark confessed to being nervous, so the machine helped him to do some breathing exercises to calm his nerves. Then Barret wrote a simple equation on a piece of paper and the machine showed him how to find the value of X, after which he showed it a piece of computer code and the machine was able to deal with that too.

Continue reading...

© Photograph: Valéry Hache/AFP/Getty Images

Did OpenAI Illegally Mimic Scarlett Johansson’s Voice? – Source: www.govinfosecurity.com

Source: www.govinfosecurity.com – Author: Mathew J. Schwartz (euroinfosec) • May 21, 2024. Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development. Actor Said She Firmly Declined Offer From AI Firm to Serve as Voice of GPT-4o. Scarlett Johansson (Image: Gage Skidmore, via Flickr/CC). Imagine these optics: A man asks a […]

The post Did OpenAI Illegally Mimic Scarlett Johansson’s Voice? – Source: www.govinfosecurity.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

You'll be pleasantly surprised by the huge range of options

Wrapped up in the thrill of discovering this new, delightful art and securing versions of it to gaze at while stirring tea in the morning, my dark, skeptical, spidey-senses failed to engage. High on consumer dopamine and browsing picture frames, I forgot, for an important moment, that we recently crossed over into a different sort of world. The sort of world where it is trivial to prompt a neural network to create an image that pulls on the traditional patterns, subject matter, and motifs of William Morris, but layered with the hyper-realistic, high-definition, pixel-perfect aesthetics of the modern web; dramatic lighting and sweeping landscapes ripped from ArtStation, meticulously art-directed details from Wes Anderson film stills, the two-tone color overlays and soft glow effects popularised on Instagram and Pinterest. A system trained on everything we've clicked like on, priming us to like what it makes.
From "Faking William Morris, Generative Forgery, and the Erosion of Art History"

FTC Chair: AI Models Could Violate Antitrust Laws

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists' creations or people's personal information could be in violation of antitrust laws. At The Wall Street Journal's "Future of Everything Festival," Khan said the FTC is examining ways in which major companies' data scraping could hinder competition or potentially violate people's privacy rights. "The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices," Khan said at the event. "So, you can imagine, if somebody's content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition." Khan said concern also lies in companies using people's data without their knowledge or consent, which can also raise legal concerns. "We've also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you're signing up to use them, but then are secretly or quietly using the data you're feeding them -- be it your personal data, be it, if you're a business, your proprietary data, your competitively significant data -- if they're then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns," she said. Khan also recognized people's concerns about companies retroactively changing their terms of service to let them use customers' content, including personal photos or family videos, to feed into their AI models. "I think that's where people feel a sense of violation, that that's not really what they signed up for and oftentimes, they feel that they don't have recourse," Khan said. "Some of these services are essential for navigating day to day life," she continued, "and so, if the choice -- 'choice' -- you're being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that's a really tough spot to put people in." Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, "I think in Washington, there's increasingly a recognition that we can't, as a government, just be totally hands off and stand out of the way." You can watch the interview with Khan here.

Read more of this story at Slashdot.

OpenAI Releases Former Employees From Controversial Exit Agreements

OpenAI has reversed its decision requiring former employees to sign a perpetual non-disparagement agreement to retain their vested equity, stating that they will not cancel any vested units and will remove non-disparagement clauses from departure documents. CNBC reports: The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones. The memo, addressed to each former employee, said that at the time of the person's departure from OpenAI, "you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity]." "Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units," stated the memo, which was viewed by CNBC. The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed. "As we shared with employees, we are making important updates to our departure process," an OpenAI spokesperson told CNBC in a statement. "We have not and never will take away vested equity, even when people didn't sign the departure documents. We'll remove non-disparagement clauses from our standard departure paperwork, and we'll release former employees from existing non-disparagement obligations unless the non-disparagement provision was mutual," said the statement, adding that former employees would be informed of this as well. "We're incredibly sorry that we're only changing this language now; it doesn't reflect our values or the company we want to be," the OpenAI spokesperson added.

Read more of this story at Slashdot.

Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com


Source: www.securityweek.com – Author: Associated Press. The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide. Only one of seven bills aimed at preventing AI’s penchant to discriminate when making […]

The post Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses – Source: www.securityweek.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.
