Today — 1 June 2024
Main stream

Modi’s alliance to win easily in India election, exit polls project

1 June 2024 at 13:41

Prime minister claims victory but opposition dismisses poll results as fixed and unscientific

Indian prime minister Narendra Modi’s Bharatiya Janata party (BJP)-led alliance is projected to win a big majority in the general election that concluded on Saturday, TV exit polls said, suggesting it would do better than most analysts had expected.

Most exit polls projected the ruling National Democratic Alliance (NDA) could win a two-thirds majority in the 543-member lower house of parliament, where 272 seats are needed for a simple majority. A two-thirds majority would allow the government to usher in far-reaching amendments to the constitution.

© Photograph: Debajyoti Chakraborty/NurPhoto/REX/Shutterstock

Meet Suni Williams and Butch Wilmore, the NASA Astronauts Riding on Boeing’s Starliner

1 June 2024 at 11:48
After a May 6 liftoff was scrubbed, the astronauts returned to their home base in Houston and continued their preparations for Saturday’s flight.

© Cristobal Herrera-Ulashkevich/EPA, via Shutterstock

The astronauts Butch Wilmore and Suni Williams on their way to the Starliner spacecraft on May 6, before the launch was called off.

Apple's AI Plans Include 'Black Box' For Cloud Data

1 June 2024 at 13:34
How will Apple protect user data while their requests are being processed by AI in applications like Siri? Long-time Slashdot reader AmiMoJo shared this report from Apple Insider: According to sources of The Information [four different former Apple employees who worked on the project], Apple intends to process data from AI applications inside a virtual black box. The concept, known internally as "Apple Chips in Data Centers," would involve only Apple's hardware being used to perform AI processing in the cloud. The idea is that Apple will control both the hardware and software on its servers, enabling it to design more secure systems. While on-device AI processing is highly private, the initiative could make cloud processing for Apple customers similarly secure... By taking control of how data is processed in the cloud, Apple could more easily implement processes that make a breach far harder to pull off. Furthermore, the black box approach would also prevent Apple itself from being able to see the data. As a byproduct, this means it would also be difficult for Apple to hand over any personal data in response to government or law enforcement data requests. Processed data from the servers would be stored in Apple's "Secure Enclave" (where the iPhone stores biometric data, encryption keys and passwords), according to the article. "Doing so means the data can't be seen by other elements of the system, nor Apple itself."

Read more of this story at Slashdot.

‘She just says blah blah’: why Italy’s downtrodden believe Meloni is doing nothing for them

The PM is talking up her underdog credentials ahead of this week’s European elections. But many in an impoverished Rome neighbourhood are sceptical

Sitting in the dark, cramped dining room of her home in Tor Bella Monaca, a densely populated council estate on the outskirts of Rome, Giovanna has just returned from one of several cleaning jobs the 70-year-old does to keep her family afloat. Her husband works on construction sites intermittently. The couple, whose youngest son, Cristian, 26, lives at home, might be depicted as borgatara, a slur in Roman dialect that, loosely translated, means a poor person living on the socially deprived fringes of the Italian capital.

Referring to her own upbringing in Garbatella, a traditionally working-class district within easy reach of Rome’s famed monuments, the Italian prime minister Giorgia Meloni said earlier this month she was “a proud borgatara”.

© Photograph: Tiziana Fabi/AFP, Getty Images

Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic

By: BeauHD
1 June 2024 at 06:00
Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work." The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an expose about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit." Journalists also reacted to news of the deals through the publications themselves.
On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI. Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled, "This article is OpenAI training data," in which he expresses apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and the classic AI thought experiment of Bostrom's "paperclip maximizer," cautioning that the single-minded focus on market share and profits could ultimately destroy the ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.


Google Rolls Back A.I. Search Feature After Flubs and Flaws

1 June 2024 at 05:04
Google appears to have turned off its new A.I. Overviews for a number of searches as it works to minimize errors.

© Jeff Chiu/Associated Press

Sundar Pichai, Google’s chief executive, introduced A.I. Overviews, an A.I. feature in its search engine, last month.

The Cassandra of American intelligence

By: chavenet
1 June 2024 at 03:53
Intelligence analysis is a notoriously difficult craft. Practitioners have to make predictions and assessments with limited information, under huge time pressure, on issues where the stakes involve millions of lives and the fates of nations. If this small bureau tucked in the State Department's Foggy Bottom headquarters has figured out some tricks for doing it better, those insights may not just matter for intelligence, but for any job that requires making hard decisions under uncertainty. from The obscure federal intelligence bureau that got Vietnam, Iraq, and Ukraine right [Vox]

‘Most eligible bachelor’ Duke of Westminster to marry – but all eyes are on William and Harry

1 June 2024 at 03:00

Wedding of Hugh Grosvenor, godfather to the princes’ sons, is ‘society wedding of the year’. Yet why will Harry not attend?

When Hugh Grosvenor, the seventh Duke of Westminster, marries at Chester Cathedral next week, the 33-year-old will relinquish the status, bestowed on him by society bibles, of Britain’s “richest, most eligible bachelor”.

It is not just his £10bn inherited wealth and pole position in the Sunday Times list of 40 richest people under 40 in the UK that means his marriage to Olivia Henson, 31, is being billed as the society wedding of the year.

© Photograph: Grosvenor2023/PA

Tin Oo, NLD founder with Aung San Suu Kyi, dies aged 97 in Myanmar

1 June 2024 at 01:38

Former armed forces chief was imprisoned after failed revolt against junta and later campaigned with Nobel laureate under National League for Democracy banner

Tin Oo, one of the closest associates of Myanmar’s ousted leader Aung San Suu Kyi, and co-founder with her of the National League for Democracy, has died at the age of 97.

Tin Oo died on Saturday morning at Yangon general hospital, said Moh Khan, a charity worker citing a family member. Charity workers in Myanmar handle funeral arrangements.

© Photograph: Stuart Isett/AP

Yesterday — 31 May 2024
Main stream

Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic

31 May 2024 at 17:56
A man covered in newspaper. (credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers—and the unions that represent them—were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."

"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."


At the whim of 'brain one'

By: chavenet
31 May 2024 at 15:33
given the current discussions around ai and its impact on artistry and authorship, creating a film reliant on the technology is a controversial but inevitable move. however, the software that hustwit and dawes have built may just hit the sweet spot where human meets machine; where the algorithm works to respect the material and facilitate an artistic vision. from B–1 and the first generative feature film.

'eno' is the first documentary about the pioneering artist brian eno, and the first generative feature film. the narrative is structured at the whim of 'brain one', the proprietary generative software created by hustwit and digital artist, brendan dawes. using an algorithm trained on footage from eno's extensive archive and hustwit's interviews with eno, it pieces together a film that is unique at each viewing. as the order of scenes perpetually changes and what's included is never certain, the version you see is the only time that iteration will exist. "in some ways, the film is kind of like exploring the insides of his brain... it's different memories and ideas and experiences over the 50-year plus time frame." ENO Teaser: Australian Premiere of Brian Eno Film @ Vivid Sydney Opera House Sundance 2024: Generative AI Changes Brian Eno Documentary With Every View [Forbes] 'Eno' Review: A Compelling Portrait of Music Visionary Brian Eno Is Different Each Time You Watch It [Variety] 17-track Brian Eno compilation to accompany new doc [Uncut]

Google’s AI Overview is flawed by design, and a new company blog post hints at why

31 May 2024 at 15:47
The Google "G" logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if it doesn't realize it is admitting it.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.


Google Finally Explained What Went Wrong With AI Overviews

31 May 2024 at 15:30

Google is finally explaining what the heck happened with its AI Overviews.

For those who aren’t caught up, AI Overviews were introduced to Google’s search engine on May 14, taking the beta Search Generative Experience and making it live for everyone in the U.S. The feature was supposed to give an AI-powered answer at the top of almost every search, but it wasn’t long before it started suggesting that people put glue in their pizzas or follow potentially fatal health advice. While they’re technically still active, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an answer from Google’s robots.

In a blog post yesterday, Google Search VP Liz Reid clarified that while the feature underwent testing, “there’s nothing quite like having millions of people using the feature with many novel searches.” The company acknowledged that AI Overviews hasn’t had the most stellar reputation (the blog is titled “About last week”), but it also said it discovered where the breakdowns happened and is working to fix them.

“AI Overviews work very differently than chatbots and other LLM products,” Reid said. “They’re not simply generating an output based on training data,” but instead running “traditional ‘search’ tasks” and providing information from “top web results.” Therefore, she doesn’t connect errors to hallucinations so much as the model misreading what’s already on the web.

“We saw AI Overviews that featured sarcastic or troll-y content from discussion forums,” she continued. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice.” In other words, because the robot can’t distinguish between sarcasm and actual help, it can sometimes present the former as the latter.

Similarly, when there are “data voids” on certain topics, meaning not a lot has been written seriously about them, Reid said Overviews was accidentally pulling from satirical sources instead of legitimate ones. To combat these errors, the company has now supposedly made improvements to AI Overviews, saying:

  • We built better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.

  • We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.

  • We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.

  • For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.

All these changes mean AI Overviews probably aren’t going anywhere soon, even as people keep finding new ways to remove Google AI from search. Despite social media buzz, the company said “user feedback shows that with AI Overviews, people have higher satisfaction with their search results,” going on to talk about how dedicated Google is to “strengthening [its] protections, including for edge cases."

That said, it looks like there’s still some disconnect between Google and users. Elsewhere in its post, Google called out users for “nonsensical new searches, seemingly aimed at producing erroneous results.”

Specifically, the company questioned why someone would search for “How many rocks should I eat?” The idea was to break down where data voids might pop up, and while Google said these questions “highlighted some specific areas that we needed to improve,” the implication seems to be that problems mostly appear when people go looking for them.

Similarly, Google denied responsibility for several AI Overview answers, saying that “dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression” were faked.

There’s certainly a tone of defensiveness to the post, even as Google spends billions on AI engineers who are presumably paid to find these kinds of mistakes before they go live. Google says AI Overviews only “misinterpret language” in “a small number of cases,” but we do feel bad for anyone sincerely trying to up their workout routine who might have followed its "squat plug" advice.

Apple's AI-Powered Siri Could Make Other AI Devices (Even More) Useless

31 May 2024 at 13:00

Thus far, AI devices like the Rabbit R1 and the Humane Ai pin have been all hype, no substance. The gadgets largely failed on their promises as true AI companions, but even if they didn't suffer consistent glitches from a rushed-to-market strategy, they still have a fundamental flaw: Why do I need a separate device for AI when I can do basically everything advertised with a smartphone?

It's a tough sell, and it's made me quite skeptical of AI hardware taking off in any meaningful way. I imagine anyone interested in AI is more likely to download the ChatGPT app and ask it about the world around them rather than drop hundreds of dollars on a standalone device. If you have an iPhone, however, you may soon be forgetting about an AI app altogether.

Siri might be the AI assistant we've been promised

Although Apple has been totally late to the AI party, it might be working on something that actually succeeds where Rabbit and Humane failed. According to Bloomberg's Mark Gurman, Apple is planning a big overhaul of Siri for a later version of iOS 18. While rumors previously suggested Apple was working on making interactions with Siri more natural, the latest leaks suggest the company is giving Siri the power to control "hundreds" of features within Apple apps: you say what you want the assistant to do (e.g. crop this photo) and it will. If true, it's a huge leap from using Siri to set alarms and check the weather.

Gurman says Apple had to essentially rewire Siri for this feature, integrating the assistant with LLMs for all its AI processing. He says Apple is planning on making Siri a major showcase at WWDC, demoing how the new AI assistant can open documents, move notes to specific folders, manage your email, and create a summary for an article you're reading. At this point, AI Siri reportedly handles one command at a time, but Apple wants to roll out an update that lets you stack commands as well. Theoretically, you could eventually ask Siri to perform multiple functions across apps. Apple also plans to start with its own apps, so Siri wouldn't be able to interact this way within Instagram or YouTube—at least not yet.

It also won't be ready for some time: Although iOS 18 is likely to drop in the fall, Gurman thinks AI Siri won't be here until at least next year. Beyond that, we don't know much about this change at this time. But the idea that you can ask Siri to do anything on your smartphone is intriguing: In Messages, you could say "Hey Siri, react with a heart on David's last message." In Notes, you could say "Hey Siri, invite Sarah and Michael to collaborate on this note." If Apple has found a way to make virtually every feature in iOS Siri-friendly, that could be a game changer.

In fact, it could turn Siri (and, to a greater extent, your iPhone) into the AI assistant companies are struggling to sell the public on. Imagine a future when you can point your iPhone at a subject and ask Siri to tell you more about it. Then, maybe you ask Siri to take a photo of the subject, crop it, and email it to a friend, complete with the summary you just learned about. Maybe you're scrolling through a complex article, and you ask Siri to summarize it for you. In this ideal version of AI Siri, you don't need a Rabbit R1 or a Humane Ai Pin: You just need Apple's latest and greatest iPhone. Not only will Siri do everything these AI devices say they can, it'll also do everything else you normally do on your iPhone. Win-win.

The iPhone is the other side of the coin, though: These features are power intensive, so Apple is rumored to be figuring out which features can run on-device and which need to run in the cloud. The more features Apple outsources to the cloud, the greater the security risk, although some rumors say the company is working on making even its cloud-based AI features secure. But Apple will likely keep AI-powered Siri features running on-device, which means you might need at least an iPhone 15 Pro to run it.

The truth is, we won't know exactly what AI features Apple is cooking up until they hit the stage in June. If Gurman's sources are to be believed, however, Apple's delayed AI strategy might just work out in its favor.

Russia and China are using OpenAI tools to spread disinformation

31 May 2024 at 09:47
OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." (credit: FT montage/NurPhoto via Getty Images)

OpenAI has revealed operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.


Driverless racing is real, terrible, and strangely exciting

31 May 2024 at 07:00
No one's entirely sure if driverless racing will be any good to watch, but before we find that out, people have to actually develop driverless race cars. A2RL in Abu Dhabi is the latest step down that path. (credit: A2RL)

ABU DHABI—We live in a weird time for autonomous vehicles. Ambitions come and go, but genuinely autonomous cars are further off than solid-state vehicle batteries. Part of the problem with developing autonomous cars is that teaching road cars to take risks is unacceptable.

A race track, though, is a decent place to potentially crash a car. You can take risks there, with every brutal crunch becoming a learning exercise. (You’d be hard-pressed to find a top racing driver without a few wrecks smoldering in their junior career records.)

That's why 10,000 people descended on the Yas Marina race track in Abu Dhabi to watch the first four-car driverless race.


Mushroom-growing boom could cause biodiversity crisis, warn UK experts

RHS fears non-native fungi could alter microbiology of soil when grown in gardens or disposed of in compost heaps

A boom in the popularity of mushroom-growing at home could lead to a biodiversity disaster, UK garden experts have warned.

There has been a rise in the number of people growing mushrooms in their gardens, and this year, the RHS Chelsea flower show’s plant of the year award included a mushroom – the tarragon oyster mushroom, thought to be found only in the British Isles – in its shortlist for the first time, despite it being a fungus, not a plant.

© Photograph: Jennifer Gauld/Getty Images/iStockphoto

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

31 May 2024 at 06:10

Altman spent part of his virtual appearance fending off thorny questions about governance, an AI voice controversy and criticism from ousted board members.

The post OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit appeared first on SecurityWeek.

Tomorrow I will cast my vote in India’s elections. Democracy itself is at stake | Amit Chaudhuri

31 May 2024 at 05:51

There have been highs and lows during the country’s long voting period. Now pessimism is setting in

Tomorrow I will vote. I’ll probably walk to the same mid-20th century bungalow that I walked to five years ago – it was once a primary school my daughter went to – where you vote in a room on the margins of the open space that was a playground. It is a site in which the ballot is cast (or the button pressed) in the upper middle-class neighbourhood of Ballygunge in Kolkata. It has a historic serenity – even an optimism, given its immediate pedagogical past – that may not be typical at all of the circumstances of voting in India.

The general elections are, however, always largely orderly (“largely” being a crucial qualifier). This one hasn’t been very different in that regard. After I vote, I expect to receive an indelible ink mark that will stretch vertically on my forefinger from the cuticle to the skin below it; somehow, like a memory that was once all-important, it will fade after a few days. There have been stories about people who have managed to get the mark off; one of them, who voted eight times for the Bharatiya Janata party, was arrested.

© Photograph: Dibyangshu Sarkar/AFP/Getty Images

Before yesterday
Main stream

OpenAI says Russian and Israeli groups used its tools to spread disinformation

30 May 2024 at 18:25

Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences

OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.

© Photograph: Dado Ruvić/Reuters

You Can Now Talk to Copilot In Telegram

30 May 2024 at 18:00

Generative AI applications like ChatGPT, Gemini, and Copilot are known as chatbots, since you're meant to talk to them. So, I guess it's only natural that chat apps would want to add chatbots to their platforms—whether or not users actually, you know, use them.

Telegram is the latest such app to add a chatbot to its array of features. Its chatbot of choice? Copilot. While Copilot has landed on other Microsoft-owned platforms before, Telegram is among the first third-party apps to offer Copilot functionality directly, although it certainly isn't obvious if you open the app today.

When I first learned about Telegram's Copilot integration, I fired up the app and was met with a whole lot of nothing. That isn't totally unusual for new features, as they usually roll out gradually to users over time. However, as it turns out, accessing Copilot in Telegram is a little convoluted. You actually need to search for Copilot by its Telegram username, @CopilotOfficialBot. Don't just search for "Copilot," as you'll find an assortment of unauthorized options. I don't advise chatting with any random bot you find on Telegram, certainly not any masquerading as the real deal.

You can also access it from Microsoft's "Copilot for Telegram" site. You'll want to open the link on the device you use Telegram on, as when you select "Try now," it'll redirect to Telegram.

Whichever way you pull up the Copilot bot, you'll end up in a new chat with Copilot. A splash screen informs you that Copilot in Telegram is in beta, and invites you to hit "Start" to use the bot. Once you do, you're warned about the risks of using AI. (Hallucinations happen all the time, after all.) In order to proceed, hit "I Accept." You can start sending messages without accepting, but the bot will just respond with the original prompt to accept, so if you want to get anywhere you will need to agree to the terms.

Copilot in Telegram. Credit: Lifehacker

From here, you'll need to verify the phone number you use with Telegram. Hit "Send my mobile number," then hit "OK" on the pop-up to share your phone number with Copilot. You don't need to wait for a verification text: Once you share your number, you're good to go.

From here, it's Copilot, but in Telegram. You can ask the bot questions on a wide variety of subjects and tasks, and it will respond in kind. This version of the bot is connected to the internet, so it can look up real-time information for you, but you can't use Copilot's image generator here. If you try, the bot will redirect you to the main Copilot site, the iOS app, or the Android app.

There isn't much here that's particularly Telegram-related, other than a function that will share an invite with your friends to try Copilot. You also only get 30 "turns" per day, so keep that in mind before you get too carried away with chatting.

At the end of the day, this seems to be a play by Microsoft to get Copilot in the hands of more users. Maybe you wouldn't download the Copilot app yourself, but if you're an avid Telegram user, you may be curious enough to try using the bot in between conversations. I suspect this won't be the last Copilot integration we see from Microsoft, as the company continues to expand its AI strategy.

Report: Apple and OpenAI have signed a deal to partner on AI

30 May 2024 at 17:39
OpenAI CEO Sam Altman. (credit: JASON REDMOND / Contributor | AFP)

Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.

It was previously reported by Bloomberg that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.

"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAI’s conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.

Read 7 remaining paragraphs | Comments

Tech giants form AI group to counter Nvidia with new interconnect standard

30 May 2024 at 16:42
Abstract image of data center with flowchart. (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
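To ground the term, the workload these accelerators (and the links between them) exist to speed up is ordinary matrix multiplication. A toy single-layer sketch in NumPy, purely illustrative—real training runs these operations at far larger sizes, split across many chips:

```python
import numpy as np

# Toy illustration of the core accelerator workload: one dense
# neural-network layer is a matrix multiplication plus a bias,
# followed by a nonlinearity (here, ReLU).
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))    # a batch of 32 inputs, 512 features each
W = rng.standard_normal((512, 1024))  # layer weights
b = np.zeros(1024)                    # layer bias

activations = np.maximum(x @ W + b, 0.0)  # matmul, add bias, clamp at zero
print(activations.shape)  # (32, 1024)
```

Interconnects like NVLink (and, prospectively, UALink) matter because models too large for one chip split multiplications like this across many accelerators, which then have to exchange partial results quickly.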

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.


disquieting images that just feel 'off'

By: Rhaomi
30 May 2024 at 16:30
If you're not careful and you noclip out of reality in the wrong areas, you'll end up in the Backrooms, where it's nothing but the stink of old moist carpet, the madness of mono-yellow, the endless background noise of fluorescent lights at maximum hum-buzz, and approximately six hundred million square miles of randomly segmented empty rooms to be trapped in. God save you if you hear something wandering around nearby, because it sure as hell has heard you.
So stated an anonymous 2019 thread on 4chan's /x/ imageboard -- a potent encapsulation of liminal-space horror that gave rise to a complex mythos, exploratory video games, and an acclaimed web series (previously; soon to become a major motion picture from A24!). In the five years since, the evolving "Backrooms" fandom has canonized a number of other dreamlike settings, from CGI creations like The Poolrooms and a darkened suburb with wrong stars to real places like the interior atrium of Heathrow's Terminal 4 Holiday Inn and a shuttered Borders bookstore. But the image that inspired the founding text -- an anonymous photo of a vaguely unnerving yellow room -- remained a mystery... until now.

...turns out it's from a 2003 blog post about renovating for an RC car race track in Oshkosh! Not quite as fun a reveal as for certain other longstanding internet mysteries, but still satisfying, especially since it includes another equally-unsettling photo (and serendipitously refers to a "back room"). Also, due credit to Black August, the SomethingAwful goon who quietly claims to have written the original Backrooms text. Liminal spaces previously on MeFi:
Discussing the Kane Pixels production (plus an inspired-by series, A-Sync Research). Note that as the Backrooms movie takes shape, Kane is continuing work on an intriguing spiritual successor: The Oldest View
The Eerie Comfort of Liminal Spaces
A Twitter thread on being lost in a real-life Backrooms space
Inside the world's largest underground shopping complex
A 2010 post about Hondo, an enigmatic Half-Life map designer who incorporated "enormous hidden areas that in some cases dwarfed the actual level"
MyHouse.WAD, a sprawling, reality-warping Doom mod that went viral last year
AskMe: Seeking fiction books with labyrinths and other interminable buildings
My personal favorite liminal space: the unnervingly cheerful indoor playground KidsFun from '90s-era Tampa -- if only because I've actually been there as a kid (and talked about its eeriness on the blue before). Do you have any liminal spaces that have left an impression on you?

Law enforcement operation takes aim at an often-overlooked cybercrime linchpin

30 May 2024 at 15:41
Law enforcement operation takes aim at an often-overlooked cybercrime linchpin

Enlarge (credit: Getty Images)

An international cast of law enforcement agencies has struck a blow at a cybercrime linchpin that’s as obscure as it is instrumental in the mass-infection of devices: so-called droppers, the sneaky software that’s used to install ransomware, spyware, and all manner of other malware.

Europol said Wednesday it made four arrests, took down 100 servers, and seized 2,000 domain names that were facilitating six of the best-known droppers. Officials also added eight fugitives linked to the enterprises to Europe’s Most Wanted list. The droppers named by Europol are IcedID, SystemBC, Pikabot, Smokeloader, Bumblebee, and Trickbot.

Droppers provide two specialized functions. First, they use encryption, code-obfuscation, and similar techniques to cloak malicious code inside a packer or other form of container. These containers are then put into email attachments, malicious websites, or alongside legitimate software available through malicious web ads. Second, the malware droppers serve as specialized botnets that facilitate the installation of additional malware.


OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity'

By: BeauHD
30 May 2024 at 17:25
An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The AI firm said the threat actors used its models over the last three months to generate short comments and longer articles in a range of languages, and to make up names and bios for social media accounts. These campaigns, which included threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others. The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns have not benefited from increased audience engagement or reach due to the AI firm's services, OpenAI said in the statement. OpenAI said these operations did not solely use AI-generated material but also included manually written texts and memes copied from across the internet. In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.

Read more of this story at Slashdot.

The Guardian view on India’s election: Narendra Modi’s audacity of hate | Editorial

By: Editorial
30 May 2024 at 13:52

India’s prime minister encourages a belief in his divinity, leading followers to think it is God’s purpose to spread fear and loathing

“No party or candidate shall indulge in any activity which may aggravate existing differences or create mutual hatred or cause tension between different castes and communities, religious or linguistic.” So reads the rulebook for Indian elections. Has anyone told Narendra Modi? India’s prime minister has resorted to overtly Islamophobic language during the two-month campaign, painting India’s 200 million Muslims as an existential threat to the Hindu majority. Laughably, the body charged with conducting free and fair polls did issue a feeble call for restraint from “star campaigners”. With the Indian election results out next week, one commentator warned Mr Modi has “put a target on Indian Muslims’ backs, redirecting the anger of poor and marginalised Hindu communities away from crony capitalists and the privileged upper castes”.

Mr Modi’s tirades are meant to distract an electorate suffering from high inflation and a lack of jobs despite rapid economic growth. His Bharatiya Janata party’s political strategy is to emphasise threats to Hindu civilisation, and the need for a united Hindu nation against Muslims. However, Mr Modi has fused this Hindu nationalism with the idea that he was sent by God. The Congress party’s Rahul Gandhi, his main opponent, suggested that anyone else making such a claim needed to see a psychiatrist.

Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.

Continue reading...

© Photograph: Debajyoti Chakraborty/NurPhoto/REX/Shutterstock

Egypt tight-lipped over Israeli takeover of Gaza buffer zone

30 May 2024 at 12:49

Cairo seeks to keep lid on public anger and avoid escalation as IDF moves into Philadelphi corridor in breach of 1979 peace accord

Egypt has reacted with a wall of silence to the Israeli takeover of a buffer zone in southern Gaza, in apparent defiance of a decades-old peace agreement, as Cairo sought to keep a lid on simmering public anger while also avoiding an escalation in tensions with Israel.

Israel said on Wednesday that its forces had gained “operational” control over the Philadelphi corridor – the Israeli military’s code name for the 9-mile-long (14km) strip of land along the Gaza-Egypt border. Under the terms of the 1979 peace accord between Egypt and Israel, each side is allowed to deploy only a small number of troops or border guards in a demilitarised zone that stretches along the entire Israel-Egypt border and encompasses the corridor.


© Photograph: Xinhua/REX/Shutterstock

‘Unliveable’: Delhi’s residents struggle to cope in record-breaking heat

Temperatures of more than 45C have left the population of 29 million exhausted – but the poorest suffer most

As the water tanker drove into a crowded Delhi neighbourhood, a ruckus erupted. Dozens of residents ran frantically behind it, brandishing buckets, bottles and hoses, and jumped on top of it to get even a drip of what was stored inside. Temperatures that day had soared to 49C (120F), the hottest day on record – and in many places across India’s vast capital, home to more than 29 million people, water had run out.

Every morning, Tripti, a social health worker who lives in the impoverished enclave of Vivekanand Camp, is among those who have to stand under the blazing sun with buckets and pots, waiting desperately for the water tanker to arrive.


© Photograph: Anushree Fadnavis/Reuters

You Can Use Pretty Much All of ChatGPT for Free Now

30 May 2024 at 10:30

OpenAI continues to expand the options available to free ChatGPT users. The company started by making its newest model, GPT-4o, generally free to all users—though there are limitations unless you pay—and now it has expanded the accessibility of major 4o features by removing the paywalls on file uploads, vision (which can use your camera for input), and GPTs (or custom chatbots). Browse, data analysis, and memory, also formerly paywalled features, were already available to free users in a similarly limited capacity.

OpenAI has been clear about its plans to expand the offerings that its free users can take advantage of since it first revealed GPT-4o a few weeks back, and it has made good on those promises so far. With these changes, it makes paying for ChatGPT Plus even less important for many, which is surprisingly a good thing for OpenAI. More users means more usage testing—something that will only help improve the models running ChatGPT.

There will, of course, still be usage limits on the free version of ChatGPT. Once you reach those limits, you’ll be kicked back to GPT-3.5, as OpenAI hasn’t made GPT-4 or GPT-4 Turbo accessible in the free tier. Despite that, some paid users are not exactly happy with the change, with many wondering what the point of ChatGPT Plus is supposed to be now.

Paying users still get up to five times more messages with GPT-4o than free users do, but that hasn't stopped some from taking to social media to ask questions like “what about the paid users?” and “what do paid users get? False hopes of GPT5.”

ChatGPT Plus subscribers still get access to the ability to make their own GPTs, and based on everything we know so far, Plus users are the only ones who will get 4o's upcoming voice-activated mode, though that could certainly change in the future.

Giving more people access to ChatGPT’s best features brings the chatbot in line with one of its biggest competitors, Claude, which allows free users access to the latest version of its AI model (albeit through a less powerful version of that model).

US Slows Plans To Retire Coal-Fired Plants as Power Demand From AI Surges

By: msmash
30 May 2024 at 11:21
The staggering electricity demand needed to power next-generation technology is forcing the US to rely on yesterday's fuel source: coal. From a report: Retirement dates for the country's ageing fleet of coal-fired power plants are being pushed back as concerns over grid reliability and expectations of soaring electricity demand force operators to keep capacity online. The shift in phasing out these facilities underscores a growing dilemma facing the Biden administration as the US race to lead in artificial intelligence and manufacturing drives an unprecedented growth in power demand that clashes with its decarbonisation targets. The International Energy Agency estimates the AI application ChatGPT uses nearly 10 times as much electricity as Google Search. An estimated 54 gigawatts of US coal-powered generation assets, about 4 per cent of the country's total electricity capacity, is expected to be retired by the end of the decade, a 40 per cent downward revision from last year, according to S&P Global Commodity Insights, citing reliability concerns. "You can't replace the fossil plants fast enough to meet the demand," said Joe Craft, chief executive of Alliance Resource Partners, one of the largest US coal producers. "In order to be a first mover on AI, we're going to need to embrace maintaining what we have." Operators slowing down retirements include Alliant Energy, which last week delayed plans to convert its Wisconsin coal-fired plant to gas from 2025 to 2028. Earlier this year, FirstEnergy announced it was scrapping its 2030 target to phase out coal, citing "resource adequacy concerns." Further reading: Data Centers Could Use 9% of US Electricity By 2030, Research Institute Says.


Very Few People Are Using 'Much Hyped' AI Products Like ChatGPT, Survey Finds

By: BeauHD
30 May 2024 at 06:00
A survey of 12,000 people in six countries -- Argentina, Denmark, France, Japan, the UK, and the USA -- found that very few people are regularly using AI products like ChatGPT. Unsurprisingly, the group bucking the trend are young people ages 18 to 24. The BBC reports: Dr Richard Fletcher, the report's lead author, told the BBC there was a "mismatch" between the "hype" around AI and the "public interest" in it. The study examined views on generative AI tools -- the new generation of products that can respond to simple text prompts with human-sounding answers as well as images, audio and video. "Large parts of the public are not particularly interested in generative AI, and 30% of people in the UK say they have not heard of any of the most prominent products, including ChatGPT," Dr Fletcher said.

This research attempted to gauge what the public thinks, finding:

- The majority expect generative AI to have a large impact on society in the next five years, particularly for news, media and science
- Most said they think generative AI will make their own lives better
- When asked whether generative AI will make society as a whole better or worse, people were generally more pessimistic

In more detail, the study found:

- While there is widespread awareness of generative AI overall, a sizable minority of the public -- between 20% and 30% of the online population in the six countries surveyed -- have not heard of any of the most popular AI tools.
- In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.
- Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18-24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.
- Roughly equal proportions across six countries say that they have used generative AI for getting information (24%) as creating various kinds of media, including text but also audio, code, images, and video (28%).
- Just 5% across the six countries covered say that they have used generative AI to get the latest news.


The fake news divide: how Modi’s rule is fracturing India – video

India is in the final stages of a general election, and almost one billion people are registered to vote. The country's prime minister, Narendra Modi, has been in power for more than 10 years, and his Hindu nationalist Bharatiya Janata party (BJP) is seeking a third term.


But critics of Modi and the BJP say his government has become increasingly authoritarian, fracturing the country along religious lines and threatening India’s secular democracy. At the same time, the space for freedom of speech has been shrinking while disinformation and hate speech have exploded on social media.


The Guardian’s video team travelled through India to explore how fake news and censorship might be shaping the outcome of the election


© Photograph: the Guardian

Dictatorships depend on the willing

By: chavenet
30 May 2024 at 04:19
The Stasi files offer an astonishingly granular picture of life in a dictatorship—how ordinary people act under suspicious eyes. Nearly three hundred thousand East Germans were working for the Stasi by the time the Wall fell, in 1989, including some two hundred thousand inoffizielle Mitarbeiter, or unofficial collaborators, like Genin. In a population of sixteen million, that was one spy for every fifty to sixty people. In the years since the files were made public, their revelations have derailed political campaigns, tarnished artistic legacies, and exonerated countless citizens who were wrongly accused or imprisoned. Yet some of the files that the Stasi most wanted to hide were never released. In the weeks before the Wall fell, agents destroyed as many documents as they could. Many were pulped, shredded, or burned, and lost forever. But between forty and fifty-five million pages were just torn up, and later stuffed in paper sacks. from Piecing Together the Secrets of the Stasi [The New Yorker; ungated]

‘This election is critical for Congress’: can India’s Gandhi dynasty survive?

Experts say a third consecutive electoral defeat in June could throw into question the political viability of the party and the family synonymous with it

The Nehru-Gandhi dynasty were once the giants of India’s politics – the family at the forefront of the independence battle, who built up the formidable Congress party and produced three prime ministers.

But now the family are fighting for their survival. India’s prime minister, Narendra Modi, and his Bharatiya Janata party (BJP) government are seeking a third term in power in elections taking place over six weeks. Most analysts believe a BJP victory against Congress and its allies once again seems likely.


© Photograph: Arun Sankar/AFP/Getty Images

How an Indian state became a testing ground for Hindu nationalism – podcast

Hannah Ellis-Petersen reports from Uttarakhand, which offers a glimpse into what the future might look like if the BJP retains its power in national elections

“One of the most significant elements of Modi’s rule is how his Hindu nationalist politics has reshaped the country,” the Guardian’s south Asia correspondent Hannah Ellis-Petersen tells Michael Safi. “Uttarakhand is a state where I think we’ve seen the real consequences of that narrative play out.”

Hannah explains how religious tensions have been stoked in the state of Uttarakhand through conspiracy theories, political rhetoric and the destruction of Muslim shrines and tombs. We hear about the rising violence against the Muslim minority in the area and why this election is a concerning time for them in the state.


© Photograph: Aakash Hassan/The Observer

BreachForums resurrected after FBI seizure – Source: securityaffairs.com


Source: securityaffairs.com – Author: Pierluigi Paganini. The cybercrime forum BreachForums has been resurrected two weeks after a US law enforcement operation seized its infrastructure and took down the platform. The platform is now reachable […]


ABN Amro discloses data breach following an attack on a third-party provider – Source: securityaffairs.com


Source: securityaffairs.com – Author: Pierluigi Paganini. Dutch bank ABN Amro disclosed a data breach after third-party services provider AddComm suffered a ransomware attack. AddComm distributes […]


Christie’s disclosed a data breach after a RansomHub attack – Source: securityaffairs.com


Source: securityaffairs.com – Author: Pierluigi Paganini. Auction house Christie’s disclosed a data breach after the ransomware group RansomHub threatened to leak data stolen in an attack earlier this month. The website […]


Experts released PoC exploit code for RCE in Fortinet SIEM – Source: securityaffairs.com


Source: securityaffairs.com – Author: Pierluigi Paganini. Security researchers at Horizon3’s Attack Team released a proof-of-concept (PoC) exploit for a remote code execution flaw, tracked as CVE-2024-23108, in Fortinet’s SIEM solution. […]


'AI Overviews' Is a Mess, and It Seems Like Google Knows It

29 May 2024 at 10:00

At its Google I/O keynote earlier this month, Google made big promises about AI in Search, saying that users would soon be able to “Let Google do the Googling for you.” That feature, called AI Overviews, launched earlier this month. The result? The search giant spent Memorial Day weekend scrubbing AI answers from the web.

Since Google AI search went live for everyone in the U.S. on May 14, AI Overviews have suggested users put glue in their pizza sauce, eat rocks, and use a “squat plug” while exercising (you can guess what that last one is referring to).

While some examples circulating on social media have clearly been photoshopped for a joke, others were confirmed by the Lifehacker team—Google suggested I specifically use Elmer’s glue in my pizza. Unfortunately, if you try to search for these answers now, you’re likely to see the “an AI overview is not available for this search” disclaimer instead.

Why are Google’s AI Overviews like that?

This isn’t the first time Google’s AI searches have led users astray. When the beta for AI Overviews, known as Search Generative Experience, went live in March, users reported that the AI was sending them to sites known to spread malware and spam.

What's causing these issues? Well, for some answers, it seems like Google’s AI can’t take a joke. Specifically, the AI isn’t capable of discerning a sarcastic post from a genuine one, and it seems to love scanning Reddit for answers. If you’ve ever spent any time on Reddit, you can see what a bad combination that makes.

After some digging, users discovered the source of the AI’s “glue in pizza” advice was an 11-year-old post from a Reddit user who goes by the name “fucksmith.” Similarly, the use of “squat plugs” is an old joke on Reddit’s exercise forums (Lifehacker Senior Health Editor Beth Skwarecki breaks down that particular bit of unintentional misinformation here.)

These are just a few examples of problems with AI Overviews, and another, the AI's tendency to cite satirical articles from The Onion as gospel (no, geologists don't actually recommend eating one small rock per day), illustrates the problem particularly well: The internet is littered with jokes that would make for extremely bad advice when repeated deadpan, and that's just what AI Overviews is doing.

Google's AI search results do at least explicitly source most of their claims (though discovering the origin of the glue-in-pizza advice took some digging). But unless you click through to read the complete article, you’ll have to take the AI’s word on their accuracy—which can be problematic if these claims are the first thing you see in Search, at the top of the results page and in big bold text. As you’ll notice in Beth’s examples, like with a bad middle school paper, the words “some say” are doing a lot of heavy lifting in these responses.

Is Google pulling back on AI Overviews?

When AI Overviews get something wrong, they are, for the most part, worth a laugh, and nothing more. But when referring to recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially fatal mushroom identification tips that the search engine also served to Beth.

Dangerous mushroom advice in AI Overviews. Credit: Beth Skwarecki

Google has attempted to avoid responsibility for any inaccuracies by tagging the end of its AI Overviews with “Generative AI is experimental” (in noticeably smaller text), although it’s unclear if that will hold up in court should anyone get hurt thanks to an AI Overview suggestion.

There are plenty more examples of AI Overview messing up circulating around the internet, from Air Bud being confused for a true story to Barack Obama being referred to as Muslim, but suffice it to say that the first thing you see in Google Search is now even less reliable than it was when all you had to worry about was sponsored ads.

Assuming you even see it: Anecdotally, and perhaps in response to the backlash, AI Overviews currently seem to be far less prominent in search results than they were last week. While writing this article, I tried searching for common advice and facts like “how to make banana pudding” or “name the last three U.S. presidents”—things AI Overviews had confidently answered for me on prior searches without error. For about two dozen queries, I saw no overviews, which struck me as suspicious given the email Google representative Meghann Farnsworth sent to The Verge that indicated the company is “taking swift action” to remove certain offending AI answers.

Google AI Overviews is broken in Search Labs

Perhaps Google is simply showing an abundance of caution, or perhaps the company is paying attention to how popular anti-AI hacks like clicking on Search’s new web filter or appending udm=14 to the end of the search URL have become.
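The udm=14 trick is nothing more than a query-string parameter that requests the plain "Web" results tab. A minimal sketch of building such a URL (the parameter is undocumented by Google and could change or disappear at any time):

```python
from urllib.parse import urlencode

def classic_search_url(query: str) -> str:
    """Build a Google Search URL with the 'Web' filter preselected.

    The udm=14 parameter asks for the plain list of links, which
    omits AI Overviews and knowledge panels.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(classic_search_url("how to make banana pudding"))
# https://www.google.com/search?q=how+to+make+banana+pudding&udm=14
```

Some users save a URL in this shape as a custom search engine in their browser, so every query goes through the filter by default.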

Whatever the case, it does seem like something has changed. In the top-left (on mobile) or top-right (on desktop) corner of Search in your browser, you should now see what looks like a beaker. Click on it, and you’ll be taken to the Search Labs page, where you’ll see a prominent card advertising AI Overviews (if you don’t see the beaker, sign up for Search Labs at the above link). You can click on that card to see a toggle that can be switched off, but since the toggle doesn’t actually affect search at large, what we care about is what’s underneath it.

Here, you’ll find a demo for AI Overviews with a big bright “Try an example” button that will display a few low-stakes answers that show the feature in its best light. Below that button are three more “try” buttons, except two of them now no longer lead to AI Overviews. I simply saw a normal page of search results when I clicked on them, with the example prompts added to my search bar but not answered by Gemini.

If even Google itself isn’t confident in its hand-picked AI Overview examples, that’s probably a good indication that they are, at the very least, not the first thing users should see when they ask Google a question. 

Detractors might say that AI Overviews are simply the logical next step from the knowledge panels the company already uses, where Search directly quotes media without needing to take users to the sourced webpage—but knowledge panels are not without controversy themselves.

Is AI Feeling Lucky?

On May 14, the same day AI Overviews went live, Google Liaison Danny Sullivan proudly declared his advocacy for the web filter, another new feature that debuted alongside AI Overviews, to much less fanfare. The web filter disables both AI and knowledge panels, and is at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.

It’s all reminiscent of a debate from a little over a decade ago, when Google drastically reduced the presence of the “I’m feeling lucky” button. The quirky feature worked like a prototype for AI Overviews and knowledge panels, trusting so deeply in the algorithm’s first Google search result being correct that it would simply send users right to it, rather than letting them check the results themselves.

The opportunities for a search to be co-opted by malware or misinformation were just as prevalent then, but the real factor behind I’m Feeling Lucky’s death was that nobody used it. Accounting for just 1% of searches, the button just wasn’t worth the millions of dollars in advertising revenue it was losing Google by directing users away from the search results page before they had a chance to see any ads. (You can still use “I’m Feeling Lucky,” but only on desktop, and only if you scroll down past your autocompleted search suggestions.)

It’s unlikely AI Overviews will go the way of I’m Feeling Lucky any time soon—the company has spent a lot of money on AI, and “I’m Feeling Lucky” took until 2010 to die. But at least for now, it seems to have about as much prominence on the site as Google’s most forgotten feature. That users aren’t responding to these AI-generated options suggests that you don’t really want Google to do the Googling for you.

Anti-American partnerships during WWII and the early Cold War

29 May 2024 at 12:49
Confronting Another Axis? History, Humility, and Wishful Thinking . A long historical essay by Philip Zelikow, describing the perspectives of past and present US adversaries. "Zelikow warns that the United States faces an exceptionally volatile time in global politics and that the period of maximum danger might be in the next one to three years. Adversaries can miscalculate and recalculate, and it can be difficult to fully understand internal divisions within an adversary's government, how rival states draw their own lessons from different interpretations of history, and how they might quickly react to a new event that appears to shift power dynamics." Via Noah Smith.

OpenAI board first learned about ChatGPT from Twitter, according to former member

29 May 2024 at 11:54
Helen Toner, former OpenAI board member, speaks during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023. (credit: Getty Images)

In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.

"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.


Privacy Implications of Tracking Wireless Access Points – Source: securityboulevard.com


Source: securityboulevard.com – Author: Bruce Schneier

Brian Krebs reports on research into geolocating routers: Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geolocate devices. Researchers from the University of Maryland say they relied on publicly available data […]

The post Privacy Implications of Tracking Wireless Access Points – Source: securityboulevard.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

PyPI crypto-stealer targets Windows users, revives malware campaign

By: Ax Sharma
29 May 2024 at 08:00

Sonatype has discovered 'pytoileur', a malicious PyPI package hiding code that downloads and installs trojanized Windows binaries capable of surveillance, achieving persistence, and crypto-theft. Our discovery of the malware led us to probe into similar packages that are part of a wider, months-long "Cool package" campaign.
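One practical takeaway from campaigns like this is to audit what is actually installed in a Python environment against known-bad package names. The following is a minimal, hypothetical sketch—the KNOWN_MALICIOUS set here is an assumed stand-in for a real advisory feed, and a proper audit would use a dedicated tool such as pip-audit rather than a hand-maintained list:

```python
import importlib.metadata

# Assumed blocklist for illustration only; in practice, pull advisories
# from a maintained source (e.g., the PyPI advisory database) instead.
KNOWN_MALICIOUS = {"pytoileur"}

def flag_suspicious(installed_names):
    """Return installed distribution names that appear on the blocklist."""
    return sorted({name.lower() for name in installed_names} & KNOWN_MALICIOUS)

def audit_environment():
    """Check every distribution in the current environment."""
    names = [dist.metadata["Name"] for dist in importlib.metadata.distributions()]
    return flag_suspicious(names)
```

Name matching alone is a weak defense—the "Cool package" campaign shows attackers simply republish under fresh names—but it catches the known cases cheaply.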

The post PyPI crypto-stealer targets Windows users, revives malware campaign appeared first on Security Boulevard.
