Could AI Replace CEOs?
Read more of this story at Slashdot.
On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers, and the unions that represent them, were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern."
"The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."
The Vox Union (which represents The Verge, SB Nation, and Vulture, among other publications) reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."
On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled "AI Overviews: About last week." In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company stops short of framing it that way.
To recap, the AI Overview feature, which the company showed off at Google I/O a few weeks ago, aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems. Right now, it's an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
While Google claims this approach is "highly effective" and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.
Google is finally explaining what the heck happened with its AI Overviews.
For those who arenβt caught up, AI Overviews were introduced to Googleβs search engine on May 14, taking the beta Search Generative Experience and making it live for everyone in the U.S. The feature was supposed to give an AI-powered answer at the top of almost every search, but it wasnβt long before it started suggesting that people put glue in their pizzas or follow potentially fatal health advice. While theyβre technically still active, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an answer from Googleβs robots.
In a blog post yesterday, Google Search VP Liz Reid clarified that while the feature underwent testing, βthereβs nothing quite like having millions of people using the feature with many novel searches.β The company acknowledged that AI Overviews hasnβt had the most stellar reputation (the blog is titled βAbout last weekβ), but it also said it discovered where the breakdowns happened and is working to fix them.
βAI Overviews work very differently than chatbots and other LLM products,β Reid said. βTheyβre not simply generating an output based on training data,β but instead running βtraditional βsearchβ tasksβ and providing information from βtop web results.β Therefore, she doesnβt connect errors to hallucinations so much as the model misreading whatβs already on the web.
βWe saw AI Overviews that featured sarcastic or troll-y content from discussion forums,β she continued. βForums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice.β In other words, because the robot canβt distinguish between sarcasm and actual help, it can sometimes present the former as the latter.
Similarly, when there are βdata voidsβ on certain topics, meaning not a lot has been written seriously about them, Reid said Overviews was accidentally pulling from satirical sources instead of legitimate ones. To combat these errors, the company has now supposedly made improvements to AI Overviews, saying:
We built better detection mechanisms for nonsensical queries that shouldnβt show an AI Overview, and limited the inclusion of satire and humor content.
We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.
We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.
For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.
All these changes mean AI Overviews probably arenβt going anywhere soon, even as people keep finding new ways to remove Google AI from search. Despite social media buzz, the company said βuser feedback shows that with AI Overviews, people have higher satisfaction with their search results,β going on to talk about how dedicated Google is to βstrengthening [its] protections, including for edge cases.β
That said, it looks like thereβs still some disconnect between Google and users. Elsewhere in its post, Google called out users for βnonsensical new searches, seemingly aimed at producing erroneous results.β
Specifically, the company questioned why someone would search for βHow many rocks should I eat?β The idea was to break down where data voids might pop up, and while Google said these questions βhighlighted some specific areas that we needed to improve,β the implication seems to be that problems mostly appear when people go looking for them.
Similarly, Google denied responsibility for several AI Overview answers, saying that βdangerous results for topics like leaving dogs in cars, smoking while pregnant, and depressionβ were faked.
Thereβs certainly a tone of defensiveness to the post, even as Google spends billions on AI engineers who are presumably paid to find these kinds of mistakes before they go live. Google says AI Overviews only βmisinterpret languageβ in βa small number of cases,β but we do feel bad for anyone sincerely trying to up their workout routine who might have followed its "squat plug" advice.
Thus far, AI devices like the Rabbit R1 and the Humane Ai Pin have been all hype, no substance. The gadgets have largely failed to deliver on their promise as true AI companions, but even if they didn't suffer constant glitches from a rushed-to-market strategy, they would still have a fundamental flaw: Why do I need a separate device for AI when I can do basically everything advertised with a smartphone?
It's a tough sell, and it's made me quite skeptical of AI hardware taking off in any meaningful way. I imagine anyone interested in AI is more likely to download the ChatGPT app and ask it about the world around them rather than drop hundreds of dollars on a standalone device. If you have an iPhone, however, you may soon be able to forget about a dedicated AI app altogether.
Although Apple has been totally late to the AI party, it might be working on something that actually succeeds where Rabbit and Humane failed. According to Bloomberg's Mark Gurman, Apple is planning a big overhaul of Siri for a later version of iOS 18. While rumors previously suggested Apple was working on making interactions with Siri more natural, the latest leaks suggest the company is giving Siri the power to control "hundreds" of features within Apple apps: You say what you want the assistant to do (e.g., crop this photo), and it will. If true, it's a huge leap from using Siri to set alarms and check the weather.
Gurman says Apple had to essentially rewire Siri for this feature, integrating the assistant with LLMs for all its AI processing. He says Apple is planning on making Siri a major showcase at WWDC, demoing how the new AI assistant can open documents, move notes to specific folders, manage your email, and create a summary for an article you're reading. At this point, AI Siri reportedly handles one command at a time, but Apple wants to roll out an update that lets you stack commands as well. Theoretically, you could eventually ask Siri to perform multiple functions across apps. Apple also plans to start with its own apps, so Siri wouldn't be able to interact this way within Instagram or YouTube, at least not yet.
It also won't be ready for some time: Although iOS 18 is likely to drop in the fall, Gurman thinks AI Siri won't be here until at least next year. Beyond that, we don't know much about this change at this time. But the idea that you can ask Siri to do anything on your smartphone is intriguing: In Messages, you could say "Hey Siri, react with a heart on David's last message." In Notes, you could say "Hey Siri, invite Sarah and Michael to collaborate on this note." If Apple has found a way to make virtually every feature in iOS Siri-friendly, that could be a game changer.
In fact, it could turn Siri (and, to a greater extent, your iPhone) into the AI assistant companies are struggling to sell the public on. Imagine a future when you can point your iPhone at a subject and ask Siri to tell you more about it. Then, maybe you ask Siri to take a photo of the subject, crop it, and email it to a friend, complete with the summary you just learned about. Maybe you're scrolling through a complex article, and you ask Siri to summarize it for you. In this ideal version of AI Siri, you don't need a Rabbit R1 or a Humane Ai Pin: You just need Apple's latest and greatest iPhone. Not only will Siri do everything these AI devices say they can, it'll also do everything else you normally do on your iPhone. Win-win.
The iPhone is the other side of the coin, though: These features are power intensive, so Apple is rumored to be figuring out which features can run on-device and which need to run in the cloud. The more features Apple outsources to the cloud, the greater the security risk, although some rumors say the company is working on making even cloud-based AI features secure. But Apple will likely keep AI-powered Siri features running on-device, which means you might need at least an iPhone 15 Pro to run them.
The truth is, we won't know exactly what AI features Apple is cooking up until they hit the stage in June. If Gurman's sources are to be believed, however, Apple's delayed AI strategy might just work out in its favor.
Transparency isn't just about promising action, it's about proving it. It means sharing the data and results that show you're following through on your commitments.
The post Cybersecurity Insights with Contrast CISO David Lindner | 5/31/24 appeared first on Security Boulevard.
OpenAI has revealed operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.
The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAIβs policies prohibit the use of its models to deceive or mislead others.
The content focused on issues βincluding Russiaβs invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,β OpenAI said in the report.
Altman spent part of his virtual appearance fending off thorny questions about governance, an AI voice controversy and criticism from ousted board members.
The post OpenAIβs Altman Sidesteps Questions About Governance, Johansson at UN AI Summit appeared first on SecurityWeek.
Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences
OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.
Malicious actors used the companyβs generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.
Generative AI applications like ChatGPT, Gemini, and Copilot are known as chatbots, since you're meant to talk to them. So, I guess it's only natural that chat apps would want to add chatbots to their platforms, whether or not users actually, you know, use them.
Telegram is the latest such app to add a chatbot to its array of features. Its chatbot of choice? Copilot. While Copilot has landed on other Microsoft-owned platforms before, Telegram is among the first third-party apps to offer Copilot functionality directly, although it certainly isn't obvious if you open the app today.
When I first learned about Telegram's Copilot integration, I fired up the app and was met with a whole lot of nothing. That isn't totally unusual for new features, as they usually roll out gradually to users over time. However, as it turns out, accessing Copilot in Telegram is a little convoluted. You actually need to search for Copilot by its Telegram username, @CopilotOfficialBot. Don't just search for "Copilot," as you'll find an assortment of unauthorized options. I don't advise chatting with any random bot you find on Telegram, certainly not any masquerading as the real deal.
You can also access it from Microsoft's "Copilot for Telegram" site. You'll want to open the link on the device you use Telegram on, as when you select "Try now," it'll redirect to Telegram.
Whichever way you pull up the Copilot bot, you'll end up in a new chat with Copilot. A splash screen informs you that Copilot in Telegram is in beta, and invites you to hit "Start" to use the bot. Once you do, you're warned about the risks of using AI. (Hallucinations happen all the time, after all.) In order to proceed, hit "I Accept." You can start sending messages without accepting, but the bot will just respond with the original prompt to accept, so if you want to get anywhere you will need to agree to the terms.
From here, you'll need to verify the phone number you use with Telegram. Hit "Send my mobile number," then hit "OK" on the pop-up to share your phone number with Copilot. You don't need to wait for a verification text: Once you share your number, you're good to go.
From here, it's Copilot, but in Telegram. You can ask the bot questions on a variety of subjects and tasks, and it will respond in kind. This version of the bot is connected to the internet, so it can look up real-time information for you, but you can't use Copilot's image generator here. If you try, the bot will redirect you to the main Copilot site, the iOS app, or the Android app.
There isn't much here that's particularly Telegram-related, other than a function that will share an invite to your friends to try Copilot. You also only get 30 "turns" per day, so keep that in mind before you get too carried away with chatting.
At the end of the day, this seems to be a play by Microsoft to get Copilot in the hands of more users. Maybe you wouldn't download the Copilot app yourself, but if you're an avid Telegram user, you may be curious enough to try using the bot in between conversations. I suspect this won't be the last Copilot integration we see from Microsoft, as the company continues to expand its AI strategy.
Apple and OpenAI have successfully made a deal to include OpenAI's generative AI technology in Apple's software, according to The Information, which cites a source who has spoken to OpenAI CEO Sam Altman about the deal.
It was previously reported by Bloomberg that the deal was in the works. The news appeared in a longer article about Altman and his growing influence within the company.
"Now, [Altman] has fulfilled a longtime goal by striking a deal with Apple to use OpenAIβs conversational artificial intelligence in its products, which could be worth billions of dollars to the startup if it goes well," according to The Information's source.
On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.
The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications (necessary for running neural network architectures) in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
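The reason this workload parallelizes so well can be seen in the shape of the computation itself. Below is a minimal pure-Python sketch (illustrative only, not how any real framework or GPU kernel is implemented) of the matrix multiply at the heart of a neural-network layer. Notice that every output cell depends only on one row and one column of the inputs, which is exactly what lets a GPU, or a cluster of interconnected accelerators, compute thousands of cells at once.

```python
def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), as nested lists."""
    m, k, n = len(a), len(b), len(b[0])
    # Each out[i][j] depends only on row i of a and column j of b,
    # so all m*n cells could, in principle, be computed simultaneously.
    return [[sum(a[i][p] * b[p][j] for p in range(k))
             for j in range(n)]
            for i in range(m)]

# A toy "layer": one input vector of 2 features, a 2x3 weight matrix,
# producing 3 outputs.
x = [[1.0, 2.0]]
w = [[0.5, -1.0, 2.0],
     [1.5,  0.0, 0.5]]
print(matmul(x, w))  # [[3.5, -1.0, 3.0]]
```

A real model does this with matrices millions of entries wide, repeated across many layers, which is why splitting the work across linked accelerators pays off.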
This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL), created by Intel in 2019, which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.
OpenAI continues to expand the options available to free ChatGPT users. The company started by making its newest model, GPT-4o, generally free to all users (though there are limitations unless you pay), and now it has expanded the accessibility of major 4o features by removing the paywalls on file uploads, vision (which can use your camera for input), and GPTs (or custom chatbots). Browse, data analysis, and memory, also formerly paywalled features, were already available to free users in a similarly limited capacity.
OpenAI has been clear about its plans to expand the offerings that its free users can take advantage of since it first revealed GPT-4o a few weeks back, and it has made good on those promises so far. With these changes, it makes paying for ChatGPT Plus even less important for many, which is surprisingly a good thing for OpenAI. More users means more usage testing, something that will only help improve the models running ChatGPT.
There will, of course, still be usage limits on the free version of ChatGPT. Once you reach those limits, youβll be kicked back to GPT-3.5, as OpenAI hasnβt made GPT-4 or GPT-4 Turbo accessible in the free tier. Despite that, some paid users are not exactly happy with the change, with many wondering what the point of ChatGPT Plus is supposed to be now.
Paying users still get up to five times more messages with GPT-4o than free users do, but that hasn't stopped some from taking to social media to ask questions like βwhat about the paid users?β and βwhat do paid users get? False hopes of GPT5.β
ChatGPT Plus subscribers still get access to the ability to make their own GPTs, and based on everything we know so far, Plus users are the only ones who will get 4o's upcoming voice-activated mode, though that could certainly change in the future.
Giving more people access to ChatGPTβs best features brings the chatbot in line with one of its biggest competitors, Claude, which gives free users access to the latest version of its AI model (albeit through a less powerful variant of that model).
At its Google I/O keynote earlier this month, Google made big promises about AI in Search, saying that users would soon be able to βLet Google do the Googling for you.β That feature, called AI Overviews, launched earlier this month. The result? The search giant spent Memorial Day weekend scrubbing AI answers from the web.
Since Google AI search went live for everyone in the U.S. on May 14, AI Overviews have suggested users put glue in their pizza sauce, eat rocks, and use a βsquat plugβ while exercising (you can guess what that last one is referring to).
While some examples circulating on social media have clearly been photoshopped for a joke, others were confirmed by the Lifehacker teamβGoogle suggested I specifically use Elmerβs glue in my pizza. Unfortunately, if you try to search for these answers now, youβre likely to see the βan AI overview is not available for this searchβ disclaimer instead.
This isnβt the first time Googleβs AI searches have led users astray. When the beta for AI Overviews, known as Search Generative Experience, went live in March, users reported that the AI was sending them to sites known to spread malware and spam.
What's causing these issues? Well, for some answers, it seems like Googleβs AI canβt take a joke. Specifically, the AI isnβt capable of discerning a sarcastic post from a genuine one, and it seems to love scanning Reddit for answers. If youβve ever spent any time on Reddit, you can see what a bad combination that makes.
After some digging, users discovered the source of the AIβs βglue in pizzaβ advice was an 11-year-old post from a Reddit user who goes by the name βfucksmith.β Similarly, the use of βsquat plugsβ is an old joke on Redditβs exercise forums (Lifehacker Senior Health Editor Beth Skwarecki breaks down that particular bit of unintentional misinformation here.)
These are just a few examples of problems with AI Overviews, and another one, the AI's tendency to cite satirical articles from The Onion as gospel (no, geologists don't actually recommend eating one small rock per day), illustrates the problem particularly well: The internet is littered with jokes that would make for extremely bad advice when repeated deadpan, and that's just what AI Overviews is doing.
Google's AI search results do at least explicitly source most of their claims (though discovering the origin of the glue-in-pizza advice took some digging). But unless you click through to read the complete article, youβll have to take the AIβs word on their accuracy, which can be problematic if these claims are the first thing you see in Search, at the top of the results page and in big bold text. As youβll notice in Bethβs examples, like with a bad middle school paper, the words βsome sayβ are doing a lot of heavy lifting in these responses.
When AI Overviews get something wrong, they are, for the most part, worth a laugh, and nothing more. But when referring to recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially fatal mushroom identification tips that the search engine also served to Beth.
Google has attempted to avoid responsibility for any inaccuracies by tagging the end of its AI Overviews with βGenerative AI is experimentalβ (in noticeably smaller text), although itβs unclear if that will hold up in court should anyone get hurt thanks to an AI Overview suggestion.
There are plenty more examples of AI Overviews messing up circulating around the internet, from Air Bud being confused for a true story to Barack Obama being referred to as Muslim, but suffice it to say that the first thing you see in Google Search is now even less reliable than it was when all you had to worry about was sponsored ads.
Assuming you even see it: Anecdotally, and perhaps in response to the backlash, AI Overviews currently seem to be far less prominent in search results than they were last week. While writing this article, I tried searching for common advice and facts like βhow to make banana puddingβ or βname the last three U.S. presidentsβ, things AI Overviews had confidently answered for me on prior searches without error. For about two dozen queries, I saw no overviews, which struck me as suspicious given the email Google representative Meghann Farnsworth sent to The Verge that indicated the company is βtaking swift actionβ to remove certain offending AI answers.
Perhaps Google is simply showing an abundance of caution, or perhaps the company is paying attention to how popular anti-AI hacks like clicking on Searchβs new web filter or appending udm=14 to the end of the search URL have become.
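For the curious, the udm=14 trick is just a query parameter tacked onto the search URL. Here's a quick sketch of how you might build such a URL yourself; the parameter's behavior is Google's to change at any time, and the function name is my own:

```python
from urllib.parse import urlencode

def web_only_search_url(query):
    """Build a Google Search URL requesting the stripped-down 'Web' view.

    The udm=14 parameter is an undocumented-but-widely-shared way to get
    the classic links-only results page, skipping AI Overviews.
    """
    params = urlencode({"q": query, "udm": 14})
    return f"https://www.google.com/search?{params}"

print(web_only_search_url("how to make banana pudding"))
# https://www.google.com/search?q=how+to+make+banana+pudding&udm=14
```

Some users go further and register this URL pattern as their browser's default search engine, so every search gets the plain results page.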
Whatever the case, it does seem like something has changed. In the top-left (on mobile) or top-right (on desktop) corner of Search in your browser, you should now see what looks like a beaker. Click on it, and youβll be taken to the Search Labs page, where youβll see a prominent card advertising AI Overviews (if you donβt see the beaker, sign up for Search Labs at the above link). You can click on that card to see a toggle that can be swapped off, but since the toggle doesnβt actually affect search at large, what we care about is whatβs underneath it.
Here, youβll find a demo for AI Overviews with a big bright βTry an exampleβ button that will display a few low-stakes answers that show the feature in its best light. Below that button are three more βtryβ buttons, except two of them now no longer lead to AI Overviews. I simply saw a normal page of search results when I clicked on them, with the example prompts added to my search bar but not answered by Gemini.
If even Google itself isnβt confident in its hand-picked AI Overview examples, thatβs probably a good indication that they are, at the very least, not the first thing users should see when they ask Google a question.
Detractors might say that AI Overviews are simply the logical next step from the knowledge panels the company already uses, where Search directly quotes media without needing to take users to the sourced webpage, but knowledge panels are not without controversy themselves.
On May 14, the same day AI Overviews went live, Google Liaison Danny Sullivan proudly declared his advocacy for the web filter, another new feature that debuted alongside AI Overviews, to much less fanfare. The web filter disables both AI and knowledge panels, and is at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.
Itβs all reminiscent of a debate from a little over a decade ago, when Google drastically reduced the presence of the βIβm Feeling Luckyβ button. The quirky feature worked like a prototype for AI Overviews and knowledge panels, trusting so deeply in the algorithmβs first Google search result being correct that it would simply send users right to it, rather than letting them check the results themselves.
The opportunities for a search to be coopted by malware or misinformation were just as prevalent then, but the real factor behind Iβm Feeling Luckyβs death was that nobody used it. Accounting for just 1% of searches, the button just wasnβt worth the millions of dollars in advertising revenue it was losing Google by directing users away from the search results page before they had a chance to see any ads. (You can still use βIβm Feeling Lucky,β but only on desktop, and only if you scroll down past your autocompleted search suggestions.)
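The arithmetic behind that claim is easy to sketch. All figures below are illustrative assumptions for a back-of-envelope estimate, not reported numbers, but they show how a button used on just 1% of searches plausibly adds up to millions in forgone ad revenue:

```python
# Back-of-envelope: the cost of "I'm Feeling Lucky" to Google.
# Every figure here is an assumption chosen for illustration.
daily_searches = 8_500_000_000   # rough order-of-magnitude estimate
revenue_per_search = 0.03        # assumed average ad revenue per search, USD

bypassed = daily_searches // 100            # the 1% that skip the results page
lost_daily = bypassed * revenue_per_search  # ad impressions never shown

print(f"{bypassed:,} searches/day bypass the results page")
print(f"~${lost_daily:,.0f}/day in forgone ad revenue")
```

Even with conservative per-search revenue, 1% of traffic skipping the ads entirely compounds into millions of dollars per day, which is all the motivation a company needs to bury a button.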
Itβs unlikely AI Overviews will go the way of Iβm Feeling Lucky any time soon: the company has spent a lot of money on AI, and βIβm Feeling Luckyβ took until 2010 to die. But at least for now, it seems to have about as much prominence on the site as Googleβs most forgotten feature. That users arenβt responding to these AI-generated options suggests people don't really want Google to do the Googling for them.
In a recent interview on "The Ted AI Show" podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company's internal dynamics and the events surrounding CEO Sam Altman's surprise firing and subsequent rehiring last November.
OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.
"When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter," Toner said on the podcast.
Data security and AI governance company Zendata has emerged from stealth mode with $2 million in seed funding.
The post Zendata Emerges From Stealth With Data Security, AI Governance Solutions appeared first on SecurityWeek.
Source: www.darkreading.com, Author: Dark Reading Staff. OpenAI is forming a safety and security committee led by company directors Bret Taylor, Adam DβAngelo, Nicole Seligman, and CEO Sam Altman. The committee is being formed to make recommendations to the full board on safety [β¦]
The post OpenAI Forms Another Safety Committee After Dismantling Prior Team (Source: www.darkreading.com) appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.
Transcribing isn't fun at all. Good thing it's something AI is actually good at. Aiko is an app for Mac, iPad, and iPhone that uses Whisper, the open-source speech-recognition technology created by OpenAI, to transcribe audio files. Aiko does not upload the file to the cloud to make the transcription; everything happens on your device. And it works fairly quickly, too: I was able to transcribe a half-hour radio drama in just a few minutes.
The application works best on devices with Apple Silicon processors (Intel Macs are technically supported but are extremely slow at transcribing); my 2022 iPhone SE was significantly faster than my 2018 Intel MacBook Pro, which took around three minutes to transcribe 10 seconds of talking. If you have the right hardware, though, this application is just about perfect.
To get started, you need to either point the application toward a file or start recording what you want to transcribe. You can add any audio or video file to the application, which will immediately get started on creating a transcription for you. The recording feature is mostly there for quick notesβthe software advises you to record things using another application first if at all possible. The mobile version can grab audio from the Voice Memos app, which is a nice touch.
The application will show you the text as the transcription happens, meaning you can start reading before the complete transcription is done. The application automatically detects the language being spoken, though you can set a different language in the settings if you prefer. You can even set the application to automatically translate non-English conversation into English, if you want.
It's not a perfect application; there's no way to mark who is speaking in the transcribed text, for example. It works quickly, though, and is completely free, so it's hard to complain too much. This is going to be a go-to tool for me from now on.
Some of the most infamous so-called shadow libraries have increasingly faced legal pressure to either stop pirating books or risk being shut down or driven to the dark web. Among the biggest targets are Z-Library, which the US Department of Justice has charged with criminal copyright infringement, and Library Genesis (Libgen), which was sued by textbook publishers last fall for allegedly distributing digital copies of copyrighted works "on a massive scale in willful violation" of copyright laws.
But now these shadow libraries and others accused of spurning copyrights have seemingly found an unlikely defender in Nvidia, the AI chipmaker among those profiting most from the recent AI boom.
Nvidia seemed to defend the shadow libraries as a valid source of information online when responding to a lawsuit from book authors over the list of data repositories that were scraped to create the Books3 dataset used to train Nvidia's AI platform NeMo.
OpenAI has a new Safety and Security Committee in place less than two weeks after disbanding its "superalignment" team, a year-old unit that was tasked with focusing on the long-term effects of AI.
The post OpenAI Launches Security Committee Amid Ongoing Criticism appeared first on Security Boulevard.
Read more of this story at Slashdot.
As threats increase in sophistication (in many cases powered by GenAI itself), GenAI will play a growing role in combating them.
The post The Rise of Generative AI is Transforming Threat Intelligence – Five Trends to Watch appeared first on Security Boulevard.
Read more of this story at Slashdot.
Javier Milei to hold private talks with Sundar Pichai and Sam Altman as Argentina faces worst economic crisis in decades
Javier Milei, Argentina's president, is set to meet with the leaders of some of the world's largest tech companies in Silicon Valley this week. The far-right libertarian leader will hold private talks with Sundar Pichai of Google, Sam Altman of OpenAI, Mark Zuckerberg of Meta and Tim Cook of Apple.
Milei also met last month with Elon Musk, who has become one of the South American president's most prominent cheerleaders and repeatedly shared his pro-deregulation, anti-social justice message on Twitter. Peter Thiel, the tech billionaire, has also twice visited Milei, flying down to Buenos Aires to speak with him in February and May of this year.
"A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days." The committee is expected to make recommendations to the board within that 90-day window, and OpenAI has committed to publicly releasing the recommendations it adopts in a manner that aligns with safety and security considerations. The establishment of the committee is a significant step by OpenAI to address concerns about AI safety and maintain its leadership in AI innovation. By integrating a diverse group of experts and stakeholders into the decision-making process, OpenAI aims to ensure that safety and security remain paramount as it continues to develop cutting-edge AI technologies.
OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.
The post OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model appeared first on SecurityWeek.
While Red Teams can expose and root out organization specific weaknesses, there is another growing class of vulnerability at an industry level.
The post Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk appeared first on SecurityWeek.
Hot on the heels of Microsoft's Copilot+ PC announcements last week, Google is refreshing Chromebooks with new AI features to match. These include the ability to summon Gemini with a right click, generate AI backgrounds for video calls, and use the same Magic Editor as on Pixel phones.
There are new non-AI features as well, like a GIF recorder and a new Game Dashboard. These are available on standard Chromebooks, while most of the new AI features will be limited to Chromebook Plus models.
Taken together, these new features see Google fulfilling some of the promises it made alongside its first Chromebook Plus rollout in October of last year. But Google still seems to be deferring some rollouts to later in the year, as the company only previewed a selection of its more exciting AI developments, among them a Microsoft Recall-like "Where Was I" screen that pops up every time you open your Chromebook.
There isn't any brand-new chip technology here, like there is with Copilot+ laptops or M-series MacBooks. But since competing devices can cost well above $1,000, Google's promise to sell Chromebook Plus laptops starting at $349 provides a great look at what a low-cost AI computer might look like in 2024, and at whether it lives up to the hype.
In October, Google announced a new certification program for Chromebooks called Chromebook Plus. While Google doesn't make its own Chromebook hardware, Chromebook Plus guarantees a minimum spec loadout and comes with some handy extra features.
For a device to be considered a Chromebook Plus, it must have at least an Intel Core i3 12th Generation or AMD Ryzen 3 5000 CPU, 8GB of RAM or more, 128GB of storage or more, a 1080p IPS display or above, and a 1080p or above webcam with temporal noise reduction (which makes videos appear clearer).
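That certification floor can be expressed as a simple checklist. This is just a sketch of the requirements listed above; the field names and the helper function are my own, not anything Google publishes:

```python
# Chromebook Plus minimums as described above. CPU families are omitted here
# for simplicity; real certification is done by Google, not by a script.
MINIMUMS = {"ram_gb": 8, "storage_gb": 128, "display_rows": 1080, "webcam_rows": 1080}

def meets_chromebook_plus(spec: dict) -> bool:
    """Return True if every spec value meets or exceeds the minimum."""
    return all(spec.get(key, 0) >= floor for key, floor in MINIMUMS.items())

# The HP Chromebook Plus x360 reviewed here: 8GB RAM, 128GB storage, 1080p screen.
hp_x360 = {"ram_gb": 8, "storage_gb": 128, "display_rows": 1080, "webcam_rows": 1080}
print(meets_chromebook_plus(hp_x360))  # True

# A hypothetical 4GB-RAM budget Chromebook falls short.
budget = {"ram_gb": 4, "storage_gb": 64, "display_rows": 768, "webcam_rows": 720}
print(meets_chromebook_plus(budget))   # False
```

The point of the floor, as Google frames it, is that every certified machine has enough headroom to run the features described below.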
This guarantees a certain level of performance, which Google says enables it to turn on features like Magic Eraser, which debuted on Pixel phones. Chromebook Plus users can also blur their backgrounds in video calls or use audio noise cancellation at the OS level, allowing them to tune up their video even in apps that don't support it. These were the only AI features on Chromebook Plus devices at launch, which left a lot of promises to fulfill.
The minimum requirements for Chromebook Plus devices haven't changed, which means today's update is mostly a feature drop. But there are also several new or updated devices on the way, including convertibles (laptops that become tablets). Some of these go above and beyond Google's minimums, but perhaps the biggest news here is that the cheapest option is now $349, which drops the starting price for Chromebook Plus devices down from $399.
I'll be focusing on ChromeOS updates for most of this article, but all of my testing was done on the new HP Chromebook Plus x360, a $429 convertible laptop with 8GB of RAM, 128GB of storage, an Intel Core i3 processor, and a 14-inch 1080p touchscreen.
The most prominent addition to Chromebook Plus is Gemini integration, both in the app shelf (Google's name for the taskbar) and when you right click. Unfortunately, as with Gemini on the Pixel 8a, it's somewhat of a parlor trick. Clicking the Gemini icon in the app shelf simply opens a Chrome tab for Gemini's web app, and won't work without an internet connection. Once in the web app, Gemini functions as usual, meaning it won't be able to help you adjust your Chromebook's settings the way Microsoft Copilot can with Windows.
To help alleviate any disappointment, and probably to sell future subscriptions, Google is giving all new Chromebook Plus owners a year of the Google One AI Premium plan free with their purchases, meaning they'll be able to use Gemini Advanced to access the chatbot's latest large language models.
There is one substantial feature here that genuinely changes how you use Gemini, but it's pretty limited for now. "Help me write" allows users to select text, right-click it, and choose to have Gemini shorten, elaborate on, insert emojis into, or rewrite it using a specific prompt. It's nothing the chatbot couldn't do before, but the convenience of putting these options on a right click makes it feel like the next evolution of copy-paste. The catch is that it only works on social media sites for now. While I was able to get writing help on X (formerly Twitter) and LinkedIn, the option wouldn't show up in Gmail or Google Docs. It's unclear whether that will change in the future, but Google says that "websites that offer a separate right-click menu" are not compatible with Help me write.
None of the AI here works on-device, so you'll need to be connected to the internet to try it out.
Less prominent but more useful than Chromebook Plus's current Gemini integration is full Magic Editor access, something Google promised would come when it initially launched the Chromebook Plus program. You'll actually need to install this to use it, but getting it set up is as simple as opening an image in Google Photos and clicking the glowing Magic Editor button.
Installation doesn't take long, and the resulting process is about as smooth as on a Pixel phone. You'll back up your image, then be prompted to tap, brush, or circle the parts of the photo you want to edit. Once selected, you can delete, resize, or move your selected element, and generative AI will fill in any gaps you leave in the process.
Unfortunately, the results are about as good as on Pixel phones, too. Backgrounds are blurry, and generated elements might blend together with little rhyme or reason. It's fun for a gag, or maybe if you really hate an ex and want them out of your selfie, but it's not going to replace Photoshop anytime soon. And while it's a unique function that isn't just a shortcut to the web, it also needs an internet connection to work.
Another promise Google made upon launching Chromebook Plus was the ability to create custom, AI-generated wallpapers and video call backgrounds. This is finally here, but the implementation is seriously limited compared to expectations.
When I demoed a pre-release version of the feature at a Google event last year, I was able to generate imagery using any prompt I wanted. The results weren't always beautiful, but the freedom was fun, and it gave Google's generative AI a unique edge over just picking something off Google Images.
Now, users can only make prompts by selecting from a list of pre-approved words. For instance, if you want to make a wallpaper with a fruit theme, you could pick a color, a fruit, and a background color from a list, but you couldn't ask for a background of "three bananas with googly eyes wearing astronaut helmets."
The results are now more consistent, but also so constrained and generic that there's little reason to use these backgrounds over more traditional, handcrafted ones. The reason I even suggested a "fruit theme" above is that more imaginative options are off-limits. If you're planning to use an AI background, I hope you like landscapes, letters, and foodstuffs.
Like Magic Eraser and Gemini, you'll need internet access for this as well.
Again, Google has big plans for Chromebook Plus down the line. The company says it's working on a "Help me read" feature that will allow Gemini to summarize text from web pages or PDFs on a right click, and answer follow-up questions. Again, this is nothing the chatbot can't do now, but putting it on a right click could be a great way to get people to actually use the AI, as it'll be integrated into their current workflows.
There are also accessibility utilities in the works that could prove to be a genuine game changer for those who need them, and possibly even those who don't. The idea is to bake Project Gameface, which is currently available on Android, directly into ChromeOS. Chromebook users, whether on a Plus or a standard model, could then control their mouse, keyboard, and other input devices by smiling, blinking, or performing other gestures. It all sounds very cool, but it's a bit disappointing that we're this far out from the Chromebook Plus launch and most of the promised AI utility that's meant to help bridge the gap between a Chromebook and a more traditional laptop is still just a novelty.
What might help Google is the eventual launch of "Where Was I," which sounds like a stripped-down version of Microsoft's new Recall feature. It'd be great to see this go live now, to compete more directly with Microsoft, because it seems like a genuine compromise between Recall's promises and its security concerns. Like Recall, Where Was I will remind users what they were up to upon returning to their Chromebook Plus, and even give them buttons to resume certain tasks. Unlike Recall, it won't take a screenshot every few seconds. Instead, the computer will simply take note of which tabs and programs you had open when it goes to sleep, and it can even port over suggestions from connected phones, like articles you might have started reading on mobile.
For some users, this will just be another screen to dismiss before getting started on work, but for others, it will provide some useful shortcuts that, while not as powerful as Recall, pose much less of a security risk.
Google says these updates will roll out "in the coming year," but dedicated users might eventually be able to test them out early via Chrome flags (I couldn't access them in my testing period).
Given the limited nature of what's going live today and the somewhat shaky reputation Google AI has earned since being widely implemented into Search, Chromebook's non-AI upgrades might be the most exciting announcements to come out of today's news, even if they're not front-and-center in Google's messaging. The best part? They're on all Chromebooks, not just Chromebook Plus models.
Maybe the most convenient of these is the ability to record a GIF when using the screen capture tool. Simply press the screen capture button (or use the Ctrl + Shift + Show Windows or Ctrl + Shift + F5 shortcuts), click the video icon, then select "Record GIF" from the dropdown menu.
Depending on the file size, the compression might not always be great (I tested the feature on about 10 seconds of anime footage and got plenty of strange artifacts), but for shorter and more casual social media reactions, it should prove more convenient than capturing a video file and converting it to a GIF.
Also convenient is the new Game Dashboard, which gives users access to typical screenshot functions but also comes with a key mapper for touch-based Android games. This will make it far easier to play games like Genshin Impact on a Chromebook, since you'll be able to assign a game's touch controls to keyboard buttons and mouse inputs. Chromebook Plus users will also be able to capture videos of their gameplay with an included face-cam, although oddly enough, the only way to disable the face-cam is to turn off webcam input altogether.
In a move toward seamlessness, you'll also now be able to set up your Chromebook using a QR code and an Android phone, which definitely made the process simpler for me, since my Google password is on the long end. Similarly, you can now access your Google Tasks right from the date display in your Chromebook's bottom-right corner.
With a price drop and a few extra AI conveniences, Google's updated Chromebook Plus program does a decent job of using the cloud to make up for lower hardware performance. But as a proper AI computer, Chromebook Plus is clearly still developing. The AI features here aren't anything you couldn't get elsewhere, largely for free, so there's not much incentive to upgrade, especially if you already own a regular Chromebook. In fact, it's pretty disappointing that Google is locking so many features behind its Chromebook Plus banner. With so much being powered by the cloud, any device with an internet connection could conceivably run them. For the most part, they even still can; they'll just need to navigate to the Gemini web page first, instead of having AI on a right click.
That AI-on-a-right-click promise is tantalizing, though, which means Chromebook Plus is worth paying attention to as Google develops its Help me write and Help me read features. If AI is to take off, it needs to work its way into regular consumer habits, and making it readily available when you go to copy and paste is a smart move on Google's part.
Alongside Google's feature announcements, a number of updated Chromebook Plus models are now joining the market, including the following:
$699: Acer Chromebook Plus Spin 714, with a 14-inch 1,920 x 1,200 convertible touchscreen, an Intel Core Ultra 5 processor, 8GB RAM, 256GB storage
$649: Acer Chromebook Plus 516 GE, with a 16-inch 2,560 x 1,600 120Hz screen, an Intel Core 5 processor, 8GB RAM, 256GB storage
$499: Asus Chromebook Plus CX24, with a 14-inch 1,920 x 1,080 screen, a 13th Gen Intel Core i5 processor, 8GB RAM, 128GB storage
$429: HP Chromebook Plus x360, with a 14-inch 1,920 x 1,080 convertible touchscreen, a 13th Gen Intel Core i3 processor, 8GB RAM, 128GB storage
$349: Acer Chromebook Plus 514, with a 14-inch 1,920 x 1,080 screen, a 13th Gen Intel Core i3 processor, 8GB RAM, 512GB storage
On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to two weeks of public setbacks for the company.
Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).
Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.
Funding round values artificial intelligence startup at $18bn before investment, says multibillionaire
Elon Musk's artificial intelligence company xAI has closed a $6bn (£4.7bn) investment round that will make it among the best-funded challengers to OpenAI.
The startup is only a year old, but it has rapidly built its own large language model (LLM), the technology underpinning many of the recent advances in generative artificial intelligence capable of creating human-like text, pictures, video, and voices.
Episode 331 of the Shared Security Podcast discusses privacy and security concerns related to two major technological developments: the introduction of Windows PCs' new "Recall" feature, part of Microsoft's Copilot+, which captures desktop screenshots for AI-powered search tools, and Slack's policy of using user data to train machine learning features with users opted in by […]
The post Microsoft's Copilot+ Recall Feature, Slack's AI Training Controversy appeared first on Shared Security Podcast.
The post Microsoft's Copilot+ Recall Feature, Slack's AI Training Controversy appeared first on Security Boulevard.
OpenAI's unsubtle approximation of the actor's voice for its new GPT-4o software was a stark illustration of the firm's high-handed attitude
On Monday 13 May, OpenAI livestreamed an event to launch a fancy new product, a large language model (LLM) dubbed GPT-4o, that the company's chief technology officer, Mira Murati, claimed to be more user-friendly and faster than boring ol' ChatGPT. It was also more versatile and multimodal, which is tech-speak for being able to interact in voice, text, and vision. Key features of the new model, we were told, were that you could interrupt it in mid-sentence, that it had very low latency (delay in responding), and that it was sensitive to the user's emotions.
Viewers were then treated to the customary toe-curling spectacle of "Mark and Barret," a brace of tech bros straight out of central casting, interacting with the machine. First off, Mark confessed to being nervous, so the machine helped him do some breathing exercises to calm his nerves. Then Barret wrote a simple equation on a piece of paper and the machine showed him how to find the value of x, after which he showed it a piece of computer code and the machine was able to deal with that too.
Source: www.govinfosecurity.com, by Mathew J. Schwartz (euroinfosec), May 21, 2024. Actor Said She Firmly Declined Offer From AI Firm to Serve as Voice of GPT-4o. (Image: Scarlett Johansson, by Gage Skidmore, via Flickr/CC) Imagine these optics: A man asks a […]