McDonald's Pauses AI-Powered Drive-Thru Voice Orders
Read more of this story at Slashdot.
SecurityWeek’s AI Risk Summit + CISO Forum bring together business and government stakeholders to provide meaningful guidance on risk management and cybersecurity in the age of artificial intelligence.
The post Tech Leaders to Gather for AI Risk Summit at the Ritz-Carlton, Half Moon Bay June 25-26, 2024 appeared first on SecurityWeek.
The US cybersecurity agency CISA has conducted a tabletop exercise with the private sector focused on AI cyber incident response.
The post CISA Conducts First AI Cyber Incident Response Exercise appeared first on SecurityWeek.
Aim Security has raised a total of $28 million to date and is on a mission to help companies to implement AI products with confidence.
The post Aim Security Raises $18M to Secure Customers’ Implementation of AI Apps appeared first on SecurityWeek.
Authorities caught up with woman alleged to have sailed dinghy to off-limits shore after she posted videos about it
A Dubai-based influencer has been fined €1,800 for trespassing on an off-limits pink-tinged beach in Sardinia before sharing a series of video clips and photos of her escapade on social media.
The woman arrived by dinghy on the shore of Spiaggia Rosa, a beach famous for its pink sand on the tiny Sardinian island of Budelli, allegedly ignoring all the prohibition signs, according to reports in the Italian press.
A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk's diversion of resources to his xAI venture. The diversion of resources includes hiring AI employees away from Tesla, diverting microchips from Tesla to X (formerly Twitter) and xAI, and "xAI's use of Tesla's data to develop xAI's own software/hardware, all without compensation to Tesla," the lawsuit said.
The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk's equity stake in xAI to Tesla.
"Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not," the lawsuit says.
It’s been a tough few weeks for Microsoft’s headlining Copilot+ feature, and it hasn’t even launched yet. After being called out over security concerns and switched from on-by-default to opt-in, Recall is now being outright delayed.
In a blog post on the Windows website on Thursday, Windows and Devices corporate vice president Pavan Davuluri wrote that Recall will no longer launch with Copilot+ AI laptops on June 18th, and is instead being relegated to a Windows Insider preview “in the coming weeks.”
“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider Community to ensure the experience meets our high standards for quality and security,” Davuluri explained.
That’s a big blow for Microsoft, as Recall was supposed to be the star feature for its big push into AI laptops. The idea was for it to act like a sort of rewind button for your PC, taking constant screenshots and allowing you to search through previous activity to get caught up on anything you did in the past, from reviewing your browsing habits to tracking down old school notes. But the feature also raised concerns over who has access to that data.
Davuluri explains in his post that screenshots are stored locally and that Recall does not send snapshots to Microsoft. He also says that snapshots have “per-user encryption” that keeps administrators and others logged into the same device from viewing them.
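Microsoft hasn’t detailed how that per-user encryption works, but the idea can be illustrated with a minimal key-derivation sketch: each user’s key is derived from their own credential, so another account on the same machine (even an administrator) cannot recompute it. This is an assumption-laden illustration, not Microsoft’s actual implementation; the salt handling and iteration count here are hypothetical.

```python
import hashlib

def derive_user_key(username: str, secret: str, salt: bytes) -> bytes:
    # Derive a per-user key from that user's own credential.
    # Without the credential, other accounts cannot recompute the key,
    # so snapshots encrypted under it stay unreadable to them.
    return hashlib.pbkdf2_hmac(
        "sha256", (username + ":" + secret).encode(), salt, 100_000
    )

# Hypothetical salt; a real system would bind keys to hardware (e.g. a TPM).
salt = b"device-bound-salt"
alice_key = derive_user_key("alice", "hunter2", salt)
admin_key = derive_user_key("admin", "hunter2", salt)
assert alice_key != admin_key  # different users derive different keys
```

The same principle is why Davuluri can claim administrators “logged into the same device” can’t view another user’s snapshots: access control comes from key material, not just file permissions.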
At the same time, security researchers have been able to uncover and extract the text file that a pre-release version of Recall uses for storage, which they claimed was unencrypted. This puts things like passwords and financial information at risk of being stolen by hackers, or even just a nosy roommate.
Davuluri wasn’t specific about when exactly Windows Insiders would get their hands on Recall, but he thanked the community for sending a “clear signal” that Microsoft needed to do more. Specifically, he credited that feedback for the decisions to disable Recall by default and to require Windows Hello (which takes either biometric identification or a PIN) before users can access it.
Viewed generously, limiting access to the Windows Insider program, which anyone can join for free, gives Microsoft more time to collect and weigh this kind of feedback. But it also takes the wind out of Copilot+’s sails just a week before launch, leaving the base experience nearly identical to current versions of Windows (outside of a few creative apps).
It also puts Qualcomm, which will be providing the chips for Microsoft’s first Copilot+ PCs, on a more even playing field with AMD and Intel, which won’t get Copilot+ features until later this year.
Copilot Plus? More like Copilot Minus: Redmond realizes Recall requires radical rethink.
The post Recall ‘Delayed Indefinitely’ — Microsoft Privacy Disaster is Cut from Copilot+ PCs appeared first on Security Boulevard.
Retired U.S. Army General Paul M. Nakasone brings cybersecurity experience to OpenAI's Board of Directors and Safety and Security Committee.
The post OpenAI Appoints Former NSA Director Paul Nakasone to Board of Directors appeared first on SecurityWeek.
Microsoft is not rolling out Recall with Copilot+ PCs as it’s seeking additional feedback and working on improving security.
The post Microsoft Delaying Recall Feature to Improve Security appeared first on SecurityWeek.
Whether it be purely text-based social engineering, or advanced, image-based attacks, one thing's for certain — generative AI is fueling a whole new age of advanced phishing.
The post The “Spammification” of Business Email Compromise Spells Trouble for Businesses Around the Globe appeared first on Security Boulevard.
Microsoft will be delaying its controversial Recall feature again, according to an updated blog post by Windows and Devices VP Pavan Davuluri. And when the feature does return "in the coming weeks," Davuluri writes, it will be as a preview available to PCs in the Windows Insider Program, the same public testing and validation pipeline that all other Windows features usually go through before being released to the general populace.
Recall is a new Windows 11 AI feature that will be available on PCs that meet the company's requirements for its "Copilot+ PC" program. Copilot+ PCs need at least 16GB of RAM, 256GB of storage, and a neural processing unit (NPU) capable of at least 40 trillion operations per second (TOPS). The first (and for a few months, only) PCs that will meet this requirement are all using Qualcomm's Snapdragon X Plus and X Elite Arm chips, with compatible Intel and AMD processors following later this year. Copilot+ PCs ship with other generative AI features, too, but Recall's widely publicized security problems have sucked most of the oxygen out of the room so far.
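The Copilot+ bar described above (at least 16GB of RAM, 256GB of storage, and a 40-TOPS NPU) amounts to a simple threshold check. The sketch below is purely illustrative, not Microsoft code; the `PCSpec` type and the example machines are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class PCSpec:
    ram_gb: int
    storage_gb: int
    npu_tops: float

def meets_copilot_plus(spec: PCSpec) -> bool:
    # Thresholds from Microsoft's stated Copilot+ PC requirements.
    return (
        spec.ram_gb >= 16
        and spec.storage_gb >= 256
        and spec.npu_tops >= 40
    )

# A Snapdragon X Elite-class machine clears the bar...
print(meets_copilot_plus(PCSpec(ram_gb=16, storage_gb=256, npu_tops=45)))  # True
# ...while a laptop with an older, ~11-TOPS NPU does not, whatever its storage.
print(meets_copilot_plus(PCSpec(ram_gb=16, storage_gb=512, npu_tops=11)))  # False
```

The NPU threshold is the binding constraint in practice: plenty of existing laptops have the RAM and storage, but only the new Snapdragon X chips (and later Intel/AMD parts) reach 40 TOPS.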
The Windows Insider preview of Recall will still require a PC that meets the Copilot+ requirements, though third-party scripts may be able to turn on Recall for PCs without the necessary hardware. We'll know more when Recall makes its reappearance.
A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.
The 1839 Awards launched last year as a way to "honor photography as an art form," with a panel of experienced judges who work with photos at The New York Times, Christie's, and Getty Images, among others. The contest rules sought to segregate AI images into their own category, separating the output of increasingly impressive image generators from the work of "those who use the camera as their artistic medium," as the 1839 Awards site puts it.
For the non-AI categories, the 1839 Awards rules note that they "reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files." Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.
Pyte has raised $5 million for its secure computation platform, bringing the total investment in the company to $12 million.
The post Pyte Raises $5 Million for Secure Data Collaboration Solutions appeared first on SecurityWeek.
Protect AI warns of a dozen critical vulnerabilities in open source AI/ML tools reported via its bug bounty program.
The post Easily Exploitable Critical Vulnerabilities Found in Open Source AI/ML Tools appeared first on SecurityWeek.
The post Will AI Take Over Cybersecurity Jobs? appeared first on AI Enabled Security Automation.
The post Will AI Take Over Cybersecurity Jobs? appeared first on Security Boulevard.
On Monday, Apple announced it would be integrating OpenAI's ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google's multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT's placement on its devices as compensation enough.
"Apple isn’t paying OpenAI as part of the partnership," writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. "Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments."
The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT's capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.
An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.
According to a press release from the Evansville Police Department, this was a clear "misuse" of Clearview AI's controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.
To help identify suspects, police can scan what Clearview AI describes on its website as "the world's largest facial recognition network." The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.
Victor Miller is running for mayor of Cheyenne, Wyoming, with an unusual campaign promise: If elected, he will not be calling the shots—an AI bot will. VIC, the Virtual Integrated Citizen, is a ChatGPT-based chatbot that Miller created. And Miller says the bot has better ideas—and a better grasp of the law—than many people currently serving in government.
“I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,” he says. According to Miller, VIC will make the decisions, and Miller will be its “meat puppet,” attending meetings, signing documents, and otherwise doing the corporeal job of running the city.
But whether VIC—and Victor—will be allowed to run at all is still an open question.
On Saturday, Turkish police arrested and detained a prospective university student accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.
The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person's eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.
According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a "router" (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.
On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.
A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]," details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies.
Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently several image-synthesis models seem to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SDXL Turbo in November.
One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.
Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers.
Skeptics have long assumed that car companies had at least some plan to monetize the rich telematics data regularly sent from cars back to their manufacturers. But a concrete example of this was reported by The New York Times' Kashmir Hill, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.
At the RSA Conference last month, Netcraft introduced a generative AI-powered platform designed to interact with cybercriminals to gain insights into the operations of the conversational scams they’re running and disrupt their attacks. At the time, Ryan Woodley, CEO of the London-based company that offers a range of services from phishing detection to brand, domain,..
The post Netcraft Uses Its AI Platform to Trick and Track Online Scammers appeared first on Security Boulevard.
In the rapidly evolving landscape of software as a service (SaaS), the security of applications has never been more critical.
The post Elevating SaaS App Security in an AI-Driven Era appeared first on Security Boulevard.
The AI’s errors can still be comical and catastrophic. Do we really want this technology to be in so many pockets?
Tech watchers and nerds like me get excited by tools such as ChatGPT. They look set to improve our lives in many ways – and hopefully augment our jobs rather than replace them.
But in general, the public hasn’t been so enamoured of the AI “revolution”. Make no mistake: artificial intelligence will have a transformative effect on how we live and work – it is already being used to draft legal letters and analyse lung-cancer scans. ChatGPT was also the fastest-growing app in history after it was released. That said, four in 10 Britons haven’t heard of ChatGPT, according to a recent survey by the University of Oxford, and only 9% use it weekly or more frequently.
Chris Stokel-Walker is the author of How AI Ate the World, which was published last month
Apple maintains its in-house AI is made with security in mind, but some professionals say ‘it remains to be seen’
At its annual developers conference on Monday, Apple announced its long-awaited artificial intelligence system, Apple Intelligence, which will customize user experiences, automate tasks and – the CEO Tim Cook promised – will usher in a “new standard for privacy in AI”.
While Apple maintains its in-house AI is made with security in mind, its partnership with OpenAI has sparked plenty of criticism. OpenAI tool ChatGPT has long been the subject of privacy concerns. Launched in November 2022, it collected user data without explicit consent to train its models, and only began to allow users to opt out of such data collection in April 2023.
Despite explaining away issues with its AI Overviews while promising to make them better, Google is still apparently telling people to put glue in their pizza. And in fact, articles like this are only making the situation worse.
When they launched to everyone in the U.S. shortly after Google I/O, AI Overviews immediately became the laughing stock of search, telling people to eat rocks, use butt plugs while squatting, and, perhaps most famously, to add glue to their homemade pizza.
Most of these offending answers were quickly scrubbed from the web, and Google issued a somewhat defensive apology. Unfortunately, if you use the right phrasing, you can reportedly still get these blatantly incorrect "answers" to pop up.
In a post on June 11, Bluesky user Colin McMillen said he was still able to get AI Overviews to tell him to add “1/8 cup, or 2 tablespoons, of white, nontoxic glue to pizza sauce” when asking “how much glue to add to pizza.”
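For what it’s worth, the quantity in the AI Overview’s answer at least converts correctly: in US customary units, 1 cup is 16 tablespoons, so 1/8 cup is indeed 2 tablespoons. A quick check (the glue advice itself, of course, remains dangerously wrong):

```python
# US customary volume: 1 cup = 16 tablespoons.
TBSP_PER_CUP = 16

def cups_to_tablespoons(cups: float) -> float:
    return cups * TBSP_PER_CUP

# The Overview's "1/8 cup, or 2 tablespoons" is internally consistent.
assert cups_to_tablespoons(1 / 8) == 2.0
```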
The question seems purposefully designed to mess with AI Overviews, sure—although given the recent discourse, a well-meaning person who’s not so terminally online might legitimately be curious what all the hubbub is about. At any rate, Google did promise to address even leading questions like these (as it probably doesn’t want its AI to appear to be endorsing anything that could make people sick), and it clearly hasn’t.
Perhaps more frustrating is the fact that Google’s AI Overview sourced the recent pizza claim to Katie Notopoulos of Business Insider, who most certainly did not tell people to put glue in their pizza. Rather, Notopoulos was reporting on AI Overviews’ initial mistake; Google’s AI simply attributed that mistake to her because she covered it.
“Google’s AI is eating itself already,” McMillen said, in response to the situation.
I wasn’t able to reproduce the response myself, but The Verge did, though with different wording: The AI Overview still cited Business Insider, but rightly attributed the initial advice to Google’s own AI. Which means Google AI’s source for its ongoing hallucination is...itself.
What’s likely going on here is that Google stopped its AI from using sarcastic Reddit posts as sources, but it’s now turning to news articles reporting on its mistakes to fill in the gaps. In other words, as Google messes up, and as people report on it, Google will then use that reporting to back its initial claims. The Verge compared it to Google bombing, an old tactic where people would link the words “miserable failure” to a photo of George W. Bush so often that Google images would return a photo of the president when you searched for the phrase.
Google is likely to fix this latest AI hiccup soon, but it’s all a bit of a “laying the train tracks as you go” situation, and certainly not likely to do anything to improve AI search’s reputation.
Anyway, just in case Google attaches my name to a future AI Overview as a source, I want to make it clear: Do not put glue in your pizza (and leave out the pineapple while you’re at it).
AI models are nothing without vast data sets to train them and vendors will be increasingly tempted to harvest as much data as they can and answer any questions later.
The post When Vendors Overstep – Identifying the AI You Don’t Need appeared first on SecurityWeek.
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."

However, despite Apple's assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft's Recall. In reaction to Apple's announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an "unacceptable security violation." Others have also raised questions about the information that might be sent to OpenAI.

According to Apple's statements, requests made on its devices are not stored by OpenAI, and users' IP addresses are obscured. Apple stated that it would also add "support for other AI models in the future."

Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information: "Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple's traditional inclination toward offering polished products that it has full control over."

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.