McDonald's Pauses AI-Powered Drive-Thru Voice Orders

After two years of testing, McDonald's has ended its use of AI-powered drive-thru ordering. "The company was trialing IBM tech at more than 100 of its restaurants but it will remove those systems from all locations by the end of July, meaning that customers will once again be placing orders with a human instead of a computer," reports Engadget. From the report: As part of that decision, McDonald's is ending its automated order taking (AOT) partnership with IBM. However, McDonald's may be considering other potential partners to work with on future AOT efforts. "While there have been successes to date, we feel there is an opportunity to explore voice ordering solutions more broadly," Mason Smoot, chief restaurant officer for McDonald's USA, said in an email to franchisees that was obtained by trade publication Restaurant Business (as noted by PC Mag). Smoot added that the company would look into other options and make "an informed decision on a future voice ordering solution by the end of the year," noting that "IBM has given us confidence that a voice ordering solution for drive-thru will be part of our restaurant's future." McDonald's told Restaurant Business that the goal of the test was to determine whether AOT could speed up service and streamline operations. By automating drive-thru orders, companies hope to eliminate the need for a staff member to take them, and either reduce the number of workers needed to operate a restaurant or redeploy resources to other areas of the business. IBM will continue to power other McDonald's systems, and it's in talks with other fast-food chains over the use of its AOT tech. The likes of Hardee's, Carl's Jr., Krystal, Wendy's, Dunkin' and Taco John's are already testing or using such technology at their drive-thru locations.

Read more of this story at Slashdot.

Amazon-Powered AI Cameras Used To Detect Emotions of Unwitting UK Train Passengers

Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. Wired: The image recognition system was used to predict travelers' age, gender, and potential emotions -- with the suggestion that the data could be used in advertising systems in the future. During the past two years, eight train stations around the UK -- including large stations such as London's Euston and Waterloo and Manchester Piccadilly, as well as smaller stations -- have tested AI surveillance technology with CCTV cameras, with the aim of alerting staff to safety incidents and potentially reducing certain types of crime. The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition -- a type of machine learning that can identify items in video feeds -- to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior ("running, shouting, skateboarding, smoking"), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow. The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained in response to a freedom of information request by civil liberties group Big Brother Watch. "The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," says Jake Hurfurt, the head of research and investigations at the group.

Read more of this story at Slashdot.

AI in Finance is Like 'Moving From Typewriters To Word Processors'

The accounting and finance professions have long adapted to technology -- from calculators and spreadsheets to cloud computing. However, the emergence of generative AI presents both new challenges and opportunities for students looking to get ahead in the world of finance. From a report: Research last year by investment bank Evercore and Visionary Future, which incubates new ventures, highlights the workforce disruption being wreaked by generative AI. Analysing 160mn US jobs, the study reveals that service sectors such as legal and financial are highly susceptible to disruption by AI, although full job replacement is unlikely. Instead, generative AI is expected to enhance productivity, the research concludes, particularly for those in high-value roles paying above $100,000 annually. But, for current students and graduates earning below this threshold, the challenge will be navigating these changes and identifying the skills that will be in demand in future. Generative AI is being swiftly integrated into finance and accounting, by automating specific tasks. Stuart Tait, chief technology officer for tax and legal at KPMG UK, describes it as a "game changer for tax," because it is capable of handling complex tasks beyond routine automation. "Gen AI for tax research and technical analysis will give an efficiency gain akin to moving from typewriters to word processors," he says. The tools can answer tax queries within minutes, with more than 95 per cent accuracy, Tait says.

Read more of this story at Slashdot.

AI Researcher Warns Data Science Could Face a Reproducibility Crisis

Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...? Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices, compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- An ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.

What can you do?

- Hold your data scientists accountable using science.
- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about an algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to the same standards as other software engineering specialties, with code review, code documentation, and architectural design.

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
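For readers who want a concrete handle on the metrics named above, here is a minimal sketch, using scikit-learn and made-up toy labels, of how accuracy, FPR, and FNR fall out of a binary confusion matrix:

```python
# Toy labels for illustration only -- not from any real model.
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [0, 0, 0, 1, 1, 1, 1, 0]   # ground truth
y_pred = [0, 1, 0, 1, 0, 1, 1, 0]   # model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
fpr = fp / (fp + tn)  # false positive rate: false alarms among true negatives
fnr = fn / (fn + tp)  # false negative rate: misses among true positives
print(f"accuracy={accuracy:.2f} FPR={fpr:.2f} FNR={fnr:.2f}")
# -> accuracy=0.75 FPR=0.25 FNR=0.25
```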

Read more of this story at Slashdot.

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough

The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...." In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act last year, but it won't fully go into force for another two years.

Read more of this story at Slashdot.

OpenAI CEO Says Company Could Become a For-Profit Corporation Like xAI, Anthropic

Wednesday The Information reported that OpenAI had doubled its annualized revenue — a measure of the previous month's revenue multiplied by 12 — in the last six months. It's now $3.4 billion (which is up from around $1 billion last summer, notes Engadget). And now an anonymous reader shares a new report from The Information: OpenAI CEO Sam Altman recently told some shareholders that the artificial intelligence developer is considering changing its governance structure to a for-profit business that OpenAI's nonprofit board doesn't control, according to a person who heard the comments. One scenario Altman said the board is considering is a for-profit benefit corporation, which rivals such as Anthropic and xAI are using, this person said. Such a change could open the door to an eventual initial public offering of OpenAI, which currently sports a private valuation of $86 billion, and may give Altman an opportunity to take a stake in the fast-growing company, a move some investors have been pushing. More from Reuters: The restructuring discussions are fluid and Altman and his fellow directors could ultimately decide to take a different approach, The Information added. In response to Reuters' queries about the report, OpenAI said: "We remain focused on building AI that benefits everyone. The nonprofit is core to our mission and will continue to exist." Is that a classic non-denial denial? Note that the nonprofit's "continuing to exist" does not in any way preclude OpenAI from becoming a for-profit business — with a spin-off nonprofit, continuing to exist...

Read more of this story at Slashdot.

An AI-Generated Candidate Wants to Run For Mayor in Wyoming

An anonymous reader shared this report from Futurism: An AI chatbot named VIC, or Virtually Integrated Citizen, is trying to make it onto the ballot in this year's mayoral election for Wyoming's capital city of Cheyenne. But as reported by Wired, Wyoming's secretary of state is battling against VIC's legitimacy as a candidate — and now, an investigation is underway. According to Wired, VIC, which was built on OpenAI's GPT-4 and trained on thousands of documents gleaned from Cheyenne council meetings, was created by Cheyenne resident and library worker Victor Miller. Should VIC win, Miller told Wired that he'll serve as the bot's "meat puppet," operating the AI but allowing it to make decisions for the capital city.... "My campaign promise," Miller told Wired, "is he's going to do 100 percent of the voting on these big, thick documents that I'm not going to read and that I don't think people in there right now are reading...." Unfortunately for the AI and its — his? — meat puppet, however, they've already made some political enemies, most notably Wyoming Secretary of State Chuck Gray. As Gray, who has challenged the legality of the bot, told Wired in a statement, all mayoral candidates need to meet the requirements of a "qualified elector." This "necessitates being a real person," Gray argues... Per Wired, it's also run afoul of OpenAI, which says the AI violates the company's "policies against political campaigning." (Miller told Wired that he'll move VIC to Meta's open-source Llama 3 model if need be, which seems a bit like VIC will turn into a different candidate entirely.) The Wyoming Tribune Eagle offers more details: [H]is dad helped him design the best system for VIC. Using his $20-a-month ChatGPT subscription, Miller had an 8,000-character limit to feed VIC supporting documents that would make it an effective mayoral candidate... While on the phone with Miller, the Wyoming Tribune Eagle also interviewed VIC itself. When asked whether AI technology is better suited for elected office than humans, VIC said a hybrid solution is the best approach. "As an AI, I bring unique strengths to the role, such as impartial decision-making, data-driven policies and the ability to analyze information rapidly and accurately," VIC said. "However, it's important to recognize the value of human experience and empathy and leadership. So ideally, an AI and human partnership would be the most beneficial for Cheyenne...." The artificial intelligence said this unique approach could pave a new pathway for the integration of human leadership and advanced technology in politics.

Read more of this story at Slashdot.

Dubai influencer fined €1,800 for trespassing on Sardinia’s pink beach

Authorities caught up with woman alleged to have sailed dinghy to off-limits shore after she posted videos about it

A Dubai-based influencer has been fined €1,800 for trespassing on an off-limits pink-tinged beach in Sardinia before sharing a series of video clips and photos of her escapade on social media.

The woman arrived by dinghy on the shore of Spiaggia Rosa, a beach famous for its pink sand on the tiny Sardinian island of Budelli, allegedly ignoring all the prohibition signs, according to reports in the Italian press.

Continue reading...

GPT-4 Has Passed the Turing Test, Researchers Claim

Drew Turney reports via Live Science: The "Turing test," first proposed as "the imitation game" by computer scientist Alan Turing in 1950, judges whether a machine's ability to show intelligence is indistinguishable from a human. For a machine to pass the Turing test, it must be able to talk to somebody and fool them into thinking it is human. Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes -- after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time. ELIZA, a system pre-programmed with responses but with no large language model (LLM) or neural network architecture, was judged to be human just 22% of the time. GPT-3.5 scored 50% while the human participant scored 67%. "Machines can confabulate, mashing together plausible ex-post-facto justifications for things, as humans do," Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. "They can be subject to cognitive biases, bamboozled and manipulated, and are becoming increasingly deceptive. All these elements mean human-like foibles and quirks are being expressed in AI systems, which makes them more human-like than previous approaches that had little more than a list of canned responses." Further reading: 1960s Chatbot ELIZA Beat OpenAI's GPT-3.5 In a Recent Turing Test Study
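The summary doesn't give per-condition sample sizes, so as a rough, purely illustrative check of how such a pass rate compares with the 50% coin-flip baseline, here is a normal-approximation confidence interval under an assumed even split of the 500 participants across the four respondent types:

```python
# Hypothetical n: assumes the 500 participants split evenly across the four
# respondents (human, ELIZA, GPT-3.5, GPT-4); the paper's actual counts may differ.
from math import sqrt

n = 125        # assumed number of GPT-4 judgments
p_hat = 0.54   # share judged "human" (from the article)

se = sqrt(p_hat * (1 - p_hat) / n)         # standard error of a proportion
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI: [{low:.3f}, {high:.3f}]")  # ~[0.453, 0.627]
# The interval straddles 0.50 -- chance level, which is exactly the
# "indistinguishable from human" bar a Turing test asks about.
```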

Read more of this story at Slashdot.

Tesla investors sue Elon Musk for diverting carmaker’s resources to xAI

A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk's diversion of resources to his xAI venture. The diversion of resources includes hiring AI employees away from Tesla, diverting microchips from Tesla to X (formerly Twitter) and xAI, and "xAI's use of Tesla's data to develop xAI's own software/hardware, all without compensation to Tesla," the lawsuit said.

The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk's equity stake in xAI to Tesla.

"Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not," the lawsuit says.

Read 11 remaining paragraphs | Comments

AI Candidate Running For Parliament in the UK Says AI Can Humanize Politics

An artificial intelligence candidate is on the ballot for the United Kingdom's general election next month. From a report: "AI Steve," represented by Sussex businessman Steve Endacott, will appear on the ballot alongside non-AI candidates running to represent constituents in the Brighton Pavilion area of Brighton and Hove, a city on England's southern coast. "AI Steve is the AI co-pilot," Endacott said in an interview. "I'm the real politician going into Parliament, but I'm controlled by my co-pilot." Endacott is the chairman of Neural Voice, a company that creates personalized voice assistants for businesses in the form of an AI avatar. Neural Voice's technology is behind AI Steve, one of the seven characters the company created to showcase its technology. He said the idea is to use AI to create a politician who is always around to talk with constituents and who can take their views into consideration. People can ask AI Steve questions or share their opinions on Endacott's policies on its website, where a large language model gives answers in voice and text based on a database of information about his party's policies. If Endacott doesn't have a policy for a particular issue raised, the AI will conduct some internet research before engaging the voter and pushing them to suggest a policy.

Read more of this story at Slashdot.

Microsoft Is Pulling Recall From Copilot+ at Launch

It’s been a tough few weeks for Microsoft’s headlining Copilot+ feature, and it hasn't even launched yet. After being called out over security concerns and then switched from on-by-default to opt-in, Recall is now being outright delayed.

In a blog post on the Windows website on Thursday, Windows + Devices corporate vice president Pavan Davuluri wrote that Recall will no longer launch with Copilot+ AI laptops on June 18th, and is instead being relegated to a Windows Insider preview “in the coming weeks.”

“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider Community to ensure the experience meets our high standards for quality and security,” Davuluri explained.

The AI feature was plagued by security concerns

That’s a big blow for Microsoft, as Recall was supposed to be the star feature for its big push into AI laptops. The idea was for it to act like a sort of rewind button for your PC, taking constant screenshots and allowing you to search through previous activity to get caught up on anything you did in the past, from reviewing your browsing habits to tracking down old school notes. But the feature also raised concerns over who has access to that data.

Davuluri explains in his post that screenshots are stored locally and that Recall does not send snapshots to Microsoft. He also says that snapshots have “per-user encryption” that keeps administrators and others logged into the same device from viewing them.

At the same time, security researchers have been able to uncover and extract the text file that a pre-release version of Recall uses for storage, which they claimed was unencrypted. This puts things like passwords and financial information at risk of being stolen by hackers, or even just a nosy roommate.

Davuluri wasn’t clear about when exactly Windows Insiders would get their hands on Recall, but thanked the community for giving a “clear signal” that Microsoft needed to do more. Specifically, he credited that feedback for the decisions to disable Recall by default and to require Windows Hello (either biometric identification or a PIN) before users can access it.

Granted, limiting the preview to the Windows Insider program, which anyone can join for free, gives Microsoft more time to collect and weigh this kind of feedback. But it also takes the wind out of Copilot+’s sails just a week before launch, leaving the base experience nearly identical to current versions of Windows (outside of a few creative apps).

It also puts Qualcomm, which will be providing the chips for Microsoft’s first Copilot+ PCs, on a more even playing field with AMD and Intel, which won’t get Copilot+ features until later this year.

Clearview AI Used Your Face. Now You May Get a Stake in the Company.

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action. The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago. Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.

Read more of this story at Slashdot.

Pride month small press books roundup

Over 50 small press books under the fold! (previous: 1, 2, and 3)

  • The Ace and Aro Relationship Guide: Making It Work in Friendship, Love, and Sex by Cody Daigle-Orians (Jessica Kingsley Publishers, 21 Oct 2024): Whether we're talking about friendships, romantic relationships, casual dates or intimate partners, this guide will help you not only live authentically in your ace and aro identity, but joyfully share it with others. (Amazon; Bookshop)
  • And Then There Was One by Michele Castleman (Bold Strokes Books, 1 June 2024): Six weeks after Lyla Smith dragged her sister's dead body onto the Lake Erie shore, she escapes her small Ohio town to work as a nanny for distant relatives on their remote private island. (Amazon; Bookshop)
  • Antiquity by Hanna Johansson, trans. Kira Josefsson (Catapult, 6 Feb 2024): Elegant, slippery, and provocative, Antiquity is a queer Lolita story by prize-winning Swedish author Hanna Johansson—a story of desire, power, obsession, observation, and taboo. (Amazon; Bookshop)
  • Born Backwards by Tanya Olson (YesYes Books, 18 Jun 2024): Olson's third poetry collection "reports from inside butch culture in the 1980s American South as it traces how geography, family, experiences, and popular culture shape one queer life." (Amazon; Bookshop)
  • Broughtupsy by Christina Cooke (Catapult, 23 Jan 2024): At once cinematic yet intimate, Broughtupsy is an enthralling debut novel about a young Jamaican woman grappling with grief as she discovers her family, her home, is always just out of reach. (Amazon; Bookshop)
  • The Call Is Coming from Inside the House: Essays by Allyson McOuat (ECW Press, Apr 2024): In a series of intimate and humorous dispatches, McOuat examines her identity as a queer woman, and as a mother, through the lens of the pop culture moments in the '80s and '90s that molded her identity. (Amazon; Bookshop)
  • Dances of Time and Tenderness by Julian Carter (Nightboat Books, 4 June 2024): A cycle of stories linking queer memory, activism, death, and art in a transpoetic history of desire and touch. (Amazon; Bookshop)
  • The Dragonfly Gambit by A. D. Sui (Neon Hemlock Press, 16 Apr 2024): Nearly ten years after Inez Kato sustained a career-ending injury during a military exercise gone awry, she lies, cheats, and seduces her way to the very top, to destroy the fleet that she was once a part of, even at the cost of her own life. Ennis Rezál, Third Daughter of the Rule, has six months left to live. She is desperate to end the twenty-year war she was birthed to fight. But when she brings Inez aboard the mothership, a chess game of manipulation and double-crossing begins to unfold, and the Rule doesn't stand a chance. (Amazon; Bookshop)
  • An Evening with Birdy O'Day by Greg Kearney (Arsenal Pulp, 16 Apr 2024): A funny, boisterous, and deeply moving novel about aging hairstylist Roland's childhood friendship with Birdy O'Day, whose fevered quest for pop music glory drives them apart. (Amazon; Bookshop)
  • Finding Echoes by Foz Meadows (Neon Hemlock, 30 Jan 2024): Snow Kidama speaks to ghosts amongst the local gangs of Charybdis Precinct, isolated from the rest of New Arcadia by the city's ancient walls. But when his old lover, Gem—a man he thought dead—shows up in need of his services, Snow is forced to reevaluate everything. (Amazon; Bookshop)
  • Firebugs by Nino Bulling (Drawn & Quarterly, 13 Feb 2024): After a trip to Paris, Ingken returns home ready for a break from drugs. Their supportive partner, Lily, is flushed, excited about a new connection she's made. Although Ingken wants to be happy for her, there's a discomfort they can't shake. Sleepless nights fill with an endless scroll of images and headlines about climate disaster. A vague dysphoria simmers under their skin; they are able to identify that, like Lily, they are changing, but they're not sure exactly how and at what pace. Everyone keeps telling them to burn themself to the ground and build themself back up, but they worry about the kind of debris that fire might leave behind. (Amazon; Bookshop)
  • The Future Was Color by Patrick Nathan (Counterpoint LLC, 4 June 2024): As a Hungarian immigrant working as a studio hack writing monster movies in 1950s Hollywood, George Curtis must navigate the McCarthy-era studio system filled with possible communists and spies, the life of closeted men along Sunset Boulevard, and the inability of the era to cleave love from persecution and guilt. But when Madeline, a famous actress, offers George a writing residency at her estate in Malibu to work on the political writing he cares most deeply about, his world is blown open. (Amazon; Bookshop)
  • Getting Glam at Gram's by Sara Weed, ill. Erin Hawryluk (Arsenal Pulp, 3 Sept 2024): A colourful and celebratory picture book that embraces all gender expressions through a fun family fashion show. (Amazon; Bookshop)
  • God of River Mud by Vic Sizemore (West Virginia UP, Jan 2024): To escape a life of poverty and abuse, Berna Cannaday marries Zechariah Minor, a fundamentalist Baptist preacher, and commits herself to his faith, trying to make it her own. After Zechariah takes a church beside the Elk River in rural Clay, West Virginia, Berna falls in love with someone from their congregation—Jordan, a woman who has known since childhood that he was meant to be a man. (Amazon; Bookshop)
  • Healthy Chest Binding for Trans and Non-Binary People: A Practical Guide by Frances Reed (Jessica Kingsley Publishers, 18 Apr 2024): Binding is a crucial strategy in many transgender and non-binary people's lives for coping with gender dysphoria, yet the vast majority of those who bind report some negative physical symptoms. Written by Frances Reed, a licensed bodywork and massage therapist specialising in gender transition, this comprehensive guide helps you make the healthiest choices from the very start of your binding journey. (Amazon; Bookshop)
  • If We Were Stars by Eule Grey (Ninestar Press, 2 Apr 2024): Best friends since they were ten years old, Kurt O'Hara and Beast Harris tackle the typical teenage challenges together: pronouns, AWOL bodies, not to mention snogging. A long-distance relationship with an alien named Iuvenis is the least of their troubles. (Amazon)
  • Keep This Off The Record by Arden Joy (Rising Action, 31 Jan 2024): A romance: Abigail Meyer and Freya Jonsson can't stand one another. But could their severe hatred be masking something else entirely? (Amazon; Bookshop)
  • The Long Hallway by Richard Scott Larson (University of Wisconsin Press, 16 Apr 2024): Growing up queer, closeted, and afraid, Richard Scott Larson found expression for his interior life in horror films, especially John Carpenter's 1978 classic, Halloween. He developed an intense childhood identification with Michael Myers, Carpenter's inscrutable masked villain, as well as Michael's potential victims. Larson scrutinizes this identification, meditating on horror as a metaphor for the torments of the closet. (Amazon; Bookshop)
  • Love, Leda by Mark Hyatt (Nightboat Books, 24 Sept 2024): This portrait of queer, working class London drifts from coffee shop to house party, in search of the next tryst. (Amazon; Bookshop)
  • Lush Lives by J. Vanessa Lyon (Grove Atlantic/Roxane Gay Books, 20 Aug 2024): With beguiling wit and undeniable passion, Lush Lives is a deliciously queer and sexy novel about bold, brilliant women unafraid to take risks and fight for what they love. (Amazon; Bookshop)
  • Medusa of the Roses by Navid Sinaki (Grove Atlantic, 13 Aug 2024): Sex, vengeance, and betrayal in modern-day Tehran—Navid Sinaki's bold and cinematic debut is a queer literary noir following Anjir, a morbid romantic and petty thief whose boyfriend disappears just as they're planning to leave their hometown for good. (Amazon; Bookshop)
  • Portrait of a Body by Julie Delporte (Drawn & Quarterly, 16 Jan 2024): As she examines her life experience and traumas with great care, Delporte faces the questions about gender and sexuality that both haunt and entice her. Deeply informed by her personal relationships as much as queer art and theory, Portrait of a Body is both a joyous and at times hard meditation on embodiment—a journey to be reunited with the self in an attempt to heal pain and live more authentically. (Amazon; Bookshop)
  • Power to Yield and Other Stories by Bogi Takács (Broken Eye Books, 6 Feb 2024): An AI child discovers Jewish mysticism. A student can give no more blood to their semi-sentient apartment and plans their escape. A candidate is rigorously evaluated for their ability to be a liaison to alien newcomers. A young magician gains perspective from her time as a plant. A neurodivergent woman tries to survive on a planetoid where thoughts shape reality... (Amazon; Bookshop)
  • So Long Sad Love by Mirion Malle, trans. Aleshia Jensen (Drawn & Quarterly, 23 Apr 2024): This graphic novel swaps out the wobbly transition of weaving a new existence into being post-heartbreak for the surprising effortlessness and simplicity of a life already rebuilt. Cleo not only rediscovers her identity as an artist but uncovers her capacity to find love where she has always been most at home: with other women. Mirion Malle dares to tell a story with a happier ending in a stunning, full-color follow-up to the multi-award-nominated This is How I Disappear. (Amazon; Bookshop)
  • Sons, Daughters by Ivana Bodrožić, trans. Ellen Elias-Bursać (Seven Stories Press, 30 Apr 2024): This novel tells a story of being locked in: socially, domestically and intimately. Here the Croatian poet and writer depicts a wrenching love between a transgender man and a woman, as well as a demanding love between a mother and a daughter, in a narrative about breaking through and liberation of the mind, family, and society. (Amazon; Bookshop)
  • Vantage Points: On Media as Trans Memoir by Chase Joynt (Arsenal Pulp, 17 Sep 2024): Following the death of the family patriarch, a box of newly procured family documents reveals writer-filmmaker Chase Joynt's previously unknown connection to Canadian media maverick Marshall McLuhan. Vantage Points takes up the surprising appearance of McLuhan in Joynt's family archive as a way to think about legacies of childhood sexual abuse and how we might process and represent them. (Amazon; Bookshop)
  • You Can't Go Home Again by Jeanette Bears (Bold Strokes Books, 13 Aug 2024): Contemporary romance. Raegan Holcolm thought all they wanted was a proud military career, and that's what they had. But a sudden injury sends them back to their hometown with a wealth of pain, both physical and emotional, insecurities, and the reality that the career they'd chosen above all else has rejected them. The first time they fell in love, Rae left Jules behind. For love to have a second chance, they'll need to realize that, all along, home might have been a person just as much as a place. (Amazon; Bookshop)

Previous roundups 1, 2, and 3 also included Bad Seed by Gabriel Carle, trans. Heather Houde (Feminist Press), The Default World by Naomi Kanakia (Feminist Press), Disobedience by Daniel Sarah Karasik (Book*hug), Indian Winter by Kazim Ali (Coach House), Love the World Or Get Killed Trying by Alvina Chamberland (Noemi), My Body Is Paper by Gil Cuadros (City Lights), These Letters End In Tears by Musih Tedji Xaviere (Catapult), and, finally, How We Named the Stars by Andrés N. Ordorica (Tin House), which Bookshop included in its Pride Month 15% off sale with code PRIDE24. The Bookshop sale also includes these small press titles that I haven't previously listed:
  • All-Night Pharmacy (Ruth Madievsky, Catapult, Winner of the National Jewish Book Award for Debut Fiction)
  • Birthright (George Abraham, Button Poetry, "every pronoun is a Free Palestine," Bisexual Poetry Finalist in the 2021 Lambda Literary Awards; Button Poetry also has a 3 for $36 Pride Month deal going on, including Birthright and poetry by Blythe Baird, Sierra DeMulder, Andrea Gibson, Ebony Stewart, and more)
  • Boulder (Eva Baltasar, trans. Julia Sanches, And Other Stories, a queer couple struggles with motherhood, shortlisted for the 2023 International Booker Prize)
  • Brown Neon: Essays (Raquel Gutiérrez, Coffee House Press, "part butch memoir, part ekphrastic travel diary, part queer family tree")
  • Cecilia (K-Ming Chang, Coffee House Press, an "erotic, surreal novella")
  • Corey Fah Does Social Mobility (Isabel Waidner, Graywolf, "A novel that celebrates radical queer survival and gleefully takes a hammer to false notions of success")
  • A Dream of a Woman (Casey Plett, Arsenal Pulp Press, short stories by the author of the Lambda Literary Award-winning Little Fish)
  • Everything for Everyone: An Oral History of the New York Commune, 2052-2072 (Eman Abdelhadi & M. E. O'Brien, Common Notions, speculative fiction)
  • Feed (Tommy Pico, Tin House Books, fourth book in Teebs tetralogy, "an epistolary recipe for the main character, a poem of nourishment, and a jaunty walk through New York's High Line park, with the lines, stanzas, paragraphs, dialogue, and registers approximating the park's cultivated gardens of wildness")
  • Females (Andrea Long Chu, Verso, provocative genre-defying investigation into femaleness)
  • The Free People's Village (Sim Kern, Levine Querido, a novel of "eat-the-rich climate fiction")
  • The Future Is Disabled: Prophecies, Love Notes and Mourning Songs (Lambda Literary Award-winning Leah Lakshmi Piepzna-Samarasinha, Arsenal Pulp Press, disability justice, care and mutual aid)
  • Her Body and Other Parties: Stories (Carmen Maria Machado, Graywolf Press, "blithely demolishes the arbitrary borders between psychological realism and science fiction... to shape startling narratives that map the realities of women's lives and the violence visited upon their bodies")
  • High-Risk Homosexual: A Memoir (Edgar Gomez, Soft Skull, "a touching and often hilarious spiralic path to embracing a gay, Latinx identity against a culture of machismo")
  • Homie: Poems (Danez Smith, Graywolf Press, finalist for the National Book Critics Circle Award for Poetry and the NAACP Image Award for Poetry)
  • How to Fuck Like a Girl (Vera Blossom, Dopamine/Semiotext(e), a how-to guide)
  • I Love This Part (Tillie Walden, Avery Hill Publishing, graphic novel of teen queer love)
  • It Came from the Closet: Queer Reflections on Horror (ed. Joe Vallese, Feminist Press, essays by Carmen Maria Machado, Bruce Owens Grimm, Richard Scott Larson)
  • Love Is an Ex-Country: A Memoir (Randa Jarrar, Catapult, "Queer. Muslim. Arab American. A proudly Fat femme.")
  • Mrs. S (K. Patrick, Europa Editions, a butch English boarding school matron begins an illicit affair with the headmaster's wife)
  • Outwrite: The Speeches That Shaped LGBTQ Literary Culture (eds. Julie R. Enszer, Elena Gross, Rutgers UP, 27 of the most memorable speeches from the OutWrite conference)
  • Playboy (Constance Debre, trans. Holly James, Semiotext(e), the first volume of the renowned trilogy on the author's decision to abandon her bourgeois Parisian life to become a lesbian and writer)
  • Sluts: Anthology (ed. Michelle Tea, Dopamine Books, anthology of essays and stories on sexual promiscuity in contemporary American culture)
  • Stone Fruit (Lee Lai, Fantagraphics Books, a queer couple opens up to their families in this 2022 Lambda Literary Award winner for Comics)
  • Survival Takes a Wild Imagination: Poems (Fariha Róisín, Andrews McMeel Publishing, "Who is my family? My father? How do I love a mother no longer here? Can I see myself? What does it mean to be Bangladeshi? What is a border?")
  • Time Is the Thing a Body Moves Through (T. Fleischmann, Coffee House Press, "an autobiographical narrative of embodiment, visual art, history, and loss")
  • Thunder Song: Essays (Sasha Lapointe, Counterpoint LLC, what it means to be a proudly queer indigenous woman in the USA)
  • The Tradition (Jericho Brown, Copper Canyon Press, Pulitzer Prize-winning poetry that examines black bodies, desire, privilege and resistance)
  • When We Were Sisters (Fatimah Asghar, One World, "traces the intense bond of three orphaned siblings," longlisted for the National Book Award)
  • You Exist Too Much (Zaina Arafat, Catapult: Palestinian American queer coming-of-age novel)
  • Your Emergency Contact Has Experienced an Emergency (Chen Chen, BOA Editions, "What happens when everything falls away, when those you call on in times of need are themselves calling out for rescue?")
With management's blessing, I set up a MeFi affiliate membership with Bookshop, so the links above will benefit MetaFilter.

Microsoft delays Recall again, won’t debut it with new Copilot+ PCs after all

Microsoft will be delaying its controversial Recall feature again, according to an updated blog post by Windows and Devices VP Pavan Davuluri. And when the feature does return "in the coming weeks," Davuluri writes, it will be as a preview available to PCs in the Windows Insider Program, the same public testing and validation pipeline that all other Windows features usually go through before being released to the general populace.

Recall is a new Windows 11 AI feature that will be available on PCs that meet the company's requirements for its "Copilot+ PC" program. Copilot+ PCs need at least 16GB of RAM, 256GB of storage, and a neural processing unit (NPU) capable of at least 40 trillion operations per second (TOPS). The first (and for a few months, only) PCs that will meet this requirement are all using Qualcomm's Snapdragon X Plus and X Elite Arm chips, with compatible Intel and AMD processors following later this year. Copilot+ PCs ship with other generative AI features, too, but Recall's widely publicized security problems have sucked most of the oxygen out of the room so far.
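Those minimums amount to a simple spec gate. As a toy illustration only (Windows detects eligibility at the firmware and driver level, not like this), the published numbers reduce to:

```python
# Encodes the Copilot+ minimums quoted above; the sample values are made up.
from dataclasses import dataclass

@dataclass
class MachineSpec:
    ram_gb: int
    storage_gb: int
    npu_tops: float

MINIMUMS = MachineSpec(ram_gb=16, storage_gb=256, npu_tops=40.0)

def meets_copilot_plus(s: MachineSpec) -> bool:
    return (s.ram_gb >= MINIMUMS.ram_gb
            and s.storage_gb >= MINIMUMS.storage_gb
            and s.npu_tops >= MINIMUMS.npu_tops)

# A Snapdragon X Elite laptop (45 TOPS NPU) clears the bar:
print(meets_copilot_plus(MachineSpec(16, 512, 45.0)))   # True
# A machine with a 10 TOPS NPU does not, however much RAM it has:
print(meets_copilot_plus(MachineSpec(64, 1024, 10.0)))  # False
```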

The Windows Insider preview of Recall will still require a PC that meets the Copilot+ requirements, though third-party scripts may be able to turn on Recall for PCs without the necessary hardware. We'll know more when Recall makes its reappearance.

Read 7 remaining paragraphs | Comments

This photo got 3rd in an AI art contest—then its human photographer came forward

To be fair, I wouldn't put it past an AI model to forget the flamingo's head. (credit: Miles Astray)

A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.

The 1839 Awards launched last year as a way to "honor photography as an art form," with a panel of experienced judges who work with photos at The New York Times, Christie's, and Getty Images, among others. The contest rules sought to segregate AI images into their own category as a way to separate out the work of increasingly impressive image generators from "those who use the camera as their artistic medium," as the 1839 Awards site puts it.

For the non-AI categories, the 1839 Awards rules note that they "reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files." Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.

Read 9 remaining paragraphs | Comments

Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

On Monday, Apple announced it would be integrating OpenAI's ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google's multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT's placement on its devices as compensation enough.

"Apple isn’t paying OpenAI as part of the partnership," writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. "Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments."

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT's capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

Read 7 remaining paragraphs | Comments

Cop busted for unauthorized use of Clearview AI facial recognition resigns

An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear "misuse" of Clearview AI's controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as "the world's largest facial recognition network." The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.
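Clearview hasn't published its internals, but search over a face database of this kind generally means embedding each face as a fixed-length vector and ranking stored vectors by similarity to a probe image. Here is a generic sketch with random stand-in vectors (at Clearview's claimed scale, a real system would use an approximate nearest-neighbor index rather than this brute-force scan):

```python
import numpy as np

rng = np.random.default_rng(0)

# 100,000 stored "face embeddings" (random stand-ins, not real face features),
# L2-normalized so a dot product equals cosine similarity.
db = rng.normal(size=(100_000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

probe = rng.normal(size=128)           # embedding of the face to identify
probe /= np.linalg.norm(probe)

scores = db @ probe                    # cosine similarity against every record
top5 = np.argsort(scores)[-5:][::-1]   # indices of the 5 closest candidates
print(top5, scores[top5])
```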

Read 16 remaining paragraphs | Comments

Wyoming mayoral candidate wants to govern by AI bot

Victor Miller is running for mayor of Cheyenne, Wyoming, with an unusual campaign promise: If elected, he will not be calling the shots—an AI bot will. VIC, the Virtual Integrated Citizen, is a ChatGPT-based chatbot that Miller created. And Miller says the bot has better ideas—and a better grasp of the law—than many people currently serving in government.

“I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,” he says. According to Miller, VIC will make the decisions, and Miller will be its “meat puppet,” attending meetings, signing documents, and otherwise doing the corporeal job of running the city.

But whether VIC—and Victor—will be allowed to run at all is still an open question.

Read 20 remaining paragraphs | Comments

Turkish student creates custom AI device for cheating university exam, gets arrested

A photo illustration of what a shirt-button camera could look like. (credit: Aurich Lawson | Getty Images)

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, report Reuters and The Daily Mail.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person's eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a "router" (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

Read 5 remaining paragraphs | Comments

New Stable Diffusion 3 release excels at AI-generated body horror

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. (credit: HorneyMetalBeing)

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]," details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to lack of good examples in early training data sets, but more recently, several image-synthesis models seemed to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts that gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

Read 10 remaining paragraphs | Comments

One of the major sellers of detailed driver behavioral data is shutting down

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize the rich data regularly sent from cars back to their manufacturers, or telematics. But a concrete example of this was reported by The New York Times' Kashmir Hill, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.

Read 4 remaining paragraphs | Comments

Netcraft Uses Its AI Platform to Trick and Track Online Scammers

At the RSA Conference last month, Netcraft introduced a generative AI-powered platform designed to interact with cybercriminals to gain insights into the operations of the conversational scams they’re running and disrupt their attacks. At the time, Ryan Woodley, CEO of the London-based company that offers a range of services from phishing detection to brand, domain,..

The post Netcraft Uses Its AI Platform to Trick and Track Online Scammers appeared first on Security Boulevard.

ChatGPT is bullshit

Using bullshit as a term of art (as defined by Harry G. Frankfurt), ChatGPT and its various LLM cohort can best be described as bullshit machines.

Abstract: Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems. And this bullshit might just be 'good enough' to give bosses even more leverage against workers.

Turkish Student Arrested For Using AI To Cheat in University Exam

Turkish authorities have arrested a student for cheating during a university entrance exam by using a makeshift device linked to AI software to answer questions. From a report: The student was spotted behaving in a suspicious way during the exam at the weekend and was detained by police, before being formally arrested and sent to jail pending trial. Another person, who was helping the student, was also detained.

Read more of this story at Slashdot.

How Amazon Blew Alexa's Shot To Dominate AI

Amazon unveiled a new generative AI-powered version of its Alexa voice assistant at a packed event in September 2023, demonstrating how the digital assistant could engage in more natural conversation. However, nearly a year later, the updated Alexa has yet to be widely released, with former employees citing technical challenges and organizational dysfunction as key hurdles, Fortune reported Thursday. The magazine reports that the Alexa large language model lacks the necessary data and computing power to compete with rivals like OpenAI. Additionally, Amazon has prioritized AI development for its cloud computing unit, AWS, over Alexa, the report said. Despite a $4 billion investment in AI startup Anthropic, privacy concerns and internal politics have prevented Alexa's teams from fully leveraging Anthropic's technology.

Read more of this story at Slashdot.

Cyberattack Hits Dubai: Daixin Team Claims to Steal Confidential Data, Residents at Risk

The city of Dubai, known for its affluence and wealthy residents, has allegedly been hit by a ransomware attack claimed by the cybercriminal group Daixin Team. The group announced the city of Dubai ransomware attack on its dark web leak site on Wednesday, claiming to have stolen between 60 and 80GB of data from the Government of Dubai's network systems. According to the Daixin Team's post, the stolen data includes ID cards, passports, and other personally identifiable information (PII). Although the group noted that the 33,712 files have not been fully analyzed or dumped on the leak site, the potential exposure of such sensitive information is concerning. Dubai, a city with over three million residents and the highest concentration of millionaires globally, presents a rich target for cybercriminals.

Potential Impact of the City of Dubai Ransomware Attack

The stolen data reportedly contains extensive personal information, such as full names, dates of birth, nationalities, marital statuses, job descriptions, supervisor names, housing statuses, phone numbers, addresses, vehicle information, primary contacts, and language preferences. Additionally, the databases appear to include business records, hotel records, land ownership details, HR records, and corporate contacts. Given that over 75% of Dubai's residents are expatriates, the stolen data provides a trove of information that could be used for targeted spear phishing attacks, vishing attacks, identity theft, and other malicious activities. The city's status as a playground for the wealthy, including 212 centi-millionaires and 15 billionaires, further heightens the risk of targeted attacks.

Daixin Team: A Persistent Threat

The Daixin Team, a Russian-speaking ransomware and data extortion group, has been active since at least June 2022. Known primarily for its cyberattacks on the healthcare sector, Daixin has recently expanded its operations to other industries, employing sophisticated hacking techniques. A 2022 report by the US Cybersecurity and Infrastructure Security Agency (CISA) highlights Daixin Team's focus on the healthcare sector in the United States. However, the group has also targeted other sectors, including the hospitality industry. Recently, Daixin claimed responsibility for a cyberattack on Omni Hotels & Resorts, exfiltrating sensitive data, including records of all visitors dating back to 2017. In another notable case, Bluewater Health, a prominent hospital network in Ontario, Canada, fell victim to a cyberattack attributed to Daixin Team. The attack affected several hospitals, including Windsor Regional Hospital, Erie Shores Healthcare, Chatham-Kent Health, and Hôtel-Dieu Grace Healthcare. The Government of Dubai has yet to release an official statement regarding the ransomware attack. However, the Dubai government's official websites remained fully functional when accessed, showing no sign of disruption, which leaves the alleged ransomware attack unverified. Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

ChatGPT is coming to your iPhone. These are the four reasons why it’s happening far too early | Chris Stokel-Walker

The AI’s errors can still be comical and catastrophic. Do we really want this technology to be in so many pockets?

Tech watchers and nerds like me get excited by tools such as ChatGPT. They look set to improve our lives in many ways – and hopefully augment our jobs rather than replace them.

But in general, the public hasn’t been so enamoured of the AI “revolution”. Make no mistake: artificial intelligence will have a transformative effect on how we live and work – it is already being used to draft legal letters and analyse lung-cancer scans. ChatGPT was also the fastest-growing app in history after it was released. That said, four in 10 Britons haven’t heard of ChatGPT, according to a recent survey by the University of Oxford, and only 9% use it weekly or more frequently.

Chris Stokel-Walker is the author of How AI Ate the World, which was published last month

Continue reading...

© Photograph: Angga Budhiyanto/ZUMA Press Wire/REX/Shutterstock

Apple says long-awaited AI will set new privacy standards – but experts are divided

Apple maintains its in-house AI is made with security in mind, but some professionals say ‘it remains to be seen’

At its annual developers conference on Monday, Apple announced its long-awaited artificial intelligence system, Apple Intelligence, which will customize user experiences, automate tasks and – the CEO Tim Cook promised – will usher in a “new standard for privacy in AI”.

While Apple maintains its in-house AI is made with security in mind, its partnership with OpenAI has sparked plenty of criticism. OpenAI tool ChatGPT has long been the subject of privacy concerns. Launched in November 2022, it collected user data without explicit consent to train its models, and only began to allow users to opt out of such data collection in April 2023.

Continue reading...

© Photograph: Justin Sullivan/Getty Images

Stable Diffusion 3 Mangles Human Bodies Due To Nudity Filters

An anonymous reader quotes a report from Ars Technica: On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wildly anatomically incorrect visual abominations with ease. A thread on Reddit, titled "Is this release supposed to be a joke? [SD3-2B]," details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled "Why is SD3 so bad at generating girls lying on the grass?," shows similar issues, but for entire human bodies. AI image fans are so far blaming Stable Diffusion 3's anatomy failures on Stability's insistence on filtering out adult content (often called "NSFW" content) from the SD3 training data that teaches the model how to generate images. "Believe it or not, heavily censoring a model also gets rid of human anatomy, so... that's what happened," wrote one Reddit user in the thread. The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans accurately, and AI researchers soon discovered that censoring adult content that contains nudity also severely hampers an AI model's ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by excluding NSFW content. "It works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw," wrote another Redditor. Basically, any time a prompt homes in on a concept that isn't represented well in its training dataset, the image model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying. Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt "a man showing his hands" returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.
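For readers who want to reproduce these probes locally rather than through the Hugging Face demo, a minimal sketch along the following lines should work. It assumes the Hugging Face diffusers library, a CUDA GPU, and access to the gated stabilityai/stable-diffusion-3-medium-diffusers checkpoint; the sampler settings are illustrative defaults, not values from the report.

```python
# Minimal sketch: generating an image with Stable Diffusion 3 Medium via
# Hugging Face diffusers. Assumes the gated checkpoint
# "stabilityai/stable-diffusion-3-medium-diffusers" (its license must be
# accepted on huggingface.co first) and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
).to("cuda")

# The same prompt Ars Technica used to probe the model's anatomy failures.
image = pipe(
    prompt="a man showing his hands",
    num_inference_steps=28,   # illustrative values, not from the article
    guidance_scale=7.0,
).images[0]
image.save("hands.png")
```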

Read more of this story at Slashdot.

Adobe Says It Won't Train AI On Customers' Work In Overhauled ToS

In a new blog post, Adobe said it has updated its terms of service to clarify that it won't train AI on customers' work. The move comes after a week of backlash from users who feared that an update to Adobe's ToS would permit such actions. The clause in question, included in the ToS sent to Creative Cloud Suite users, claimed that Adobe "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience." The Verge reports: The new terms of service are expected to roll out on June 18th and aim to better clarify what Adobe is permitted to do with its customers' work, according to Adobe's president of digital media, David Wadhwani. "We have never trained generative AI on our customer's content, we have never taken ownership of a customer's work, and we have never allowed access to customer content beyond what's legally required," Wadhwani said to The Verge. [...] Adobe's chief product officer, Scott Belsky, acknowledged that the wording was "unclear" and that "trust and transparency couldn't be more crucial these days." Wadhwani says that the language used within Adobe's ToS was never intended to permit AI training on customers' work. "In retrospect, we should have modernized and clarified the terms of service sooner," Wadhwani says. "And we should have more proactively narrowed the terms to match what we actually do, and better explained what our legal requirements are." "We feel very, very good about the process," Wadhwani said with regard to content moderation surrounding Adobe Stock and Firefly training data, but acknowledged it's "never going to be perfect." Wadhwani says that Adobe can remove content that violates its policies from Firefly's training data and that customers can opt out of automated systems designed to improve the company's service. Adobe said in its blog post that it recognizes "trust must be earned" and is taking on feedback to discuss the new changes. Greater transparency is a welcome change, but it's likely going to take some time to convince scorned creatives that Adobe doesn't hold any ill intent. "We are determined to be a trusted partner for creators in the era ahead. We will work tirelessly to make it so."

Read more of this story at Slashdot.

Google’s AI Is Still Recommending Putting Glue in Your Pizza, and This Article Is Part of the Problem

Despite explaining away issues with its AI Overviews while promising to make them better, Google is still apparently telling people to put glue in their pizza. And in fact, articles like this are only making the situation worse.

When they launched to everyone in the U.S. shortly after Google I/O, AI Overviews immediately became the laughing stock of search, telling people to eat rocks, use butt plugs while squatting, and, perhaps most famously, to add glue to their homemade pizza.

Most of these offending answers were quickly scrubbed from the web, and Google issued a somewhat defensive apology. Unfortunately, if you use the right phrasing, you can reportedly still get these blatantly incorrect "answers" to pop up.

In a post on June 11, Bluesky user Colin McMillen said he was still able to get AI Overviews to tell him to add “1/8 cup, or 2 tablespoons, of white, nontoxic glue to pizza sauce” when asking “how much glue to add to pizza.”

The question seems purposefully designed to mess with AI Overviews, sure—although given the recent discourse, a well-meaning person who’s not so terminally online might legitimately be curious what all the hubbub is about. At any rate, Google did promise to address even leading questions like these (as it probably doesn’t want its AI to appear to be endorsing anything that could make people sick), and it clearly hasn’t.

Perhaps more frustrating is the fact that Google’s AI Overview sourced the recent pizza claim to Katie Notopoulos of Business Insider, who most certainly did not tell people to put glue in their pizza. Rather, Notopoulos was reporting on AI Overviews’ initial mistake; Google’s AI simply attributed that mistake to her because she had written about it.

“Google’s AI is eating itself already,” McMillen said, in response to the situation.

I wasn’t able to reproduce the response myself, but The Verge did, though with different wording: The AI Overview still cited Business Insider, but rightly attributed the initial advice to Google’s own AI. Which means Google AI’s source for its ongoing hallucination is...itself.

What’s likely going on here is that Google stopped its AI from using sarcastic Reddit posts as sources, but it’s now turning to news articles reporting on its mistakes to fill in the gaps. In other words, as Google messes up, and as people report on it, Google will then use that reporting to back its initial claims. The Verge compared it to Google bombing, an old tactic where people would link the words “miserable failure” to a photo of George W. Bush so often that Google images would return a photo of the president when you searched for the phrase.

Google is likely to fix this latest AI hiccup soon, but it’s all a bit of a “laying the train tracks as you go” situation, and certainly not likely to do anything to improve AI search’s reputation.

Anyway, just in case Google attaches my name to a future AI Overview as a source, I want to make it clear: Do not put glue in your pizza (and leave out the pineapple while you’re at it).

SoftBank's New AI Makes Angry Customers Sound Calm On Phone

SoftBank has developed AI voice-conversion technology aimed at reducing the psychological stress on call center operators by altering the voices of angry customers to sound calmer. Japan's The Asahi Shimbun reports: The company launched a study on "emotion canceling" three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call. Toshiyuki Nakatani, a SoftBank employee, came up with the idea after watching a TV program about customer harassment. "If the customers' yelling voice sounded like Kitaro's Eyeball Dad, it would be less scary," he said, referring to a character in the popular anime series "Gegege no Kitaro." The voice-altering AI learned many expressions, including yelling and accusatory tones, to improve vocal conversions. Ten actors were hired to perform more than 100 phrases with various emotions, training the AI with more than 10,000 pieces of voice data. The technology does not change the wording, but the pitch and inflection of the voice is softened. For instance, a woman's high-pitched voice is lowered in tone to sound less resonant. A man's bass tone, which may be frightening, is raised to a higher pitch to sound softer. However, if an operator cannot tell if a customer is angry, the operator may not be able to react properly, which could just upset the customer further. Therefore, the developers made sure that a slight element of anger remains audible. According to the company, the biggest burdens on operators are hearing abusive language and being trapped in long conversations with customers who will not get off the line -- such as when making persistent requests for apologies. With the new technology, if the AI determines that the conversation is too long or too abusive, a warning message will be sent out, such as, "We regret to inform you that we will terminate our service." [...] The company plans to further improve the accuracy of the technology by having AI learn voice data and hopes to sell the technology starting from fiscal 2025. Nakatani said, "AI is good at handling complaints and can do so for long hours, but what angry customers want is for a human to apologize to them." He said he hopes that AI "will become a mental shield that prevents operators from overstraining their nerves."
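SoftBank has not published its model, but the pitch-and-inflection softening the article describes can be sketched with off-the-shelf signal processing. Below is a minimal illustrative sketch, not SoftBank's system; it assumes the librosa, numpy, and soundfile packages, and the 170 Hz "neutral" target and the four-semitone cap are invented for illustration (the cap stands in for leaving "a slight element of anger" audible).

```python
# Illustrative sketch only -- not SoftBank's "emotion canceling" system.
# Softens a caller's voice by shifting extreme pitch toward a neutral
# register while leaving the words untouched.
import librosa
import numpy as np
import soundfile as sf

def soften_voice(path_in: str, path_out: str, target_f0: float = 170.0) -> None:
    y, sr = librosa.load(path_in, sr=None)

    # Estimate the caller's median pitch with librosa's pYIN tracker
    # (unvoiced frames come back as NaN, hence nanmedian).
    f0, _, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )
    median_f0 = float(np.nanmedian(f0))
    if not np.isfinite(median_f0):  # no voiced speech detected
        median_f0 = target_f0

    # Shift toward the neutral target, but cap the correction so some of
    # the original emotion remains audible, as the article describes.
    shift = 12.0 * np.log2(target_f0 / median_f0)  # semitones toward neutral
    shift = float(np.clip(shift, -4.0, 4.0))

    y_soft = librosa.effects.pitch_shift(y, sr=sr, n_steps=shift)
    sf.write(path_out, y_soft, sr)
```

A production system would operate on a live stream with low latency and model inflection as well as pitch; this offline file-to-file version only shows the core transformation.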

Read more of this story at Slashdot.

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet

An anonymous reader quotes a report from the New York Times: The news was featured on MSN.com: "Prominent Irish broadcaster faces trial over alleged sexual misconduct." At the top of the story was a photo of Dave Fanning. But Mr. Fanning, an Irish D.J. and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question. "You wouldn't believe the amount of people who got in touch," said Mr. Fanning, who called the error "outrageous." The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu. A fly-by-night journalism outlet called BNN Breaking had used an A.I. chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Mr. Fanning to the mix by including a photo of a "prominent Irish broadcaster." The story was then promoted by MSN, a web portal owned by Microsoft. The story was deleted from the internet a day later, but the damage to Mr. Fanning's reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors. BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN's featuring the misleading story with Mr. Fanning's photo or his defamation case, but the company said it had terminated its licensing agreement with BNN. During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of "seasoned" journalists and 10 million monthly visitors, surpassing The Chicago Tribune's self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN's stories. Google News often surfaced them, too. A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT. BNN's "About Us" page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an A.I.-generated image. "How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply," adds The Times. "NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content. The websites, which seem to operate with little to no human supervision, often have generic names -- such as iBusiness Day and Ireland Top News -- that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers."
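The posting-cadence detail above suggests a simple screening heuristic: no human byline sustains multiple lengthy stories per minute. Here is a minimal sketch of that idea; it is not NewsGuard's or the Times' methodology, and the threshold and the `suspicious_bylines` helper are invented for illustration.

```python
# Illustrative heuristic, not NewsGuard's method: flag bylines whose
# posting cadence is implausible for a human writer, one of the telltale
# signs the Times noted at BNN Breaking.
from collections import defaultdict
from datetime import datetime

def suspicious_bylines(posts, max_per_hour: float = 4.0) -> dict:
    """posts: iterable of (author, iso_timestamp) pairs."""
    by_author = defaultdict(list)
    for author, ts in posts:
        by_author[author].append(datetime.fromisoformat(ts))

    flagged = {}
    for author, times in by_author.items():
        times.sort()
        span_hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
        rate = len(times) / span_hours  # stories per hour
        if rate > max_per_hour:
            flagged[author] = round(rate, 1)
    return flagged

# Example: one story per minute for half an hour, far past any human ceiling.
posts = [("A. Writer", f"2024-04-01T10:{m:02d}:00") for m in range(30)]
print(suspicious_bylines(posts))  # {'A. Writer': ~62 stories/hour}
```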

Read more of this story at Slashdot.

Apple Launches ‘Private Cloud Compute’ Along with Apple Intelligence AI

In a bold attempt to redefine cloud security and privacy standards, Apple has unveiled Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed to back its new Apple Intelligence with safety and transparency while integrating Apple devices into the cloud. The move comes after recognition of the widespread concerns surrounding the combination of artificial intelligence and cloud technology.

Private Cloud Compute Aims to Secure Cloud AI Processing

Apple has stated that its new Private Cloud Compute (PCC) is designed to enforce privacy and security standards over AI processing of private information. "For the first time ever, Private Cloud Compute brings the same level of security and privacy that our users expect from their Apple devices to the cloud," said an Apple spokesperson. [Image: Private Cloud Compute and Apple Intelligence. Source: security.apple.com] At the heart of PCC is Apple's stated commitment to on-device processing. "When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services," the spokesperson explained. "But for the most sensitive data, we believe end-to-end encryption is our most powerful defense." Despite this commitment, Apple has stated that for more sophisticated AI requests, Apple Intelligence needs to leverage larger, more complex models in the cloud. This presented a challenge to the company, as traditional cloud AI security models were found lacking in meeting privacy expectations. Apple stated that PCC is designed with several key features to ensure the security and privacy of user data, claiming the following implementations (a conceptual sketch of the first property follows the list):
  • Stateless computation: PCC processes user data only for the purpose of fulfilling the user's request, and then erases the data.
  • Enforceable guarantees: PCC is designed to provide technical enforcement for the privacy of user data during processing.
  • No privileged access: PCC does not allow Apple or any third party to access user data without the user's consent.
  • Non-targetability: PCC is designed to prevent targeted attacks on specific users.
  • Verifiable transparency: PCC provides transparency and accountability, allowing users to verify that their data is being processed securely and privately.
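Apple has published design descriptions rather than reference code, so the first property can only be illustrated conceptually. The sketch below is a hypothetical illustration of "stateless computation," not Apple's implementation; `decrypt` and `run_model` are invented stand-ins for the real decryption and inference steps.

```python
# Conceptual sketch of the "stateless computation" property described
# above -- an illustration of the idea, not Apple's PCC code. User data
# exists only for the lifetime of one request and the buffer is
# overwritten afterward.
import ctypes

def handle_request(encrypted_payload: bytes, decrypt, run_model) -> bytes:
    # Decrypt into a mutable buffer so it can be wiped later.
    plaintext = bytearray(decrypt(encrypted_payload))
    try:
        # Fulfill the request; nothing here persists plaintext to disk,
        # logs, or any per-user store.
        return run_model(plaintext)
    finally:
        # Best-effort erasure: overwrite the buffer before releasing it.
        # (Python cannot guarantee no copies exist; a real system would
        # enforce this at the OS and hardware level.)
        buf = (ctypes.c_char * len(plaintext)).from_buffer(plaintext)
        ctypes.memset(ctypes.addressof(buf), 0, len(plaintext))
```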

Apple Invites Experts to Test Standards; Online Reactions Mixed

At this week's Apple Annual Developer Conference, Apple's CEO Tim Cook described Apple Intelligence as a "personal intelligence system" that could understand and contextualize personal data to deliver results that are "incredibly useful and relevant," making "devices even more useful and delightful." Apple Intelligence mines and processes data across apps, software and services on Apple devices. This mined data includes emails, images, text messages, documents, audio files, videos, contacts, calendars, Siri conversations, online preferences and past search history. The new PCC system attempts to ease consumer privacy and safety concerns. In its description of 'Verifiable transparency,' Apple stated:
"Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees."
However, despite Apple's assurances, the announcement of Apple Intelligence drew mixed reactions online, with some already likening it to Microsoft's Recall. In reaction to Apple's announcement, Elon Musk took to X to announce that Apple devices may be banned from his companies, citing the integration of OpenAI as an "unacceptable security violation." Others have also raised questions about the information that might be sent to OpenAI. [Images: Reactions to the announcement. Source: X.com] According to Apple's statements, requests made on its devices are not stored by OpenAI, and users' IP addresses are obscured. Apple stated that it would also add "support for other AI models in the future." Andy Wu, an associate professor at Harvard Business School who researches the use of AI by tech companies, highlighted the challenges of running powerful generative AI models while limiting their tendency to fabricate information: "Deploying the technology today requires incurring those risks, and doing so would be at odds with Apple's traditional inclination toward offering polished products that it has full control over."

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Craig Federighi Says Apple Hopes To Add Google Gemini, Other AI Models To iOS 18

Yesterday, Apple made waves in the media when it revealed a partnership with OpenAI during its annual WWDC keynote. That announcement centered on Apple's decision to bring ChatGPT natively to iOS 18, including Siri and other first-party apps. During a follow-up interview on Monday, Apple executives Craig Federighi and John Giannandrea hinted at possibly adding Google Gemini and other AI models in the future. 9to5Mac reports: Moderated by iJustine, the interview was held in Steve Jobs Theater this afternoon, featuring a discussion with John Giannandrea, Apple's Senior Vice President of Machine Learning and AI Strategy, and Craig Federighi, Senior Vice President of Software Engineering. During the interview, Federighi specifically referenced Apple's hopes to eventually let users choose between different models to use with Apple Intelligence. While ChatGPT from OpenAI is the only option right now, Federighi suggested that Google Gemini could come as an option down the line: "We think ultimately people are going to have a preference perhaps for certain models that they want to use, maybe one that's great for creative writing or one that they prefer for coding. And so we want to enable users ultimately to bring a model of their choice. And so we may look forward to doing integrations with different models like Google Gemini in the future. I mean, nothing to announce right now, but that's our direction." The decision to focus on ChatGPT at the start was because Apple wanted to "start with the best," according to Federighi.

Read more of this story at Slashdot.
