The Tories and Labour are forking out more than ever on social media ads, but going viral isn’t easy. We speak to influencers and strategists about the messages and memes
Why would you hold an election in November? The question came from digital marketing guru Mike Harris and was asked in a message to his friend, Labour’s campaign manager, Morgan McSweeney, earlier this year. Digital advertising is more expensive in October and November because the internet is swamped with ads for Christmas and Black Friday, said Harris, the founder of communications agency 89up. Why not pick a cheaper time of year?
The company that shaped the development of search engines is banking on chatbot-style summaries. But so far, its suggestions are pretty wild
Once upon a time, Google was great. For those who were online in 1998, history’s timeline bifurcated into two eras: BG (Before Google), and AG. It was elegant and clean: elegant because it was driven by a semi-objective algorithm called PageRank, which ranked websites according to how many other websites linked to them; and clean because it had no advertising, which of course also meant that it had no business model and accordingly was burning its way through its investors’ money.
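The PageRank idea described above can be sketched in a few lines: each page's score is repeatedly redistributed along its outbound links, so pages with more inbound links accumulate higher scores. This is a toy power-iteration sketch under illustrative assumptions (a four-page graph and the commonly cited 0.85 damping factor), not Google's actual implementation.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iters):
        # every page keeps a small "teleport" share, the rest flows along links
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            targets = outs if outs else pages  # a dangling page spreads evenly
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += damping * share
        rank = new
    return rank

# Toy graph: "c" has two inbound links, "d" has none, so "c" ranks higher.
graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = pagerank(graph)
assert scores["c"] > scores["d"]
```

The scores always sum to 1, so they behave like a probability distribution over pages, which is what made the ranking "semi-objective": it depends only on the link structure, not on editorial judgment.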
It was too good to last, and of course it didn’t. Two of its biggest investors showed up one day, demanding a return on their investments. The company’s co-founders had an idea. One of the reasons theirs was such a good search engine was that they intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world.
Cybercrime group ShinyHunters reportedly demanding £400,000 ransom to prevent data being sold
Ticketmaster has been targeted in a cyber-attack, with hackers allegedly offering to sell customer data on the dark web, its parent company, Live Nation, has confirmed.
The ShinyHunters hacking group is reportedly demanding about £400,000 in a ransom payment to prevent the data being sold.
Sharing digitally altered “deepfake” pornographic images will attract a penalty of six years’ jail, or seven years for those who also created them, under proposed new national laws to go before federal parliament next week.
The attorney general, Mark Dreyfus, is expected to introduce legislation on Wednesday to create a new criminal offence of sharing, without consent, sexually explicit images that have been digitally created using artificial intelligence or other forms of technology.
The U.S. National Institute of Standards and Technology (NIST) has taken a big step to address the growing backlog of unprocessed Common Vulnerabilities and Exposures (CVEs) in the National Vulnerability Database (NVD). The institute has hired an external contractor to provide additional processing support for its operations.
The contractor hasn't been named, but NIST said it expects that the move will allow it to return to normal processing rates within the next few months.
Clearing the National Vulnerability Database Backlog
NIST is responsible for managing entries in the NVD. Overwhelmed by the volume of new entries and a backlog of CVEs that has accumulated since February, the institute has awarded a contract to an external party to aid its processing efforts.
"We are confident that this additional support will allow us to return to the processing rates we maintained prior to February 2024 within the next few months," the agency stated. To further alleviate the backlog, NIST is also working closely with CISA, the Cybersecurity and Infrastructure Security Agency, to improve its overall operations and processes. "We anticipate that this backlog will be cleared by the end of the fiscal year," NIST stated.
In its status update, NIST referenced an earlier statement the agency made that it was exploring various means to address the increasing volume of vulnerabilities through the use of modernized technology and improvements to its processes.
(Image source: NIST NVD Status Updates)
"Our goal is to build a program that is sustainable for the long term and to support the automation of vulnerability management, security measurement and compliance," the institute said.
NIST reaffirmed its commitment to maintaining and modernizing the NVD, stating, "NIST is fully committed to preserving and updating this vital national resource, which is crucial for building trust in information technology and fostering innovation."
CISA's 'Vulnrichment' Initiative
In response to the growing NVD backlog at NIST, CISA launched its own initiative, called "Vulnrichment", to help enrich public CVE records. CISA's Vulnrichment project is designed to complement the work of the originating CNA (CVE Numbering Authority) and reduce the burden on NIST's analysts.
CISA said it would use a Stakeholder-Specific Vulnerability Categorization (SSVC) decision tree model to categorize vulnerabilities. The agency will consider factors like exploitation status, technical impact, impact on mission-essential functions, public well-being, and whether the exploitation is automatable. CISA welcomes feedback from the IT cybersecurity community on this effort.
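A decision tree like this can be pictured as a small function over the factors CISA lists. The outcome labels below (Track, Track*, Attend, Act) are SSVC's published decision categories, but the branching logic here is an illustrative simplification written for this article, not CISA's official tree.

```python
def ssvc_decision(exploitation, automatable, technical_impact, mission_wellbeing):
    """
    Hypothetical SSVC-style triage sketch.
    exploitation: "none", "poc" (proof of concept), or "active"
    automatable: bool -- can exploitation be automated at scale?
    technical_impact: "partial" or "total"
    mission_wellbeing: "low", "medium", or "high" impact on
        mission-essential functions and public well-being
    Returns one of the SSVC decisions: Track, Track*, Attend, Act.
    """
    if exploitation == "active":
        if mission_wellbeing == "high":
            # actively exploited and high-stakes: act now if it scales or is severe
            return "Act" if automatable or technical_impact == "total" else "Attend"
        return "Attend"
    if exploitation == "poc":
        if automatable and mission_wellbeing == "high":
            return "Attend"
        return "Track*" if technical_impact == "total" else "Track"
    return "Track"  # no known exploitation: monitor within normal cycles

assert ssvc_decision("active", True, "total", "high") == "Act"
assert ssvc_decision("none", False, "partial", "low") == "Track"
```

The point of such a tree is that each vulnerability gets a reproducible, explainable priority rather than a single opaque severity score.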
By providing enriched CVE data, CISA aims to improve the overall quality and usefulness of the NVD for cybersecurity professionals. "For those CVEs that do not already have these fields populated by the originating CNA, CISA will populate the associated ADP container with those values when there is enough supporting evidence to do so," the agency explained.
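Concretely, modern CVE records (the CVE JSON 5.x shape) carry a container from the originating CNA plus a list of ADP containers that other parties, such as CISA, can populate. The record below is invented for illustration, and the helper simply checks which enrichment fields an ADP container supplies; it is a minimal sketch of the idea, not CISA's tooling.

```python
# A made-up CVE record in the CVE JSON 5.x container layout.
record = {
    "containers": {
        "cna": {"descriptions": [{"lang": "en", "value": "Example flaw."}]},
        "adp": [
            {
                "providerMetadata": {"shortName": "CISA-ADP"},
                "metrics": [{"cvssV3_1": {"baseScore": 8.8}}],
                "problemTypes": [
                    {"descriptions": [{"cweId": "CWE-79", "lang": "en"}]}
                ],
            }
        ],
    }
}

def enriched_fields(record):
    """Return which enrichment fields any ADP container provides."""
    found = set()
    for adp in record["containers"].get("adp", []):
        if adp.get("metrics"):       # severity scores (e.g. CVSS)
            found.add("cvss")
        if adp.get("problemTypes"):  # weakness classification (CWE)
            found.add("cwe")
    return found

assert enriched_fields(record) == {"cvss", "cwe"}
```

A consumer of the NVD can thus fall back to ADP-supplied CVSS and CWE data when the CNA's own container leaves those fields empty.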
As NIST and CISA work to address the current challenges, they have pledged to keep the community informed of their progress as well as on future modernization plans.
Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.
After new feature tells people to eat rocks or add glue to pizza sauce, company to restrict which searches return summaries
Google announced on Thursday that it would refine and retool its summaries of search results generated by artificial intelligence, posting a blog explaining why the feature was returning bizarre and inaccurate answers that included telling people to eat rocks or add glue to pizza sauce. The company will reduce the scope of searches that will return an AI-written summary.
Google has added several restrictions on the types of searches that would generate AI Overview results, the company’s head of search, Liz Reid, said, as well as “limited the inclusion of satire and humor content”. The company is also taking action against what it described as a small number of AI Overviews that violate its content policies, which it said occurred in fewer than 1 in 7m unique search queries where the feature appeared.
Imagine this: You lost your phone, or had it stolen. Would you be comfortable with a police officer who picked it up rummaging through the phone’s contents without any authorization or oversight, thinking you had abandoned it? We’ll hazard a guess: hell no, and for good reason.
Our cell phones and similar digital devices open a window into our entire lives, from messages we send in confidence to friends and family, to intimate photographs, to financial records, to comprehensive information about our movements, habits, and beliefs. Some of this information is intensely private in its own right; in combination, it can disclose virtually everything about a modern cell phone user.
If it seems like common sense that law enforcement shouldn’t have unfettered access to this information whenever it finds a phone left unattended, you’ll be troubled by an argument that government lawyers are advancing in a pending case before the Ninth Circuit Court of Appeals, United States v. Hunt. In Hunt, the government claims it does not need a warrant to search a phone that it deems to have been abandoned by its owner because, in ditching the phone, the owner loses any reasonable expectation of privacy in all its contents. As a basis for this claim, the government cites an exception to the Fourth Amendment’s warrant requirement that applies to searches of abandoned property. But that rule was developed years ago in the context of property that is categorically different, and much less revealing, than the reams of diverse and highly sensitive information that law enforcement can access by searching our digital devices.
The Supreme Court has cautioned (https://www.supremecourt.gov/opinions/17pdf/16-402_h315.pdf#page=24) against uncritically extending pre-digital doctrines to modern technologies, like cell phones, that gather in one place so many of the privacies of life. In a friend-of-the-court brief (https://www.aclu.org/documents/ninth-circuit-cell-phone-abandonment-amicus-hunt) in Hunt, the ACLU and our coalition partners urge the Ninth Circuit to heed this call, and hold that even if the physical device may properly be considered abandoned, the myriad records that reside on a cell phone remain subject to full constitutional protection. Police should have to get a warrant before searching the data on a phone they find separated from its owner.
Cases about abandoned property are a poor fit for digital-age privacy
As the Supreme Court recognized (https://supreme.justia.com/cases/federal/us/573/13-132/case.pdf) more than 10 years ago, when the storage capacity of the median cell phone was a great deal less than it is today, advances in digital technology threaten to erode our privacy against government intrusion if courts apply to the troves of information on a cell phone the same rule they would use to analyze a search of a cigarette pack. In a case called Riley v. California, the Supreme Court held that even though police may warrantlessly search items in a suspect’s pockets during arrest to avoid the destruction of evidence or identify danger to the arresting officers, a warrantless inspection of the information on an arrestee’s phone went too far. Why? Because phones, “[w]ith all they may contain and all they may reveal,” are different.
Here too, the information on a cell phone is qualitatively and quantitatively unlike the items that underpin precedents permitting warrantless searches of abandoned property. The most recent of those precedents was decided in 1988, long before cell phones became a “pervasive and insistent part of daily life” (https://supreme.justia.com/cases/federal/us/573/13-132/case.pdf). In case you’re keeping score, 1988 was the year Motorola debuted its first “bag phone,” an early transportable telephone the size of a briefcase (https://www.thehenryford.org/collections-and-research/digital-collections/artifact/162235#slide=gs-212075) that needed to be lugged around with a separate battery and transceiver. In that case, the Supreme Court held that people lose their legal privacy in items, like curbside trash, that they knowingly and voluntarily leave out for any member of the public to see. But when you fail to reclaim a lost or abandoned phone, do you knowingly and voluntarily renounce all of your data, too? Our brief argues that the Ninth Circuit should not use the same reasoning that has historically applied to garbage left out for collection (https://tile.loc.gov/storage-services/service/ll/usrep/usrep486/usrep486035/usrep486035.pdf) and items discarded in a hotel wastepaper basket after check-out (https://tile.loc.gov/storage-services/service/ll/usrep/usrep362/usrep362217/usrep362217.pdf#page=24) to impute to a cell phone’s owner an intent to give up all the revealing information on their device, just because it was left behind.
Cell phones contain vast amounts of diverse and revealing information, unlike other categories of objects
The immense storage capacity of modern cell phones allows people to carry in their palm a volume and variety of private information that is genuinely unprecedented in cases concerning searches of abandoned property. Our cell phones provide access to information comparable in quantity and breadth to what police might glean from a thorough search of a house. Unlike a house, though, a cell phone is relatively easy to lose. You carry it with you almost all the time. It can fall between seat cushions or slip out of a loose pocket. You might leave it at the check-out desk after making a purchase or forget it on the bus as you hasten to make your stop. Even if you eventually give up looking for the device, thereby “abandoning” it, this doesn’t evince any subjective intent to relinquish to whoever might pick it up all the information the phone can store or access through the internet.
Cloud backups mean that the data on a phone often isn’t lost even when the device goes missing
An additional reason that the privacy of the information on a cell phone shouldn’t hinge on a person’s ongoing possession of their device is that you can still access and control much of the data on your phone independently of the device itself. While modern cell phones store extraordinary and growing amounts of data locally, a lot of this information resides also on remote servers — think of the untold messages, contacts, notes, and images you may have backed up on iCloud or its equivalents. If you have access to a computer or tablet, all this information remains yours to view, edit, and delete whether or not your phone is handy. Trade in your cell phone, and you can seamlessly download this information onto a new device, reviewing voicemail messages and carrying on existing conversations in text without interruption. In this sense, a cell phone is more properly analogized to a house key than a house, something we use to access vast amounts of information that’s largely stored elsewhere. It would be absurd to suggest that a person intends to open up their house for unrestrained searches by police whenever they drop their house key. Yet this is essentially the position the government in the Hunt case argued, successfully, in the trial court: Because the defendant discarded his phone, any piece of information stored on that phone was fair game, regardless of whether it was backed up.
The Ninth Circuit has an opportunity in Hunt to correct the trial court’s error and clarify that the rule governing police searches of the information on a lost or abandoned cell phone does not defy common-sense intuitions about what information we mean to give up when we lose track of our devices. The information on your cell phone is highly private and revealing. If the police want authority to review it, the Constitution requires of them something simple — get a warrant.
In “Living off the Land attacks,” adversaries use USB devices to infiltrate industrial control systems. Cyberthreats from silent residency attacks put critical infrastructure facilities at risk.
ShinyHunters stole information including bank and credit card numbers, as well as staff HR details
Hackers are attempting to sell confidential information including the bank and credit card numbers of millions of Santander customers to the highest bidder.
ShinyHunters posted an advert on a hacker forum for the data, which it says also includes staff HR details, with an asking price of $2m (£1.6m). It is the same organisation that claims to have hacked Ticketmaster.
Showrunner will let users generate episodes with prompts, which could be an alarming next step or a fleeting novelty
One of the key strategies of streaming services is to keep you in front of a screen for as long as possible. As soon as one episode of a show you’re watching ends, the next one pops up automatically. But this approach has its limits. After all, when a series ends, Netflix will try to autoplay another series that it thinks you’ll like, but it has a terrible success rate. Maybe the tone of the suggested show is wrong, or maybe it’s too exhausting to be dumped into the sea of exposition that a new show brings. Maybe it’s just too jarring to be pulled out of one world and dumped straight into another without any space to breathe.
You know what would fix that? If Netflix gave you the chance to automatically create a new episode of the show you were already watching. You’d stay there forever, wouldn’t you? It would be wonderful. Ladies and gentlemen, you will be thrilled to learn that this glorious technology now exists.
Videos on Douyin give people step-by-step instructions on how to get to the US – and then leave them stranded upon arrival
This article is copublished with Documented, a multilingual news site about immigrants in New York, and the Markup, a non-profit, investigative newsroom that challenges technology to serve the public good.
Xiong couldn’t pinpoint exactly what finally prompted him to leave his home town in China, the only place he had lived for 32 years, and embark on the arduous journey on foot through Central and South America to reach the United States in 2023. However, he clearly remembered the catalyst that first ignited the idea.
Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences
OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.
Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.
US official says policy change relates to ‘counter-fire purposes’ and prohibits long-range attacks inside of Russia
Joe Biden has allowed Ukraine to use some US-made weapons to strike across one part of the Russian border, allowing Kyiv’s forces to defend against an offensive aimed at the city of Kharkiv and relaxing an important constraint on Ukraine’s ability to defend itself.
“The president recently directed his team to ensure that Ukraine is able to use US-supplied weapons for counter-fire purposes in the Kharkiv region so Ukraine can hit back against Russian forces that are attacking them or preparing to attack them,” a US official said.
‘World’s largest botnet’ – spread through infected emails – taken down through coordinated police action among several countries
US authorities announced on Thursday that they had dismantled the “world’s largest botnet ever”, allegedly responsible for nearly $6bn in Covid insurance fraud.
The Department of Justice arrested a Chinese national, YunHe Wang, 35, and seized luxury watches, more than 20 properties and a Ferrari. The network allegedly operated by Wang and others, dubbed “911 S5”, spread ransomware via infected emails from 2014 to 2022. Wang allegedly accrued a fortune of $99m by licensing his malware to other criminals. The network allegedly pulled in $5.9bn in fraudulent unemployment claims from Covid relief programs.
Israeli-made Pegasus cyberweapon used in hacking attempts on at least seven journalists and activists in EU
At least seven journalists and activists who have been vocal critics of the Kremlin and its allies have been targeted inside the EU by a state using Pegasus, the hacking spyware made by Israel’s NSO Group, according to a new report by security researchers.
The targets of the hacking attempts – who were first alerted to the attempted cyber-intrusions after receiving threat notifications from Apple on their iPhones – include Russian, Belarusian, Latvian and Israeli journalists and activists inside the EU.
Florida scientists use AI and virtual reality to create 3D renderings of brain formations of mice, whose neuron types are like humans’
Neuroscientists at a Florida university have pioneered a technologically advanced method of brain mapping they believe can help demystify Alzheimer’s disease, autism and related disorders, and offer hope of more effective treatments for traumatic brain injuries.
A team at the University of South Florida’s (USF) auditory development and connectomics laboratory is using virtual reality (VR) and artificial intelligence to create a high-definition visual timeline of the journey of billions of neurons in the developing brains of newborn mice.
Celebrity posts of graphic following IDF strike help make it among most-shared content of Israel-Gaza war
An image depicting refugee tents spelling out the phrase “all eyes on Rafah” has become one of the most-shared pieces of content relating to the Israel-Gaza war, spreading rapidly on social media this week. The graphic, which was generated using artificial intelligence, had been shared on Instagram more than 45m times by Wednesday.
The image and reactions to it have also gained traction outside Instagram. On TikTok, one creator’s video commenting on the image amassed 10m plays within 24 hours of being posted. After the image was shared on a pro-Palestinian account on X on Monday, the post gained 8m views and 188,000 retweets within days.
Pack One Bag Widely available, episodes weekly from 5 Jun
“If fascism takes over your country, do you stay or do you try to flee?” David Modigliani opens this beautiful podcast about his family history with the question his Italian grandfather Franco faced. Modigliani reads love letters between his nonna Serena and Franco, learning about their escape to the US, where Franco won a Nobel prize. Then, executive producer Stanley Tucci brings great-grandfather Giulio into the story. Hannah Verdier
Big tech is playing its part in reaching net zero targets, but its vast new datacentres are run at huge cost to the environment
Mariana Mazzucato is professor of economics at UCL, and director of the Institute for Innovation and Public Purpose
When you picture the tech industry, you probably think of things that don’t exist in physical space, such as the apps and internet browser on your phone. But the infrastructure required to store all this information – the physical datacentres housed in business parks and city outskirts – consumes massive amounts of energy. Despite its name, the infrastructure used by the “cloud” accounts for more global greenhouse emissions than commercial flights. In 2018, for instance, the 5bn YouTube hits for the viral song Despacito used the same amount of energy it would take to heat 40,000 US homes annually.
There is a hugely environmentally destructive side to the tech industry. While it has played a big role in reaching net zero, giving us smart meters and efficient solar, it’s critical that we turn the spotlight on its environmental footprint. Large language models such as ChatGPT are some of the most energy-guzzling technologies of all. Research suggests, for instance, that about 700,000 litres of water could have been used to cool the machines that trained ChatGPT-3 at Microsoft’s data facilities. It is hardly news that the tech bubble’s self-glorification has obscured the uglier sides of this industry, from its proclivity for tax avoidance to its invasion of privacy and exploitation of our attention span. The industry’s environmental impact is a key issue, yet the companies that produce such models have stayed remarkably quiet about the amount of energy they consume – probably because they don’t want to spark our concern.
Mariana Mazzucato is professor in the economics of innovation and public value at University College London, where she is founding director of the UCL Institute for Innovation & Public Purpose
Group wants big tech social media firms to pay up to £40m a year to reimburse customers after years of shouldering cost of fraud
A leading City lobby group is calling on the next government to bring in scams legislation that forces big tech and social media companies to pay up to £40m a year to reimburse customers and fight fraud on their platforms.
The demand came in a ‘financial services manifesto’ released by UK Finance, which represents banks, payments companies and other financial firms.
It’s been a busy week in the world of artificial intelligence. OpenAI found itself in hot water with Scarlett Johansson after launching its new chatbot, Sky, drawing comparisons to the Hollywood star’s character in the sci-fi film Her. In South Korea, the second global AI summit took place, and a report from the Alan Turing Institute explored how AI could influence elections. The Guardian’s UK technology editor, Alex Hern, tells Madeleine Finlay about what’s been happening
Wall Street Journal reports pair have had several phone calls recently and that Musk could assist if Trump wins another term
Donald Trump has floated a possible advisory role for the tech billionaire Elon Musk if he were to retake the White House next year, according to a new report from the Wall Street Journal.
The two men, who once had a tense relationship, have had several phone calls a month since March as Trump looks to court powerful donors and Musk seeks an outlet for his policy ideas, the newspaper said, citing several anonymous sources familiar with their conversations.
Up to 28,000 people at tech giant in South Korea will strike for one day on 7 June after negotiations over wages stall
A major union representing tens of thousands of people at the South Korean tech giant Samsung Electronics said on Wednesday that workers will go on strike for the first time, potentially threatening key global semiconductor supply chains.
A spokesperson said union members, around 20% of the company’s workforce, or 28,000 people, would use annual leave to strike for one day on 7 June, leaving the door open for a potential general strike down the road.
The action comes after nearly 200 Meta employees sign open letter to Mark Zuckerberg demanding end to alleged censorship
As Meta held its annual shareholder meeting online Wednesday, human rights groups coordinated online protests calling on the company to put an end to what they call systemic censorship of pro-Palestinian content, both on the company’s social networks and within its own workforce.
The day of action comes after nearly 200 Meta employees signed a letter to Mark Zuckerberg this month demanding the company put an end to alleged censorship of internal voices advocating for Palestinian rights. The employees called for more transparency around alleged biases on public facing platforms and issued a statement urging for an immediate, permanent ceasefire in Gaza.
Security incident at pension scheme being taken ‘extremely seriously’, but broadcaster says there is no evidence of a ransomware attack
The BBC has launched an investigation after the details of more than 25,000 current and former employees were exposed in a data breach.
The corporation’s pension scheme wrote to members on Wednesday to say their details had been stolen in a data security incident that it was taking “extremely seriously”.
A Hamilton-esque performance extolling the virtues of design software was exactly the wrong kind of corny
The next time you’re sitting through a company-wide meeting, half-listening to a leader drone on about updates or product launches (and hoping they don’t announce layoffs or budget cuts), remember this: at least they’re not rapping.
That’s what happened at Canva Create, a summit held in Los Angeles last week, in honor of Canva, a graphic design company known for helping non-designers produce good-enough flyers to advertise a yard sale or middle school talent show. In LA, Melanie Perkins, co-founder of the $40bn Australian brand, spoke to attendees about “brand-building, maintaining a strong company culture and scaling operations”, per Variety. (Something she knows a lot about: Disney’s CEO, Bob Iger, who also spoke at the summit, is an investor and board member of the platform.)
Ellen Roome says firms should be required to hand over data in case it can help parents understand why their child died
A woman whose 14-year-old son killed himself is calling for parents to be given the legal right to access their child’s social media accounts to help understand why they died.
Ellen Roome has gathered more than 100,000 signatures on a petition calling for social media companies to be required to hand over data to parents after a child has died.
In the UK, the youth suicide charity Papyrus can be contacted on 0800 068 4141 or email pat@papyrus-uk.org, and in the UK and Ireland Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, the National Suicide Prevention Lifeline is at 988 or chat for support. You can also text HOME to 741741 to connect with a crisis text line counselor. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.
Ryan Salame is first of Sam Bankman-Fried’s lieutenants to get jail time for his role in 2022 collapse of cryptocurrency exchange
A federal judge on Tuesday sentenced former FTX executive Ryan Salame to more than seven years in prison, the first of the lieutenants of failed cryptocurrency mogul Sam Bankman-Fried to receive jail time for their roles in the 2022 collapse of the cryptocurrency exchange.
Salame, 30, was a high-ranking executive at FTX for most of the exchange’s existence and, up until its collapse, was the co-CEO of FTX Digital Markets. He pleaded guilty last year to making unlawful US campaign contributions and to operating an unlicensed money-transmitting business.
Javier Milei to hold private talks with Sundar Pichai and Sam Altman as Argentina faces worst economic crisis in decades
Javier Milei, Argentina’s president, is set to meet with the leaders of some of the world’s largest tech companies in Silicon Valley this week. The far-right libertarian leader will hold private talks with Sundar Pichai of Google, Sam Altman of OpenAI, Mark Zuckerberg of Meta and Tim Cook of Apple.
Milei also met last month with Elon Musk, who has become one of the South American president’s most prominent cheerleaders and repeatedly shared his pro-deregulation, anti-social justice message on Twitter. Peter Thiel, the tech billionaire, has also twice visited Milei, flying down to Buenos Aires to speak with him in February and May of this year.
What does success look like for the second global AI summit? As the great and good of the industry (and me) gathered last week at the Korea Institute of Science and Technology, a sprawling hilltop campus in eastern Seoul, that was the question I kept asking myself.
If we’re ranking the event by the quantity of announcements generated, then it’s a roaring success. In less than 24 hours – starting with a virtual “leader’s summit” at 8pm and ending with a joint press conference with the South Korean and British science and technology ministers – I counted no fewer than six agreements, pacts, pledges and statements, all demonstrating the success of the event in getting people around the table to hammer out a deal.
The first 16 companies have signed up to voluntary artificial intelligence safety standards introduced at the Bletchley Park summit, Rishi Sunak has said on the eve of the follow-up event in Seoul.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”
Those institutes will begin sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science of AI safety.
At the first “full house” meeting of those countries on Wednesday, [Michelle Donelan, the UK technology secretary] warned the creation of the network was only a first step. “We must not rest on our laurels. As the pace of AI development accelerates, we must match that speed with our own efforts if we are to grip the risks and seize the limitless opportunities for our public.”
Twenty-seven nations, including the United Kingdom, the Republic of Korea, France, the United States and the United Arab Emirates, along with the European Union, have signed up to developing proposals for assessing AI risks over the coming months, in a set of agreements that bring the AI Seoul summit to an end. The Seoul Ministerial Statement sees countries agreeing for the first time to develop shared risk thresholds for frontier AI development and deployment, including agreeing when model capabilities could pose “severe risks” without appropriate mitigations. These could include helping malicious actors to acquire or use chemical or biological weapons, and AI’s ability to evade human oversight, for example through manipulation and deception or autonomous replication and adaptation.
Grace Wolstenholme spent five months trying to persuade Meta to take down the fraudulent page, which was trying to make money by copying her posts
A young social media star with cerebral palsy says Facebook refused to take action after scammers used her content to set up a fake account and make money from her fans.
Grace Wolstenholme, 20, who has 1.3m followers on TikTok, says she has lost income from not posting videos after she was advised by the police to stop. Content she put on TikTok and on Instagram was being stolen and posted on Facebook by someone pretending to be her.
The Bletchley Park artificial intelligence summit in 2023 was a landmark event in AI regulation simply by virtue of its existence.
Between the event’s announcement and its first day, the mainstream conversation had changed from a tone of light bafflement to a general agreement that AI regulation may be worth discussing.
Bren Pointer says the American pianist Keith Jarrett was right to disallow photography during his performances. Plus letters from Barry and Joy Norman, Meirion Bowen and Joan Lewis
Sadly, on many occasions, a flash from a phone in the audience would happen and subsequently either the concert would come to an abrupt end or there would be a lengthy delay before the performance would resume. The wishes of the musician were not respected.
Journalist Maria Ressa named Mark Zuckerberg and Elon Musk in speech at Hay literary festival in Powys
“Tech bros” such as Mark Zuckerberg and Elon Musk are “the largest dictators”, Maria Ressa, who won the Nobel peace prize in 2021 for her defence of media freedom, has said.
The Filipino-American journalist has spent a number of years fighting charges filed during the administration of then Philippine president Rodrigo Duterte, but said Duterte “is a far smaller dictator compared to Mark Zuckerberg, and now let me throw in Elon Musk”.
Funding round values artificial intelligence startup at $18bn before investment, says multibillionaire
Elon Musk’s artificial intelligence company xAI has closed a $6bn (£4.7bn) investment round that will make it among the best-funded challengers to OpenAI.
The startup is only a year old, but it has rapidly built its own large language model (LLM), the technology underpinning many of the recent advances in generative artificial intelligence capable of creating human-like text, pictures, video, and voices.
Hollywood star’s claim ChatGPT update used an imitation of her voice highlights tensions over rapidly accelerating technology
When OpenAI’s new voice assistant said it was “doing fantastic” in a launch demo this month, Scarlett Johansson was not.
The Hollywood star said she was “shocked, angered and in disbelief” that the updated version of ChatGPT, which can listen to spoken prompts and respond verbally, had a voice “eerily similar” to hers.
Episode 331 of the Shared Security Podcast discusses privacy and security concerns related to two major technological developments: Microsoft’s new Windows feature “Recall”, part of Copilot+, which captures desktop screenshots for AI-powered search tools, and Slack’s policy of using user data to train machine learning features, with users opted in by […]
A TikTok ban threatens to destroy millions of jobs and silence diverse voices. It would change the world for the worse
I’m a TikTok creator. I’ve used TikTok to build a multimillion dollar business, focused on sharing interesting things I’ve learned in life and throughout my years in college. TikTok allowed me to create a community and help further my goal of educating the public. I always feared that one day, it would be threatened. And now, it’s happening.
Why does the US government want to ban TikTok? The reasons given include TikTok’s foreign ownership and its “addictive” nature, but I suspect that part of the reason is that the app primarily appeals to younger generations who often hold political and moral views that differ significantly from those of older generations, including many of today’s politicians.
Dominic Andre is a content creator and the CEO of The Lab
One computer scientist says we should embrace human-machine relationships, but other experts are more cautious
Hollywood may have warned about the perils of striking up relationships with artificial intelligence, but one computer scientist says we may be missing a trick if we do not embrace the positives that human-machine relationships have to offer.
Despite the travails of Joaquin Phoenix’s introverted and soon-to-be-divorced protagonist in the 2013 movie Her, one professor says we should be open to the comforts that chatbots can provide.
Petrolheads are quick to scorn the idea of electric car racing, but the series’ chief executive is sure that time, technology – and even geography – are on his side
Jeff Dodds has been a fan of Formula One “all my life”, he says. That is probably a good thing because, as chief executive of the electric racing series Formula E, he finds the comparison with its fossil-fuelled cousin constant.
So he takes it head-on. Such is the growth and improvement in technology in Formula E that one day, he says, it is “realistic that a question will be asked about whether both can exist together”. Talking to the Observer in the race company’s west London headquarters, he adds that maybe one day, as Formula E develops, “they won’t [both exist]”.
OpenAI’s unsubtle approximation of the actor’s voice for its new GPT-4o software was a stark illustration of the firm’s high-handed attitude
On Monday 13 May, OpenAI livestreamed an event to launch a fancy new product – a large language model (LLM) dubbed GPT-4o – that the company’s chief technology officer, Mira Murati, claimed to be more user-friendly and faster than boring ol’ ChatGPT. It was also more versatile, and multimodal, which is tech-speak for being able to interact in voice, text and vision. Key features of the new model, we were told, were that you could interrupt it in mid-sentence, that it had very low latency (delay in responding) and that it was sensitive to the user’s emotions.
Viewers were then treated to the customary toe-curling spectacle of “Mark and Barret”, a brace of tech bros straight out of central casting, interacting with the machine. First off, Mark confessed to being nervous, so the machine helped him to do some breathing exercises to calm his nerves. Then Barret wrote a simple equation on a piece of paper and the machine showed him how to find the value of X, after which he showed it a piece of computer code and the machine was able to deal with that too.
Scientists have found that immersing kids in computer games can train their brains to localise sounds better
Scientists have recruited an unusual ally in their efforts to help children overcome profound deafness. They are using computer games to boost the children’s ability to localise sounds and understand speech.
The project is known as Bears – for Both Ears – and it is aimed at youngsters who have been given twin cochlear implants because they were born with little or no hearing.
Max Tegmark argues that the downplaying is not accidental and threatens to delay, until it’s too late, the strict regulations needed
Big tech has succeeded in distracting the world from the existential risk to humanity that artificial intelligence still poses, a leading scientist and AI campaigner has warned.
Speaking with the Guardian at the AI Summit in Seoul, South Korea, Max Tegmark said the shift in focus from the extinction of life to a broader conception of safety of artificial intelligence risked an unacceptable delay in imposing strict regulation on the creators of the most powerful programs.
Commons education committee chair says online world poses serious dangers and parents face uphill struggle
MPs have urged the next government to consider a total ban on smartphones for under-16s and a statutory ban on mobile phone use in schools as part of a crackdown on screen time for children.
Members of the House of Commons education committee made the recommendations in a report into the impact of screen time on education and wellbeing, which also called on ministers to raise the threshold for opening a social media account to 16.
There were times during Horizon inquiry when victims of scandal struggled to keep composure as former chief executive pleaded ignorance
It was difficult for the victims attending the public inquiry into the Horizon scandal on the fifth floor of Aldwych House in central London to dissent from the conclusion of Moya Greene, a former chief executive of Royal Mail and Paula Vennells’s boss until the Post Office split off in 2012.
“I think you knew,” Greene had written to Vennells in January, according to a text message published by the inquiry this week.