AI and the Indian Election

As India concluded the world’s largest election on June 5, 2024, with over 640 million votes counted, observers could assess how the various parties and factions used artificial intelligence technologies, and what lessons that use holds for the rest of the world.

The campaigns made extensive use of AI, including deepfake impersonations of candidates, celebrities and dead politicians. By some estimates, millions of Indian voters viewed deepfakes.

But, despite fears of widespread disinformation, for the most part the campaigns, candidates and activists used AI constructively in the election. They used AI for typical political activities, including mudslinging, but primarily to better connect with voters.

Deepfakes without the deception

Political parties in India spent an estimated US$50 million on authorized AI-generated content for targeted communication with their constituencies this election cycle. And it was largely successful.

Indian political strategists have long recognized the influence of personality and emotion on their constituents, and they have started using AI to bolster their messaging. Up-and-coming AI companies like The Indian Deepfaker, which started out serving the entertainment industry, quickly responded to this growing demand for AI-generated campaign material.

In January, Muthuvel Karunanidhi, former chief minister of the southern state of Tamil Nadu for two decades, appeared via video at his party’s youth wing conference. He wore his signature yellow scarf, white shirt and dark glasses, and struck his familiar stance, head slightly bent sideways. But Karunanidhi died in 2018. His party authorized the deepfake.

In February, the All-India Anna Dravidian Progressive Federation party’s official X account posted an audio clip of Jayaram Jayalalithaa, the iconic superstar of Tamil politics colloquially called “Amma” or “Mother.” Jayalalithaa died in 2016.

Meanwhile, voters received calls from their local representatives to discuss local issues—except the leader on the other end of the phone was an AI impersonation. Bharatiya Janata Party (BJP) workers like Shakti Singh Rathore have been frequenting AI startups to send personalized videos to specific voters over WhatsApp about the government benefits they received and to ask for their vote.

Multilingual boost

Deepfakes were not the only manifestation of AI in the Indian elections. Long before the election began, Indian Prime Minister Narendra Modi addressed a tightly packed crowd celebrating links between the state of Tamil Nadu in the south of India and the city of Varanasi in the northern state of Uttar Pradesh. Instructing his audience to put on earphones, Modi proudly announced the launch of his “new AI technology” as his Hindi speech was translated to Tamil in real time.

In a country with 22 official languages and almost 780 unofficial recorded languages, the BJP adopted AI tools to make Modi’s personality accessible to voters in regions where Hindi is not easily understood. Since 2022, Modi and his BJP have been using the AI-powered tool Bhashini, embedded in the NaMo mobile app, to translate Modi’s speeches with voiceovers in Telugu, Tamil, Malayalam, Kannada, Odia, Bengali, Marathi and Punjabi.
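
The article doesn’t describe how Bhashini works internally, but speech-to-speech translation systems of this kind typically chain three stages: speech recognition, text translation, and synthesis in the speaker’s (cloned) voice. The sketch below shows only that general structure; every function and type in it is a hypothetical stand-in, not Bhashini’s actual API.

```python
# A structural sketch of a speech-to-speech translation pipeline
# (speech recognition -> machine translation -> voice-cloned synthesis).
# All components are hypothetical stubs for illustration; this is not
# Bhashini's implementation.

from dataclasses import dataclass


@dataclass
class TranslatedSpeech:
    source_text: str  # transcript of the original Hindi audio
    target_text: str  # translated text in the target language
    audio: bytes      # synthesized speech in the speaker's cloned voice


def transcribe(audio: bytes) -> str:
    """Speech recognition: Hindi audio -> Hindi text (stub)."""
    raise NotImplementedError


def translate(text: str, target_lang: str) -> str:
    """Machine translation: Hindi text -> target-language text (stub)."""
    raise NotImplementedError


def synthesize(text: str, voice_profile: str) -> bytes:
    """Text-to-speech using a cloned voice profile (stub)."""
    raise NotImplementedError


def speech_to_speech(audio: bytes, target_lang: str,
                     voice_profile: str) -> TranslatedSpeech:
    """Run the full pipeline on one chunk of audio."""
    source_text = transcribe(audio)
    target_text = translate(source_text, target_lang)
    dubbed_audio = synthesize(target_text, voice_profile)
    return TranslatedSpeech(source_text, target_text, dubbed_audio)
```

A live setting like the Varanasi rally would presumably run this loop over short, successive audio chunks, so the translated voiceover trails the speaker by only a few seconds.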

As part of their demos, some AI companies circulated their own viral versions of Modi’s famous monthly radio show “Mann Ki Baat,” which loosely translates to “From the Heart,” voice cloned into regional languages.

Adversarial uses

Indian political parties doubled down on online trolling, using AI to augment their ongoing meme wars. Early in the election season, the Indian National Congress released a short clip to its 6 million followers on Instagram, taking the title track from a new Hindi music album named “Chor” (thief). The video grafted Modi’s digital likeness onto the lead singer and cloned his voice with reworked lyrics critiquing his close ties to Indian business tycoons.

The BJP retaliated with its own video, on its 7-million-follower Instagram account, featuring a supercut of Modi campaigning on the streets mixed with clips of his supporters, set to an old patriotic Hindi song by the famous singer Mahendra Kapoor, who died in 2008 but was resurrected with AI voice cloning.

Modi himself quote-tweeted an AI-created video of him dancing—a common meme that alters footage of rapper Lil Yachty on stage—commenting “such creativity in peak poll season is truly a delight.”

In some cases, the violent rhetoric in Modi’s campaign that put Muslims at risk and incited violence was conveyed using generative AI tools, but the harm can be traced back to the hateful rhetoric itself and not necessarily the AI tools used to spread it.

The Indian experience

India is an early adopter, and the country’s experiments with AI serve as an illustration of what the rest of the world can expect in future elections. The technology’s ability to produce nonconsensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible.

The Indian election’s embrace of AI, which began with entertainment, political meme wars, emotional appeals, resurrected politicians and persuasion through personalized phone calls to voters, has opened a pathway for AI’s role in participatory democracy.

The surprise outcome of the election, in which the BJP failed to win its predicted parliamentary majority and India returned to a deeply competitive political system, highlights the potential for AI to play a positive role in deliberative democracy and representative governance.

Lessons for the world’s democracies

It’s a goal of any political party or candidate in a democracy to have more targeted touch points with their constituents. The Indian elections have shown a unique attempt at using AI for more individualized communication across linguistically and ethnically diverse constituencies, and making their messages more accessible, especially to rural, low-income populations.

AI and the future of participatory democracy could make constituent communication not just personalized but also a dialogue, so voters can share their demands and experiences directly with their representatives—at speed and scale.

India can be an example of how to take this new fluency in AI-assisted party-to-people communication and move it beyond politics. The government is already using these platforms to provide government services to citizens in their native languages.

If used safely and ethically, this technology could be an opportunity for a new era in representative governance, especially for the needs and experiences of people in rural areas to reach Parliament.

This essay was written with Vandinika Shukla and previously appeared in The Conversation.

‘Olympics Has Fallen’ – Russian Government Attempts to Discredit 2024 Paris Olympics

Researchers from Microsoft have observed a year-long coordinated campaign by Russian threat actors to influence the public's view of the upcoming 2024 Paris Olympics. The chief effort of these influence operations is an AI-generated Tom Cruise movie titled "Olympics Has Fallen," parodying the title of the Hollywood movie "Olympus Has Fallen." In the Russian AI movie, a voice and image impersonation of Tom Cruise appears to discredit the leadership of the International Olympic Committee. Along with the movie, the influence operations have also disparaged France, French President Emmanuel Macron, and the hosting of the upcoming games in Paris.

Use of AI in Influence Campaigns

These operations were linked to the Russian-affiliated threat actors Storm-1679 and Storm-1099. In an effort to sow disinformation and denigrate the International Olympic Committee (IOC), these groups distributed fake videos and spoofed news reports employing AI-generated content, even stoking fears of violence in Paris. Storm-1679 was behind the distribution of the feature-length fake documentary "Olympics Has Fallen" last summer. The movie was produced with an AI voice impersonating the American actor Tom Cruise and demonstrated slick, Hollywood-style production values; it also featured an official website and purported to be from Netflix. The researchers observed evolving tactics throughout the campaign, blending traditional forgeries with cutting-edge AI capabilities. Distribution of the film included AI-generated fake celebrity endorsements edited into legitimate videos from Cameo, a service where fans can pay celebrities to read personalized messages or record custom content. These deceptive ads made it appear that the celebrities promoted the film's anti-Olympic rhetoric.

Stoking Fears of Violence at 2024 Paris Olympics

Along with spreading anti-Olympics rhetoric through AI-generated deepfakes, the campaign also attempts to sow discord and stoke public fear of violence or terrorist incidents during the games. The fearmongering may be an attempt to reduce attendance and viewership of the upcoming games. These operations include:
  • Spoofed videos under the cover of legitimate news outlets like Euronews and France24 claiming that a high percentage of the event's tickets had been returned over security concerns.
  • Fabricated warnings from the CIA and French intelligence services about potential terror threats that are targeting the event.
  • Fake graffiti images suggesting a repeat of the 1972 Munich Olympics massacre that targeted Israeli athletes. Researchers observed a video featuring imagery from the incident, amplified further by pro-Russian bot accounts.

The researchers warn that these influence efforts could intensify as the July 26 Opening Ceremony draws near. They predict the campaign may shift to more automated tactics, such as bot networks that amplify messaging across social media platforms. The report notes that these threat actors previously targeted the Ukrainian refugee community in the U.S. and Europe with similarly spoofed news content designed to sow fear and spread disinformation.

Previous Russian Influence Attempts on the Olympic Games

While psychological tactics dominate the campaign, the researchers highlight that it adds advanced technology to the long history of Russian disinformation operations against the Olympics. Russia's predecessor state, the Soviet Union, tried to stoke fears before the 1984 Summer Olympics in Los Angeles by spreading pamphlets in Zimbabwe, Sri Lanka and South Korea claiming that non-white competitors would be targeted for violence. In 2016, Russian threat actors hacked the World Anti-Doping Agency and leaked sensitive medical information about the American athletes Serena Williams, Venus Williams and Simone Biles. In 2018, the "Olympic Destroyer" malware attack disrupted IT systems at the Winter Olympics in South Korea, and in 2020 the U.S. Department of Justice charged two Russian GRU officers with responsibility for that attack.

These incidents, along with the recent sophisticated influence campaigns, demonstrate the Russian government's efforts to undercut and defame international competitions in the eyes of potential attendees and global spectators, largely due to its own long history of tensions with the organizations that oversee these events.

AI Threats, Cybersecurity Uses Outlined by Gartner Analyst

AI is a long way from maturity, but there are still offensive and defensive uses of AI technology that cybersecurity professionals should be watching, according to a presentation today at the Gartner Security & Risk Management Summit in National Harbor, Maryland. Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that the large language models (LLMs) that have been getting so much attention are “not intelligent.” He cited one example in which ChatGPT was recently asked what the most severe CVE (common vulnerabilities and exposures) of 2023 was, and the chatbot’s response was essentially nonsense (screenshot below).

(Image: ChatGPT security prompt and response. Source: Gartner)

Deepfakes Top AI Threats

Despite the lack of sophistication in LLM tools thus far, D’Hoinne noted one area where AI threats should be taken seriously: deepfakes. “Security leaders should treat deepfakes as an area of immediate focus because the attacks are real, and there is no reliable detection technology yet,” D’Hoinne said. Deepfakes aren’t as easy to defend against as more traditional phishing attacks that can be addressed by user training. Stronger business controls are essential, he said, such as approval over spending and finances. He recommended stronger business workflows, a security behavior and culture program, biometrics controls, and updated IT processes.
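
To make the recommended workflow controls concrete, here is a minimal sketch of one such control: payment requests that arrive over impersonation-prone channels must be verified out of band, and large transfers require a second approver. The threshold, channel list, and field names are assumptions invented for this example, not Gartner guidance or a product API.

```python
# A hedged sketch of business-workflow controls against deepfake fraud.
# All thresholds and names below are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

DUAL_APPROVAL_THRESHOLD = 10_000                    # example figure only
REMOTE_CHANNELS = {"phone", "video_call", "email"}  # impersonation-prone


@dataclass
class TransferRequest:
    requester: str
    amount: float
    channel: str                        # how the request arrived
    second_approver: Optional[str] = None
    verified_out_of_band: bool = False  # e.g., callback to a known number


def approve(request: TransferRequest) -> bool:
    """Return True only if the request clears every control."""
    # Deepfakes arrive over remote channels, so those always require
    # confirmation through a separately established contact method.
    if request.channel in REMOTE_CHANNELS and not request.verified_out_of_band:
        return False
    # Dual control: no single employee can move a large sum, even at the
    # request of a convincing likeness of an executive.
    if request.amount > DUAL_APPROVAL_THRESHOLD and request.second_approver is None:
        return False
    return True
```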

AI Speeding Up Security Patching

One potential AI security use case D’Hoinne noted is patch management. He cited data suggesting that AI assistance could cut patching time in half by prioritizing patches by threat and probability of exploit, and by checking and updating code, among other tasks. Other areas where GenAI security tools could help include:
  • alert enrichment and summarization
  • interactive threat intelligence
  • attack surface and risk overview
  • security engineering automation
  • mitigation assistance and documentation

(Image: AI code fixes. Source: Gartner)
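
As a toy illustration of the prioritization step, the sketch below ranks pending patches by combining a severity score with an estimated probability of exploitation. The CVE entries and the weighting formula are invented for the example; a real tool would draw on feeds such as CVSS severity and exploit-prediction scores.

```python
# Toy patch prioritization: weight each flaw's severity by how likely it
# is to be exploited, then patch in descending priority order. The data
# and the formula are made up for illustration.

pending_patches = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "exploit_probability": 0.05},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "exploit_probability": 0.90},
    {"cve": "CVE-2023-0003", "cvss": 5.3, "exploit_probability": 0.40},
]


def priority(patch: dict) -> float:
    # A medium-severity flaw under active exploitation can outrank a
    # critical flaw that attackers are ignoring.
    return patch["cvss"] * patch["exploit_probability"]


for patch in sorted(pending_patches, key=priority, reverse=True):
    print(f"{patch['cve']}: priority {priority(patch):.2f}")
```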

AI Security Recommendations

“Generative AI will not save or ruin cybersecurity,” D’Hoinne concluded. “How cybersecurity programs adapt to it will shape its impact.” Among his recommendations to attendees was to “focus on deepfakes and social engineering as urgent problems to solve,” and to “experiment with AI assistants to augment, not replace staff.” And outcomes should be measured based on predefined metrics for the use case, “not ad hoc AI or productivity ones.” Stay tuned to The Cyber Express for more coverage this week from the Gartner Security Summit.

Racist AI Deepfake of Baltimore Principal Leads to Arrest

A high school athletic director in the Baltimore area was arrested after he used A.I., the police said, to make a racist and antisemitic audio clip impersonating the school's principal.

(Image: Myriam Rogers, superintendent of Baltimore County Public Schools, speaking about the arrest of Dazhon Darien, the athletic director of Pikesville High. Credit: Kim Hairston/The Baltimore Sun)