Editor’s note: Story updated 2:00 p.m. EST with details of guidance issued by U.S. government agencies.

Russia’s disinformation operations around the U.S. elections have ramped up over the past month and a half after a relatively slow start compared to previous cycles, Microsoft said Wednesday.

The operations use a mix of themes seen in 2020 campaigns with a “renewed focus on undermining US support for Ukraine” as well as efforts to turn U.S. citizens against NATO and to foster domestic infighting.  

“Messaging regarding Ukraine — via traditional media and social media — picked up steam over the last two months with a mix of covert and overt campaigns from at least 70 Russia-affiliated activity sets we track,” Microsoft’s Threat Analysis Center (MTAC) explained in the new report.

MTAC traced the activity back to several Russia-affiliated influence actors, but the most prolific was housed within the Russian Presidential Administration. Microsoft said the efforts have quietly pivoted away from intelligence services and the Internet Research Agency into a more centralized operation run by Russia’s presidential office. 

“Each Russian actor has shown the capability and willingness to target English-speaking — and in some cases Spanish-speaking — audiences in the US, pushing social and political disinformation meant to portray Ukrainian President Volodymyr Zelensky as unethical and incompetent, Ukraine as a puppet or failed state, and any American aid to Ukraine as directly supporting a corrupt and conspiratorial regime,” they said. 

At least one Russian actor has published anti-Ukraine narratives in English, Russian, French, Arabic, Finnish and several other languages. 

The efforts typically start with a purported whistleblower or citizen journalist publishing a video, which is then covered by a network of affiliated websites based in “the Middle East and Africa, as well as several English-language outlets such as DC Weekly, Miami Chronicle, and The Intel Drop.” Other websites, like “Election Watch” and “50 States of Lie,” focus on domestic U.S. issues.

Before long, the information is circulated enough online that U.S. audiences repeat and share the disinformation widely — with most “unaware of its original source.”


An example of the process for laundering disinformation into U.S. audience spaces by the group Storm-1516. Credit: Microsoft.

Microsoft noted that former Ukrainian parliamentarian and U.S.-sanctioned Russian agent Andrei Derkach — who spearheaded a disinformation campaign against U.S. President Joe Biden in 2020 — reemerged on social media in January after nearly two years offline. 

“Derkach, in an interview conducted in Belarus and uploaded onto social media, propagated both old and new claims about US political figures, including President Biden,” the researchers said. 

In addition to wider disinformation efforts, Russian groups are also employing hack-and-leak operations akin to what was seen in the 2016 election. Microsoft said it has seen a “notable uptick in cyber activity” from groups like SEABORGIUM — also known as COLDRIVER and Callisto Group.

The groups are targeting Western think tanks, and Microsoft officials believe this is the “first in a series of hacking campaigns meant to drive Kremlin outcomes headed into November.”

AI concerns

Microsoft reiterated previous concerns that state actors in Russia, China and Iran have been honing their skills with generative artificial intelligence to unleash a barrage of deepfakes and other deceptive material. 

The company has partnered with Democracy Forward and AI For Good Lab to “identify, triage, and analyze nation-states’ malicious use of generative AI in influence operations.”

So far, their research has found that high-production synthetic deepfake videos have largely not caused widespread deception or confusion, and in “only a few cases have [they] seen any genuine audience deception from such content.”

“Instead, most of the incidents where we’ve observed audiences gravitate toward and share disinformation involve simple digital forgeries consistent with what influence actors over the last decade have regularly employed,” they said.

“For example, fake news stories with spoofed media logos embossed on them—a typical tactic of Russia-affiliated actors—garner several times more views and shares than any fully synthetic generative AI video we’ve observed and assessed.”

The most potent content that has shown the ability to gain traction involves real images edited with AI, as opposed to completely AI-generated content, Microsoft explained, noting that social media platforms have become fairly good at spotting and calling out fake content. According to the researchers, AI images and videos are most successful in private chats, where a person cannot get the opinion of others. 

Audio content, however, has proven to be alarmingly convincing. Microsoft pointed to the recent Slovak presidential election cycle, where audio deepfakes of politician Michal Šimečka and journalist Monika Tódová had an impact, as an example of how Russian actors will likely leverage AI. 

Microsoft released a similar report on China’s efforts to use AI ahead of the 2024 U.S. election two weeks ago, spotlighting several campaigns run by the country’s intelligence services.

Senator Mark Warner (D-VA) — chair of the Senate Intelligence Committee — told the New York Times on Wednesday that the disruption of U.S. intelligence agencies’ efforts to coordinate with social media sites to stop disinformation campaigns was endangering the country. 

Warner pointed to Slovakia as a worrying example of how effective Russian disinformation can be. 

“Slovakia was 80 percent pro-Ukraine. Two years later, with massive amounts of Russian misinformation and disinformation, you have a pro-Russian government and 55 percent of Slovaks think America started the war in Ukraine,” he told the New York Times. 

The Cybersecurity and Infrastructure Security Agency (CISA), FBI, and Office of the Director of National Intelligence (ODNI) released guidance on Wednesday that outlines the latest tactics used in foreign malign influence operations. 

“Foreign actors continue to pursue efforts aimed at sowing discord among the American people, with the ultimate goal of eroding confidence in our democratic institutions,” said ODNI Foreign Malign Influence Center Director Jessica Brandt.

“The normalization of influence activities, combined with the rise of new technologies, increasingly presents a whole-of-society challenge for the Intelligence Community to address alongside the broader U.S. Government, industry, and civil society.”