According to Microsoft’s threat intelligence team, state-backed Chinese cyber groups, aided by North Korean and Russian actors, are expected to step up their use of sophisticated AI-generated disinformation campaigns to influence high-profile elections in 2024 in the US, UK, South Korea and India, including both presidential and legislative contests.
Social media will be targeted with AI-generated content skewed to benefit the state actors’ interests and positions. Although the impact of such content in swaying audiences has previously been low, Microsoft’s team expects that LLM-generated and -augmented memes, videos and audio may prove effective this time around, as the technology has improved so markedly. The resulting disinformation may be so widespread that it provokes an enormous backlash, with angry legislators taking steps to rein in social media news.
China’s recent “dry run” using AI-synthesized disinformation during Taiwan’s January presidential election is seen as a harbinger of the emerging threat, with groups such as the pro-Beijing Storm-1376, also known as Spamouflage Dragon, building on what is the first documented attempt by a state-sponsored actor to influence a foreign vote using AI-manufactured content. It is also expected that the US may retaliate by attacking the Chinese state-controlled propaganda machine to disseminate pro-Western and anti-politburo messages directly to Chinese citizens, ushering in a new era that may come to be known as the “code war”.
The Chinese-backed operatives have already deployed various tactics, including posting fake audio clips, likely generated by AI, that depicted a former presidential candidate endorsing a rival, as well as AI-generated memes levelling unfounded corruption allegations against the ultimately victorious pro-sovereignty Taiwanese candidate William Lai.
New technology allowing AI-rendered “news anchors” to broadcast disinformation about political figures means that AI-generated fake news can now be coupled with look-alike traditional media at unprecedented speed. Although the fake news stories targeting Lai’s personal life in the Taiwan election were mostly spotted by readers, Microsoft warns that this was a learning curve and that the divisive news stories in the next series of elections will be significantly more sophisticated.
Microsoft warned that as populations in India, South Korea, the United Kingdom and the United States head to the polls, Chinese cyber and influence actors, and to some extent Russian and North Korean cyber actors, may work toward targeting these elections. The company added that Chinese groups are already attempting to map divisive issues and voting blocs in these countries through orchestrated social media campaigns, potentially to gather intelligence on key voting demographics ahead of the elections, targeting swing areas in particular.
While Microsoft flagged the risk, it acknowledged that AI-enabled disinformation has so far achieved limited success in shaping public opinion globally, though this is likely to change. Only 12 months ago the technology was poor, with the Will Smith spaghetti-eating video and the Pope’s puffer jacket representing the state of the art. A year later, things have moved so fast that near-Hollywood-quality fakes are possible with very little effort, and with Beijing’s growing investment in, and increasing sophistication with, the technology, there is now a serious and escalating threat to the integrity of democratic elections worldwide.

