AI-driven misinformation campaigns and a wave of cyberattacks on elections worldwide have raised legitimate concerns about next month’s US presidential election. Will the voting be secure? Will voters trust the results? And what are we in for if they don’t?
This month’s Compass examines the threats that cybercrime and AI pose, in light of how both have already been deployed to disrupt recent elections in the EU and Taiwan.
But crucially, for business leaders, we’re devoting this issue to the effects of election instability on markets, brands, and consumer behavior. Because the cloud of doubt and distrust hovering over the polls isn’t likely to dissipate when they close. It’s going to change how consumers interact with brands. Our global Crisis team offers some pointed advice on how brands can prepare for and adapt to this worsening climate.
Best,
Peter Duda, President, Weber Shandwick, Global Corporate Crisis and Issues
Keisha McClellan, VP, Weber Shandwick, Global Crisis, Issues and Cybersecurity
Ethan Bauley, Head of Service Innovation, Weber Shandwick, North America
This issue also features contributions from Brett SanPietro, Nazli Togrul, Emily Ahmad, Tori Sousa and Mackenzie Chalfin, and from our EMEA team, Robert Langmuir and Alexis Sogl.
This is the ninth issue of Compass for the Chaos – Weber Shandwick’s monthly newsletter highlighting recent trends and topics impacting global organizations.
Subscribe for future issues and ongoing insights from our team of crisis and risk experts to help you navigate a world of risks and opportunities.
Was this newsletter forwarded to you? Subscribe to our Substack to receive future issues directly.
A dual threat to democracy
Long before the European Union held its 2024 parliamentary elections this June, the World Economic Forum outlined the risks cybercrime poses to elections: tampering with voter registration databases to disenfranchise voters; hacking electronic voting machines to alter vote counts; compromising election management systems to misreport results; and phishing election officials to harvest sensitive information or plant malware.
Sure enough, shortly after the Netherlands opened its polls to kick off the 2024 EU elections, several Dutch political parties reported that their websites had been hacked (the culprit proved to be HackNeT, a group known to promote the Russian state agenda). At the same time, member states including Italy, Spain, Germany and Poland reported disinformation campaigns designed to dissuade people from voting, or even to get them to spoil their ballots.
With 27 member-state voting processes, each with different infrastructure and levels of security, the EU is perhaps exceptionally vulnerable to cyberattacks: disrupting just one state’s process could cast doubt over the other 26. But with half the world heading to the polls in 2024, the potential for disruption is incentivizing cyberattacks on an unprecedented scale. Election interference has increased by 160% since 2015, with approximately 1 in 3 attacks directed at OECD countries, according to a Canadian study. And as US Federal Bureau of Investigation (FBI) Director Christopher Wray acknowledged in January, the US presidential election is the top target this year.
Unquestionably, artificial intelligence amplifies the threat of election cyberattacks and misinformation. As CrowdStrike reports, criminals leverage machine-learning algorithms to make attacks more efficient (e.g., revealing system vulnerabilities and cracking passwords faster), more scalable (e.g., deploying phishing campaigns along identified attack vectors), and more convincing (e.g., using deepfake video and audio tools). We’ve already witnessed these tools in action: in mid-September, officials from the Office of the Director of National Intelligence and the FBI warned that Russia, Iran and China were using AI tools to amplify negative stories and comments about US Vice President Kamala Harris and to create and disseminate fake news articles about immigration, abortion and other controversial topics.
Not everyone is convinced AI and cyberattacks pose the threat to democracy that cyber experts and the media keep reporting. Fears of their impact may, in fact, be overblown.
But this may miss the larger point: fear is itself a disruptor. Seventy-eight percent of Americans expect abuses of AI systems to affect the outcome of the 2024 US presidential election. Doubt and distrust already cloud the US election.
Insecurity’s ripple effect
What might this mean for companies and organizations?
Political and economic futures are intertwined, as voter doubt translates into market volatility. Investors who become wary of the long-term economic climate, for example, might pull back from or pull out of stock markets. Consumers who are distrustful of digital services may become hesitant to engage in e-commerce or with any platform that demands sensitive information. Businesses vulnerable to theft of intellectual property or sensitive data will double down on cybersecurity investments, increasing operational costs and narrowing profit margins. Businesses perceived as supporting or benefiting from a government that is viewed as illegitimate could suffer public backlash that depresses sales, erodes brand loyalty, and unwinds strategic partnerships. Brands that suffer a major ransomware attack or data breach and fumble the ensuing crisis might never recover from the reputational damage.
How to weather the coming storm
The strategies election integrity and cybersecurity officials are using to prepare the US for this November can work for large businesses as well.
Raise AI literacy across your enterprise. Conduct trainings to help employees understand how AI works and how it can be used—and abused—to supercharge misinformation campaigns and scale cyberattacks. Use the trainings as an opportunity to acquaint your teams with your own policies on the use of AI and its associated risks.
Develop more discerning eyes. The deepfake technology used to misrepresent a political candidate can just as readily be used to misrepresent a business leader. AI can clone a leader’s voice to authorize, say, a fraudulent wire transfer. Will your employees fall for it? We train client workforces to spot false narratives, fact-check content, and identify manipulated video or audio (for example: does that leader have six fingers on one hand? Does the audio sync with their mouth movements? Does that analysis sound a little too ChatGPT?), because however sophisticated your security, your employees are your first line of defense.
Incentivize detection and reporting. Major social media companies allow users to report misleading content on their platforms. So should you. Make it easy for employees to report suspicious emails; consider incentivizing them to flag false narratives or AI-altered content they encounter on their social media feeds.
Plan for the worst. Cyberattacks are a question of when, not if. Develop a comprehensive response plan. Outline steps for risk identification and mitigation. Craft a communications strategy to inform both internal and external stakeholders, and update it regularly. Even if the worst happens, effective communications can salvage your brand’s reputation, as a client of ours, a regional cancer treatment center, discovered after suffering a ransomware attack via a third-party vendor. Empathetic and transparent communications secured patients’ trust; outreach to the media, meanwhile, encouraged coverage that framed the attack as part of a broader healthcare-sector trend, shifting the spotlight away from the client.
Create a culture of preparedness. So, you’ve got a comprehensive and updated response plan. Do your people know how to use it? Crisis mitigation depends on the rapid, choreographed deployment of highly trained teams across functions. Make sure those teams have the plan and receive media training specific to your market; an awareness of cultural nuances can make a huge difference in how your crisis-response messaging lands with key stakeholders.
Drill first responders. Nothing prepares your teams for a sophisticated cyberattack like plausible simulation drills. That’s why ENISA, the European Union Agency for Cybersecurity, employs us to stage them. We’ve used Firebell, our proprietary multimedia simulation platform, to help thousands of communicators and leaders prepare for the worst—most recently at a medical device company at risk of a global breach of patient information. After watching the simulated breach spiral into a global regulatory and patient-safety issue, the executive suite reallocated resources to prioritize risk assessment and response planning.
Fight fire with fire. For every AI tool weaponized by cybercriminals, we’ve got one to combat or neutralize it. We’re arming our clients with state-of-the-art detection, analysis, and simulation tools so they can track and counter the spread of disinformation, identify bots and bad actors, detect nascent risks on fringe platforms like Rumble, Telegram, and Truth Social, and test how messaging will land with key stakeholders. As is so often the case, the best defense is a formidable offense.
What we’re listening to…
Click Here (Podcast)
Hosted by former NPR investigations correspondent Dina Temple-Raston, Click Here brings classic NPR-style radio storytelling and first-hand interviews to a global beat, covering cybersecurity topics such as security breaches and application compromises in concise 30-minute episodes.
Hacking Humans (Podcast)
Join the Hacking Humans hosts each week as they look behind the social engineering scams, phishing schemes and other exploits that are making headlines and inflicting harm on companies around the world.