A powdered wig shouldn’t trigger a reputational crisis. But when Gemini, Google’s generative AI tool, produced an image that recast a U.S. Founding Father as Black and racially reimagined other historical figures, Google users had new cause to suspect that Gemini’s underlying algorithms might be inherently biased.
Like so many businesses sprinting to harness and profit from AI, Google was caught unprepared when it came to risk mitigation. With a little foresight, the reputational damage might have been averted.
No organization can afford to make the same mistake. As AI makes its way into everything from word processing to risk analysis, it’s crucial that leaders review enterprise-wide use of AI and put protocols in place to govern its deployment. Fortunately, leaders at many companies are prioritizing such protocols. According to a recent IBM survey, fully 96% of EMEA executives who have deployed or plan to deploy generative AI are actively engaged in creating ethical-use governance frameworks.
Here’s a checklist to help you, too, become a responsible steward and avoid a potentially reputation-damaging moment:
Commit to AI oversight and accountability. Define AI’s mission and application in your business. Responsible stewardship begins with an acknowledgment that your business cannot afford to play fast and loose with such a powerful tool.
Establish a steering committee with complete visibility into how AI is being used across the organization to assess potential harms, set guidelines for responsible usage, and monitor adherence to them.
Educate and train employees to comply with usage guidelines. Get legal input to ensure all parts of your organization are prepared to deploy AI responsibly.
Define what constitutes imperfect use and experimental results, so that employees understand that AI is still experimental and bound to be imperfect, and can recognize unacceptable output.
Build an environment for AI experimentation. Weber Shandwick teams have developed an AI Sandbox, encouraging responsible experimentation without risk of exposure. We’re pressure-testing AI to determine its strengths and weaknesses in various scenarios and use cases, as we firmly believe that responsible stewardship can unlock AI’s potential while mitigating its risks.
Be forthright in your disclosures. Insist that every employee note where, when, why, and how they’ve used AI in delivering on their objectives. Always disclose to external parties how AI is being used on a project, whether for ideation, content drafting, or visual asset creation. This will help ensure AI-generated material doesn’t accidentally get repurposed in public-facing communications, where it may receive more scrutiny.
Fact-check every bit of content you consume, create, and disseminate. AI is known to “hallucinate,” presenting fictitious content as fact; never assume AI has a lock on accuracy.
Never feed an AI tool confidential, proprietary, or personal information. View AI as you would any collaborator: anything you share could wind up in front of the public.
Our team has been working through many of these considerations, in partnership with our Futures and legal colleagues, as we look to further enhance our crisis simulation platform, Firebell, which mimics feeds on social platforms like X (formerly Twitter) and Facebook (we swear, this isn’t a sales pitch). Over the last few months, we’ve been exploring how AI tools can heighten the experience, including by creating deepfake news segments. The effect? “I’m sweating,” one client told us after watching a simulation.
See for yourself.
Peter Duda, President, Global Corporate Issues at Weber Shandwick
Paul Furfari, Senior Vice President, Global Corporate Issues at Weber Shandwick
This is the second issue of Compass for the Chaos, Weber Shandwick’s monthly newsletter highlighting recent trends and topics impacting global organizations. This month, we’re zeroing in on AI’s potential risks and rewards as the emerging technology increasingly shapes the future of business, work, and society.
Subscribe for future issues and ongoing insights from our team of crisis and risk experts to help you navigate a world of risks and opportunities.
Was this newsletter forwarded to you? Subscribe to our Substack to receive these moving forward.
AI Trends We’re Watching:
Deepfakes will elevate disinformation to DEFCON 3. By some estimates, AI-generated content will soon account for nearly 99% of all information on the Internet. With AI-generated deepfakes getting more convincing every day, the potential for disinformation, narrative manipulation, and reputational Armageddon grows exponentially. And no corner of the Web is safe. Certainly not financial markets, as the SEC recently warned. Politics, as we hardly needed OpenAI CEO Sam Altman to tell us, is in grave danger of manipulation: just look at what happened to voters in the New Hampshire primary. Next up for next-level reputational risk? Your business. Major brands must be on high alert for unauthorized tweaks to their online narratives, including those introduced by their own generative AI. They must fact-check the content they create, consume, and distribute. They must educate employees on the nefarious as well as the productive applications of AI. Fortunately, the Internet’s gatekeepers are taking steps to get out in front of the inevitable attacks. The executive suite would be wise to do the same.
AI could boost productivity upwards of 20%. By identifying new sources of revenue, new ways to cut costs, shortcuts to increased output, and pathways to long-term competitive advantage, AI has the power to supercharge business growth. A recent Boston Consulting Group survey suggests that, just by deploying AI to redesign workflows and help employees with daily tasks, companies could lift productivity by 10% to 20%. Little wonder that 85% of leaders say they plan to increase their spending on AI and GenAI in 2024. But an alarming 90% also told BCG they’re waiting for the hype to die down before they fully commit, potentially missing the gravy train. If you’re one of them, with no game plan yet for incorporating AI into your business operations, talk to Chris Perry, Weber Shandwick Futures chair and chief innovation officer and the lead on our AI Accelerator intelligence tool. We can help you seize this unprecedented opportunity.
This month’s newsletter received contributions from Eric Blankenbaker, Caroline Welch, Meghan Durant, Mady Epplin, David Weinberg, Mae Symmonds, Linda Chen, Kayla Axelrod, and Mackenzie Chalfin.
What knocked our socks off
Forget Siri. Now you can tell Sora, the latest generative AI tool in OpenAI’s kit, what kind of video you want it to make with just a text prompt. Currently in limited release, Sora (Japanese for “sky”) generates minute-long videos with hyper-realistic scenes, multiple characters, specific types of motion, and accurate subject and background details. Examples on YouTube can give you a taste of its awesomeness. But with great power comes great responsibility: what’s to prevent Sora’s misuse? OpenAI has vowed to keep the model from serving up deepfakes, inappropriate content, and copyrighted art. We’ll be watching.
Risk Mitigator of the Month:
AI can tell you how—and when—to steer public opinion
How does social media respond to a hot-button issue? Typically, and oh-so-predictably, with a Blue response and a Red response. As you can see in this AI-generated visual, there’s no overlap between the ideological galaxies; each is trapped in its own cosmos. That makes it hard to join the discourse, let alone direct it: there’s no common ground to occupy.
But work we’re doing with higher-education institutions suggests that, on some issues, there’s less polarity than you’d expect. Not only can you take a stand; potentially, you could sway the discourse.
Recently we parsed an online conversation about free speech and censorship on university campuses using an AI-powered narrative analysis tool. The tool not only creates a visual of how both ideologically committed and middle-ground users interact on the topic, but also helps us map complex connections, understand motivating factors, and make sense of user actions. We were surprised to see a galaxy of white dots among the partisan red and blue bubbles, dots indicative of multiple points of view.
The tool reveals that, on the subject of free speech, there’s a fairly sizeable “moveable middle.” Who knew?
Well, with the right tool and the right team at your disposal, you might—positioning you to exploit a rare opportunity to influence public discourse while persuading key audiences.
Nailed it
OK, so Google may have had some serious trouble with Gemini, but we have to give the company major props for demonstrating in its Super Bowl ad just how wondrously AI might enhance the human experience. The commercial, which features a man with a visual impairment photographing milestone moments of his life, showcases the Pixel’s Guided Frame feature, which uses AI-driven face detection and audio cues to help users capture their subject. It’s a powerful testament not only to how AI can benefit people with disabilities but also to how a company can harness AI responsibly to gain a competitive edge in the marketplace. It left us with hope that we aren’t all going to wind up in a dystopian Skynet future.
Do you have an example of a company that handled a recent issue well and should be featured in the newsletter?