
Costly Lessons of AI Misuse in Brand Marketing

LUXUO unpacks AI misuses across industries, and what they reveal about tech systems, their users, and brand strategies.

Feb 10, 2025 | By Yasmine Loh

The use of artificial intelligence (AI) continues to swing between good and bad effects. In 2025, AI use is only projected to increase, with McKinsey reporting that AI adoption in companies leapt to a staggering 72 percent after hovering around 50 to 60 percent in previous years. The question now becomes how well companies can wield AI’s double-edged sword. Within multinational companies and long-standing institutions, AI misuse can damage hard-won reputations and undermine organisational credibility. Even as AI offers efficiency and innovation, infamous cases in the advertising, politics and arts industries highlight the ongoing struggle to balance technological advancement with brand values.

Coca-Cola

Coca-Cola’s recent venture into AI-driven advertising went awry with its 2024 Christmas campaign. The ad — produced with the assistance of multiple AI studios, including Secret Level, Silverside AI and Wild Card — sought to replicate an iconic Christmas commercial from 1995. Titled “Holidays Are Coming,” the ad sees the famed red Coca-Cola trucks dressed in twinkling fairy lights as they barrel down snow-blanketed streets. The company has recreated the commercial in previous years to great success, yet the 2024 version faced backlash, with many branding the ad “soulless” and lacking the emotional depth long associated with the brand’s holiday campaigns.

The ad’s use of AI divided industry creatives and marketers after its release. Fast Company reported that market research firm System1 Group tested the ad with audiences. “We’ve tested the new AI version with real people, and they love it. The 15-second cut has managed to score top marks. 5.9 stars and 98% distinctiveness. Huge positive emotions and almost zero negative,” said Andrew Tindall, the research firm’s senior VP of global partnerships. System1’s results suggest the ad contributed greatly to long-term brand-building for Coca-Cola. However, the issue many critics have is not simply the use of AI, but rather its use by a company whose values are so closely tied to authenticity and family — a stark contrast to what many perceive AI to be.

On NBC News, Neeraj Arora, a marketing expert at the University of Wisconsin-Madison, suggested that introducing AI into such a sacred space felt jarring to many consumers, creating a disconnect between the brand’s essence and the campaign itself. “Your holidays are a time of connection, time of community, time to connect with family,” Arora said. “But then you throw AI into the mix… that is not a fit with holiday timing, but also, to some degree, also Coke, what the brand means to people.” While AI has undeniable potential to streamline processes and cut costs, it also risks diluting the emotional impact that storytelling has when done by real people. Embracing new tools is certainly possible — and in this day and age, practically essential — though it should not come at the cost of the human elements and brand values that are so integral to a company’s mission.

Trump Campaign

With the rise of AI, last year’s US elections and campaigns saw a whole crop of AI-related issues emerge in politics. One notable example was the use of AI-generated images to create a misleading narrative, particularly regarding Black voters’ support for Trump during his campaign for the presidency. BBC Panorama uncovered several deepfake images portraying the now-US president photographed with Black individuals, which were then widely shared by his supporters. While there was no direct evidence connecting these manipulated images to Trump’s official campaign, they reflect a strategic effort by certain conservative factions to reframe the president’s relationship with Black voters. Cliff Albright, co-founder of Black Voters Matter, told the BBC that these fake images were part of a broader effort to depict Trump as more popular among African Americans, who were crucial to Biden’s triumph over Trump in 2020.

BBC’s investigation traced one of the deepfake images to radio host Mark Kaye, who admitted that he created a fabricated image of Trump posing with Black women at a party. Since then, Kaye has distanced himself from any claim of accuracy, stating that his goal was storytelling rather than factual reporting. Similarly, in August 2024 Trump shared on Truth Social a number of AI-generated images of Taylor Swift and her fans endorsing his bid for president, with the caption “I accept!” Trump later told Fox Business that he did not generate the images, nor did he know their source. Despite this disclaimer, some social media users mistook the images for genuine photographs, blurring the lines between humour, satire, and misinformation.

In an opinion column for The Guardian, Sophia Smith Galer suggests that “Trump’s AI posts are best understood not as outright misinformation — intended to be taken at face value — but as part of the same intoxicating mix of real and false information that has always characterised his rhetoric.” Although there is some truth to this, the confusion caused by deepfakes in a political context reflects how little media literacy many people possess. A 2023 study from the University of Waterloo found that only around 61 percent of its 260 participants could differentiate between AI-generated images of people and real photographs. Particularly in politics, such uses can result in disingenuous or manipulative practices, further polarising opposing factions that seem none the wiser. As Galer puts it in the context of Trump’s campaign, “Trump isn’t interested in telling the truth; he’s interested in telling his truth — as are his fiercest supporters. In his world, AI is just another tool to do this.”

Sports Illustrated

In late 2023, Sports Illustrated was embroiled in a scandal when science and tech site Futurism published an exposé revealing that several articles on the magazine’s website were penned by authors who did not exist, their profiles attached to AI-generated headshots. Despite initially denying the reports, Sports Illustrated’s licensee, The Arena Group, later removed numerous articles from the site after an internal investigation was launched. What made the blunder particularly damaging for a title once regarded as a towering figure in American sports journalism was the company’s complete lack of transparency about the use of AI in its content creation process. Rather than openly acknowledging it, The Arena Group attributed the articles to a third-party contractor, AdVon, which it claimed was responsible for the fictitious writers.

The impact on Sports Illustrated’s brand is significant, as the episode undermines the magazine’s credibility. The backlash was immediate and evident: CBS reported that the company quickly fired its CEO Ross Levinsohn, COO Andrew Kraft, media president Rob Barrett and corporate counsel Julie Fenster. The Arena Group’s stock also fell 28 percent after its AI use was exposed, according to Yahoo Sports. What this situation highlights is, at its core, a matter of journalism ethics. The very pillars of the practice are meant to be grounded in truth and objectivity, and once that foundation is lost, it is no longer good journalism. Tom Rosenstiel, a journalism ethics professor at the University of Maryland, told PBS News that there is nothing wrong with media companies using AI as a tool — “the mistake is in trying to hide it,” he said. “If you want to be in the truth-telling business, which journalists claim they do, you shouldn’t tell lies… a secret is a form of lying.”

Beyond this, Sports Illustrated’s scandal is telling of the current landscape in which many media companies operate. Sports Illustrated was once highly coveted and boasted millions of subscribers, but over the past decade it has faced a steady decline in revenue and influence. The Arena Group’s strategy of monetising the Sports Illustrated brand through licensing and mass content production has resulted in a media company focused on constantly churning out content with little editorial oversight. Writing for the Los Angeles Times, tech columnist Brian Merchant said that “the tragedy of AI is not that it stands to replace good journalists but that it takes every gross, callous move made by management to degrade the production of content — and promises to accelerate it.”

READ MORE: For Better or Worse: Here Is How AI Artist Botto is Reshaping the Art Industry

Amazon

Early on in the adoption of AI systems, Amazon experimented with an AI recruitment process, with disastrous results. In 2014, Amazon began using AI to review resumes, hoping to streamline the hiring process. The system, which rated applicants with scores between one and five stars, aimed to make hiring decisions faster and more efficient. By 2015, it became clear that the tool was not gender-neutral. Instead of evaluating resumes objectively, it learned from data skewed by the tech industry’s historical male dominance, favouring male applicants over female ones. As a result, the system not only filtered out women’s resumes but also penalised CVs that contained the word “women’s.”

This revelation, reported by Reuters, highlights a fundamental flaw in how machine-learning systems learn from data. While many tech companies tout AI as “predictive,” the reality is more limited: algorithms predict based on existing data — they do not generate insight out of thin air. During a lecture at Carnegie Mellon University, tech and business professor Dr. Stuart Evans suggested that biases in machine-learning systems can actually worsen social inequity, further alienating underrepresented groups if not carefully monitored. Interestingly, a 2022 research study on human versus machine hiring processes found that participants viewed a balance between human input and AI systems as the fairest type of hiring process.
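
To make that mechanism concrete, here is a minimal, hypothetical sketch — not Amazon’s actual system, and built on entirely invented data — of how a simple classifier trained on historically skewed hiring decisions can end up assigning a negative weight to a phrase like “women’s,” even though gender is never given to it as an explicit input.

```python
# Illustrative toy example only: synthetic "historical" hiring data in which
# past decisions favoured men, used to train a simple text classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

resumes, hired = [], []
for _ in range(2000):
    is_woman = rng.random() < 0.3          # applicant pool skews male
    skilled = rng.random() < 0.5           # skill is independent of gender
    tokens = ["python", "engineering"] if skilled else ["engineering"]
    if is_woman and rng.random() < 0.5:
        tokens.append("women's")           # e.g. "women's chess club captain"
    # Hypothetical historical decisions favoured men regardless of skill
    p_hire = (0.6 if skilled else 0.1) + (0.05 if is_woman else 0.25)
    resumes.append(" ".join(tokens))
    hired.append(int(rng.random() < p_hire))

# Bag-of-words features; keep apostrophes so "women's" stays a single token
vec = CountVectorizer(token_pattern=r"[\w']+")
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women's" comes out negative: the model has absorbed
# the bias baked into past decisions and now penalises that phrase
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women's"])
```

Nothing in the sketch tells the model to discriminate; the skew emerges purely from the patterns in the data it is given, which is the point Evans and others make about unmonitored machine-learning systems.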

What is most chilling about Amazon’s case is not the failure of the AI programme, but the society that lies behind it. Machines are often seen as the antithesis of humans — mechanical and lacking in emotion. Amazon’s AI system proves otherwise: while the machine itself does not possess emotions, its algorithms reflect the reality we live in and amplify the biases that already exist in the world. Companies like LinkedIn are also experimenting with AI-driven tools, but John Jersin of LinkedIn Talent Solutions stressed that AI is not ready to replace human recruiters entirely because of these fundamental flaws. Rachel Goodman, a staff attorney with the American Civil Liberties Union, told Reuters that “algorithmic fairness” must become an increasing focus within HR and recruiting processes.

Queensland Symphony Orchestra

Arts industries — already grappling with their fair share of AI issues — saw a recent blunder when the Queensland Symphony Orchestra (QSO) posted an AI-generated advertisement on Facebook in February 2024. The ad was meant to entice audiences to attend the orchestra’s concerts, depicting a loving couple sitting in a concert hall, listening to the Queensland Symphony play. Upon closer inspection, the image revealed oddly proportioned hands, disjointed clothing and unsettling, uncanny valley-like facial expressions on the AI-generated figures. Shortly after, the Media, Entertainment & Arts Alliance, an Australian trade union representing professionals in the creative sector, called the ad the worst AI-generated artwork it had seen and criticised QSO’s use of AI in an industry that should be celebrating and supporting creative artists of all kinds.

The harsh criticism of QSO stems from the use of AI in a field so deeply connected to human artistry, emotion and expression. The state orchestra has been operating for over 70 years, cultivating a reputation as a community-focused organisation with a rich history in the classical music world. By opting for an AI-generated ad, QSO’s credibility in embracing true artistic integrity was called into question. Many comments on the orchestra’s Facebook post urged it to hire actual photographers to shoot the promotional campaign instead of outsourcing to machines. Daniel Boud, a freelance photographer based in Sydney, told The Guardian that AI has yet to replace real photographers who work in ads and marketing. “The design agency or a marketing person will use AI to visualise a concept, which is then presented to me to turn into a reality,” Boud told the newspaper. “That’s a reasonable use of AI because it’s not doing anyone out of a job.”

QSO’s AI ad only adds to the existing controversy over AI in the arts world. In 2023, German photographer Boris Eldagsen made headlines when he won first prize at the Sony World Photography Awards, later admitting that the image was entirely AI-generated. Eldagsen’s submission suggested a gloomy future for the photography industry — the possibility that AI could be convincing enough to replace real photography. After Eldagsen’s withdrawal from the competition, Forbes reported that the World Photography Awards released a statement saying that “The Awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in the medium.” In a world where AI is becoming increasingly prevalent in creative industries, glaring mistakes like QSO’s ad suggest that the technology creates a disconnect between an organisation and its audiences, who seek genuine experiences rooted in human creativity.

Google

Even within a tech company, AI still proves to be a complicated system to perfect. In February 2023, Google teased its AI-driven chatbot Bard to the public, and quickly realised its mistake when the chatbot kept spitting out incorrect information. The moment that went viral online came from the company’s own promotional video: Bard incorrectly stated that the James Webb Space Telescope took the first pictures of exoplanets, when in fact the European Southern Observatory’s Very Large Telescope had accomplished this in 2004. Although chatbots are known not to be entirely accurate — they cannot be updated with facts in real time — Google’s graver mistake came when it was revealed that its own employees had warned that the chatbot would not be ready for release so soon. Ignoring these cautions, Google released it anyway.

Just months before Bard’s public launch in March 2023, employees raised serious concerns about the tool’s reliability. According to Bloomberg, some internal testers referred to Bard as “a pathological liar,” claiming that the chatbot was generating information that could lead to harm or dangerous situations given its factual inaccuracy. Examples included advice on how to land a plane, where some of the tips provided could lead to a crash, and scuba diving guidance that would “likely result in serious injury or death.” Google pushed ahead with the public launch in the hope of competing with OpenAI’s ChatGPT, sparking criticism about the company’s disregard for AI ethics in the race to stay relevant in the tech industry. The decision to launch Bard without proper safeguards has damaged Google’s brand image, especially considering its reputation as a leader in AI innovation.

Google’s premature launch of Bard suggests that profit and growth took precedence — priorities that, ironically, took a downturn once Bard’s mistakes became evident. Reuters reported that Google’s parent company, Alphabet, lost USD 100 billion in market value after the release of the promotional video. What this issue also highlights is the future of information online. The tech industry’s haste to develop increasingly advanced AI has led it to cast quality by the wayside, with little oversight of the credibility of information. Speaking to AP News, University of Washington linguistics professor Emily Bender said that creating a “truthful” AI chatbot is not feasible: “It’s inherent in the mismatch between the technology and the proposed use cases.” This is because AI chatbots rely on a predictive model designed to predict the next word in a sentence, not to tell the truth — a process that many do not understand about AI systems.
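
A minimal sketch of that idea, using a toy word-frequency model rather than any real chatbot, shows how “predict the likeliest next word” and “say something true” are different objectives. The tiny training text and the prompt below are invented purely for illustration.

```python
# Toy bigram model: always continue with the word most often seen next in
# its training text. Real chatbots use vastly larger neural models, but the
# objective is the same — likely continuation, not verified fact.
from collections import Counter, defaultdict

corpus = ("paris is the capital of france "
          "london is the capital of england "
          "rome is the capital of italy").split()

# Count which word follows which in the training text
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt: str, n_words: int = 4) -> str:
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break
        # Pick the continuation seen most often (ties broken by order seen)
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# The model completes a false sentence, because "france" happened to be the
# continuation of "of" it encountered first — frequency, not truth, drives it
print(continue_text("london is"))   # -> "london is the capital of france"
```

The confident but wrong completion is the same failure mode, in miniature, as Bard’s exoplanet claim: the system reproduces patterns in its data rather than checking facts.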

READ MORE: Artificial Intelligence: a Blessing or a Curse?

Vanderbilt University

Vanderbilt University experienced controversy when the school’s Peabody College of Education and Human Development sent out a condolence email drafted by AI in response to the tragic mass shooting at Michigan State University. The email aimed to address the pain caused by the tragedy and encourage inclusivity, but included a surprising disclosure at the very end: “paraphrased from OpenAI’s ChatGPT AI language model.” This revelation quickly sparked outrage among students, many of whom felt that the use of AI in such a sensitive context was impersonal and insensitive. Nicole Joseph, Associate Dean of Peabody’s Office for Equity, Diversity, and Inclusion, quickly issued an apology, though to little effect.

An article from The Vanderbilt Hustler on the matter revealed student perspectives. One source, Laith Kayat, whose sibling attends Michigan State, said that “There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself.” Moreover, the lack of human empathy in the AI-generated message raised concerns about the university’s true commitment to its community, prompting questions from students about whether such practices would extend to other sensitive matters, including the death of students or staff.

Vanderbilt’s mishandling of the situation highlights a deeper issue: the implications of using AI in spaces that require genuine human connection, particularly during moments of crisis. The Hustler was quick to point out the lack of specifics in Vanderbilt’s email and its incorrect references to the tragedy that had occurred. This connects to a broader concern about the uniformity that AI may eventually cause. An increasing reliance on AI, devoid of human touch, risks creating an endless feedback loop: AI spits out the most common text or image, and growing use of AI causes similar data to be fed back into the system. Website WorkLife interviewed a senior tech developer on the implications of generative AI for design work, who said that the adoption of AI in design creates a higher risk of uniformity. “That feels like an area where soul, or the aesthetic — the personal aspect of it still matters more,” the developer said. “Like writing an article — what matters is the writer’s identity and their specific voice.”
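
As a rough illustration of that feedback loop — a toy simulation with invented categories, not a model of any real system — the sketch below repeatedly “retrains” on output that over-represents whatever is already common; the share of the most popular style tends to grow each round until variety all but disappears.

```python
# Toy simulation of the feedback loop: each generation's "model" over-produces
# whatever was already common, and its output becomes the next training pool.
import random
from collections import Counter

random.seed(42)
styles = ["minimalist", "baroque", "brutalist", "art deco", "gothic"]
pool = [random.choice(styles) for _ in range(1000)]   # a varied starting pool

for generation in range(8):
    counts = Counter(pool)
    share = counts.most_common(1)[0][1] / len(pool)
    print(f"generation {generation}: top style holds {share:.0%} of the pool")
    # Sampling weight ~ frequency squared: common styles get even more common
    weights = [counts[s] ** 2 for s in styles]
    pool = random.choices(styles, weights=weights, k=1000)
```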

Air Canada

Air Canada’s use of AI through its chatbot has recently become a controversial topic, as a series of unfortunate events led to a legal ruling in favour of a passenger who was misled by the bot’s incorrect information. Jake Moffatt, a grieving customer, relied on Air Canada’s automated chatbot to understand the airline’s bereavement fare policy. The chatbot assured him that he could book a full-fare ticket and apply for the bereavement discount later. However, when Moffatt followed this advice, Air Canada rejected his request and claimed that the policy required the application for a bereavement fare to be made before the flight. What followed was a tedious back-and-forth between Moffatt and Air Canada, which eventually extended into a court case.

The case was decided by British Columbia’s Civil Resolution Tribunal, which ruled that Air Canada had to compensate Moffatt. Initially, the airline tried to argue that the chatbot was a “separate legal entity” responsible for its own actions, according to the BBC. The tribunal found no difference between information provided by the chatbot and information provided on a regular webpage. Air Canada’s AI misuse brings to light the legal implications of using automated systems. With AI technology advancing at a rapid pace, there is a need for clearer regulatory frameworks to protect consumers from mistakes that AI can cause. Currently, Canada’s proposed Artificial Intelligence and Data Act acknowledges that “there are no clear accountabilities in Canada for what businesses should do to ensure that high-impact AI systems are safe and non-discriminatory,” and only advises that businesses assess their systems in order to “mitigate risk.”

At the core of this case, however, are two foundational rules of running a business: make sure all facts are correct, and do not lie to consumers. Even in Air Canada’s case — an inadvertent mistake on the part of the AI chatbot — it is still crucial for organisations to ensure that all information and disclaimers are highlighted. AI is not a malevolent entity; it simply works with the information it has. The tribunal’s ruling reinforces that businesses must bear responsibility for mistakes made by their AI systems, making it clear that companies cannot sidestep liability by attributing errors to automated tools. The use of AI tools must be accompanied by clearer disclaimers about their limitations.

For more on the latest in business reads, click here.


 