What is ‘AI washing’ and why is it a problem?
A shopper walking out of an Amazon Fresh store in London in 2021 (Getty Images)

Amazon had to defend the use of AI technology in its physical grocery stores

Amazon received critical headlines this year when reports questioned the “Just Walk Out” technology installed at many of its physical grocery stores.

The AI-powered system enables customers at many of its Amazon Fresh and Amazon Go shops to simply pick up their items and leave.

The AI uses an array of sensors to work out what you have picked up, and you are then billed automatically.

However, back in April it was widely reported that rather than solely using AI, Just Walk Out needed around 1,000 workers in India to manually check almost three quarters of the transactions.

Amazon was quick to claim that the reports were “erroneous”, and that staff in India were not reviewing video footage from all the shops.

Instead it said that the Indian workers were simply reviewing the system. Amazon added that “this is no different than any other AI system that places a high value on accuracy, where human reviewers are common”.

Whatever the exact details of the Amazon case, it is a high-profile example of a new and growing concern: whether companies are making inflated claims about their use of AI. The phenomenon has been dubbed “AI washing”, echoing the environmental term “greenwashing”.

But first, a reminder of what AI actually means. While there is no precise definition, AI broadly refers to computer systems that can learn and solve problems after being trained on huge amounts of information.

The specific type of AI that has made all the headlines over the past few years is so-called “generative AI”. This is AI that specialises in creating new content, whether that is holding text conversations or producing music or images.

Chatbots like ChatGPT, Google’s Gemini, and Microsoft’s Copilot are popular examples of generative AI.

AI washing takes several forms. Some companies claim to use AI when they are actually using less sophisticated computing, while others overstate the efficacy of their AI compared with existing techniques, or suggest that their AI solutions are fully operational when they are not.

Meanwhile, other firms are simply bolting an AI chatbot onto their existing non-AI operating software.

While only 10% of tech start-ups mentioned using AI in their pitches in 2022, this rose to more than a quarter in 2023, according to OpenOcean, a UK and Finland-based investment fund for new tech firms. It expects that figure to be more than a third this year.

And, says OpenOcean team member Sri Ayangar, competition for funding and the desire to appear on the cutting edge have pushed some such companies to overstate their AI capabilities.

“Some founders seem to believe that if they don’t mention AI in their pitch, this may put them at a disadvantage, regardless of the role it plays in their solution,” says Mr Ayangar.

“And from our analysis, a significant disparity exists between companies claiming AI capabilities, and those demonstrating tangible AI-driven results.”

Sri Ayangar says that some start-up bosses feel they just have to mention AI

It is a problem that has quietly existed for a number of years, according to data from another tech investment firm, MMC Ventures. In a 2019 study it found that 40% of new tech firms that described themselves as “AI start-ups” in fact used virtually no AI at all.

“The problem is the same today, plus a different problem,” says Simon Menashy, general partner at MMC Ventures.

He explains that “cutting-edge AI capabilities” are now available for every company to buy for the price of standard software. But instead of building a whole AI system, he says, many firms are simply putting a chatbot interface on top of a non-AI product.

Douglas Dick, UK head of emerging technology risk at accountancy giant KPMG, says the problem of AI washing is not helped by the fact that there is no single agreed definition of AI.

“If I asked a room of people what their definition of AI is, they would all give a different answer,” he says. “The term is used very broadly and loosely, without any clear point of reference. It is this ambiguity that is allowing AI washing to emerge.

“AI washing can have concerning impacts for businesses, from overpaying for technology and services to failing to meet operational objectives the AI was expected to help them achieve.”

Meanwhile, for investors it can make it harder to identify genuinely innovative companies.

And, says Mr Ayangar: “If consumers have unmet expectations from products that claim to offer advanced AI-driven solutions, this can erode trust in start-ups that are doing genuinely ground-breaking work.”

Regulators, in the US at least, are starting to take notice. Earlier this year, the US Securities and Exchange Commission (SEC) said it was charging two investment advisory firms with making false and misleading statements about the extent of their use of AI.

“The firm stance taken by the SEC demonstrates a lack of leeway when it comes to AI washing, indicating that, at least in the US, we can expect more fines and sanctions down the line for those who violate the regulations,” says Nick White, partner at international law firm Charles Russell Speechlys.

Nick White says that it is good to see US regulators clamping down on the problem

In the UK, rules and laws covering AI washing are already in place, including the Advertising Standards Authority’s (ASA’s) code of conduct, which states that marketing communications must not materially mislead, or be likely to do so.

Michael Cordeaux, associate in the regulatory team at UK corporate law firm Walker Morris, says that AI claims have become an increasingly common feature of advertisements subject to ASA investigation.

One example is a paid-for Instagram post about an app, captioned “Enhance your Photos with AI”, which the ASA held to be exaggerating the app’s performance and therefore misleading.

“What is clear is that AI claims are becoming increasingly prevalent and, presumably, effective at piquing consumer interest,” says Mr Cordeaux.

“In my opinion we are at the peak of the AI hype cycle,” says Sandra Wachter, a professor of technology and regulation at Oxford University, and a leading global expert on AI.

“However, I feel that we have forgotten to ask if it always makes sense to use AI for all types of tasks. I remember seeing advertisements in the London Tube for electric toothbrushes that are powered by AI. Who is this for? Who is helped by this?”

Also, the environmental impact of AI is often glossed over, she says.

“AI does not grow on trees… the technology already contributes more to climate change than aviation. We have to move away from this one-sided overhyped discussion, and really think about specific tasks and sectors that AI can be beneficial for, and not just blindly implement it into everything.”

But in the longer term, says Advika Jalan, head of research at MMC Ventures, the problem of AI washing may subside on its own.

“AI is becoming so ubiquitous – even if they’re just ChatGPT wrappers – that ‘AI-powered’ as a branding tool will likely cease to be a differentiator after some time,” she says. “It will be a bit like saying ‘we’re on the internet’.”
