How Real Is AI Washing? 4 Companies — and 1 Rock Band — Caught Faking It
AI is everywhere — and increasingly, so are the scams. As companies rush to ride the momentum of artificial intelligence and wield “AI-powered” as a marketing badge, a parallel trend has emerged: a surge of corporate deception in which the advertised AI turns out to be nothing more than smoke, mirrors and, in some cases, low-paid offshore labor.
This practice, dubbed “AI washing,” is prompting lawsuits, regulatory scrutiny and even public embarrassment as companies are forced to admit they’ve misled customers, investors or both.
The hype-to-reality gap has become large enough to trigger a legal trend. According to a report from Cornerstone Research and the Stanford Law School Securities Class Action Clearinghouse, AI-related securities fraud cases more than doubled in 2024, with at least 15 active federal cases — up from just seven the year prior.
1. Builder.ai: Billion-Dollar Valuation, Zero Actual AI
The most stunning collapse so far may be that of Builder.ai, a UK-based company that secured over $450 million in funding from heavyweight investors, including Microsoft and the Qatar Investment Authority.
Touted as a no-code AI software platform, Builder.ai promised to generate applications autonomously using proprietary AI. The tech “backend” was in fact a team of 700 engineers in India manually writing code while pretending to be bots.
The company filed for bankruptcy across multiple countries after the revelations and was also accused of inflating sales figures to investors. The scale of the fallout, particularly for backers like the Qatar Investment Authority, made it a cautionary tale of AI credulity.
2. Amazon’s 'Just Walk Out' Isn’t What It Seemed
Amazon’s stores equipped with “Just Walk Out” technology were supposed to be a triumph of computer vision and automation: shoppers could grab items and leave without scanning or checking out.
[Image: Amazon Go store with Just Walk Out technology in Seattle, Washington (Dec. 2019)]
However, internal documents revealed that up to 70% of purchases required manual verification by roughly 1,000 workers based in India. Far from being fully autonomous, the AI needed a helping human hand on most transactions.
Amazon eventually began rolling back the technology in its Fresh stores in favor of Dash Carts, a more visibly interactive option that at least sets clearer expectations for shoppers. While Amazon maintained that the India-based team was training the model, the sheer volume of human intervention led many to question whether “Just Walk Out” ever lived up to its branding.
3. Innodata's 'Lipstick on a Pig' Technology
Innodata, a data services company based in New Jersey, came under fire for what plaintiffs allege was a fictional AI platform. The company told investors it had developed a cutting-edge AI system named Goldengate.
The lawsuit from investors came about after the release of a report from financial research firm Wolfpack Research, which claimed the AI tech was "smoke and mirrors" and that its marketing claims were comparable to "putting lipstick on a pig." After the report's release, Innodata's stock price dropped more than 30%.
One highlight of the report included a conversation between Wolfpack Research and a former Innodata employee:
Former Employee: "All they do is services…"
Wolfpack Analyst: "Would all services include AI?"
Former Employee: "No… services meaning labeling, computer vision, data aggregation, as far as taking unstructured to structured."
Wolfpack Analyst: "Right, so that's not AI?"
Former Employee: "Nope."
According to an investor lawsuit, the platform was rudimentary at best, and the actual operations were handled by offshore workers rather than machine intelligence. The case is still unfolding in federal court, but if the allegations of AI washing hold, it could set a precedent for what counts as “proprietary AI” under SEC scrutiny.
4. Evolv and the AI Security That Wasn't
Boston-based Evolv Technologies marketed its Evolv Express security system as an AI-powered breakthrough in weapons detection. The company promised real-time threat identification without the hassle of metal detectors. But lawsuits — and a settlement with the FTC — suggest the AI may not have lived up to the promise.
Reports indicated that the system failed to detect certain knives and explosives. The FTC settlement barred Evolv from making unsupported claims about its AI and gave some customers, including school systems, the option to cancel contracts.
Evolv still defends its technology, saying it uses a mix of AI, software and sensors, but the episode highlighted the reputational risk of stretching claims about AI’s capabilities beyond the truth.
5. The Velvet Sundown Hoax: AI Goes Artistic
Perhaps the strangest story so far comes from the world of music. Folk-rock band The Velvet Sundown shot to viral fame on Spotify, earning over a million monthly listeners and topping charts in Europe. The band had a full backstory, visual branding and even a social media “spokesperson” — until it was revealed that the entire act was an “AI hoax.” The music had been generated by AI, guided by humans who framed the project as an artistic mirror held up to today’s AI hype.
While not fraud in the legal sense, the AI washing, once revealed, triggered a public backlash and opened a broader conversation about authenticity in creative industries.