Generative AI Adoption: Top-Down or Bottom-Up?
As generative artificial intelligence (AI) tools such as ChatGPT gain traction, a pivotal question is surfacing across organizations: is AI being adopted as a coordinated business strategy, or simply used informally by employees trying to save time?
Executives often speak confidently about “AI transformation,” but on the ground, the story is more fragmented. A whitepaper from SAS reveals that many employees have already integrated tools like ChatGPT into everyday workflows — to write email messages, summarize reports, generate code or brainstorm ideas — often without formal guidance, governance or oversight.
This bottom-up experimentation, while valuable, introduces challenges around consistency, security and scalability. As a result, there's a widening gap between AI ambition and AI execution. Some companies are moving decisively by embedding AI into internal systems, launching training initiatives and integrating it into customer-facing solutions. Others are stuck in a holding pattern, unsure how to assess AI’s return on investment, build roadmaps or manage risk.
The shift from isolated experimentation to enterprise integration is already underway. According to a recent EY survey,
- 48% of tech executives report adopting or deploying agentic AI.
- 50% expect a majority of their AI implementations to be autonomous within the next two years.
- 81% believe AI will help them meet business goals within 12 months.
- 70% are investing in talent through upskilling and 68% through external hiring.
So, is generative AI a top-down enterprise initiative or a bottom-up experiment gathering steam? Increasingly, the answer is: both. And the line between the two is disappearing fast.
A GenAI Inflection Point in the Workplace
As companies such as OpenAI and Anthropic release successive generations of large language models (LLMs), one trend has become apparent: Performance benchmarks are converging. While current LLMs are still powerful, raw model improvements are yielding diminishing returns, especially in reasoning.
Creating further value now depends more on how models are used than on the models themselves. Prompting techniques such as chain-of-thought reasoning and multi-turn dialogue, along with training methods like reinforcement learning, are becoming increasingly essential to generating reliable outcomes.
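In practice, these prompting techniques often amount to little more than structured prompt construction. The sketch below is illustrative only: the function names and the role/content message format are assumptions modeled on common chat-style LLM APIs, and the actual model call is omitted.

```python
# Minimal sketch of chain-of-thought and multi-turn prompting patterns.
# The message format follows the "role"/"content" convention used by most
# chat-style LLM APIs; no specific vendor API is assumed.

def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering."""
    return (
        f"{question}\n"
        "Think through the problem step by step, "
        "then state your final answer on the last line."
    )

def add_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Append one turn to a multi-turn dialogue history."""
    return history + [{"role": role, "content": content}]

# Build a short conversation around a chain-of-thought question.
messages = [{"role": "system", "content": "You are a careful analyst."}]
messages = add_turn(
    messages, "user",
    chain_of_thought_prompt("Summarize the report's key revenue drivers."),
)
```

The resulting `messages` list would then be passed to whatever model endpoint the organization uses; the value comes from the structure of the prompt, not from any particular model.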
“We’re reaching a point where the marginal gains from newer models are starting to flatten out,” said Nimisha Mehta, a DevOps engineer at Confluent. “To push performance meaningfully forward, we’ll likely need a true breakthrough. Until then, success will depend on how creatively we use what we already have.”
GenAI and the Complexity Factor
Despite the flood of new AI products — some impressive, others more novelty than utility — the real challenge lies in converting capabilities into business value. Many organizations are beginning to recognize that AI is most effective when treated not as the centerpiece, but as a behind-the-scenes assistant.
In practical terms, the most immediate examples involve automating low-complexity workflows: summarizing Slack threads, triaging customer support tickets or generating routine documentation. In these contexts, LLMs shine by accelerating projects, drafting content and reducing manual work.
But when applied to complex, multi-system environments like those in large enterprises, LLMs falter. They may misunderstand system logic, lose context or generate outputs based on flawed assumptions.
“Beyond a certain threshold of complexity, LLMs struggle,” Mehta explained. “They just don’t have the systems-level visibility or contextual awareness that humans naturally bring to the table.”
Meanwhile, agentic AI is expanding GenAI’s reach. These systems take more autonomous actions, from executing transactions to managing workflows. But they come with new risks: lack of reliability, difficulty in controlling behavior and outputs that still feel artificial or misaligned with user expectations.
Despite these limitations, enterprise leaders are betting big. Even small productivity gains are enough to justify investment, and the field is evolving rapidly. Tools that offer reliability, simplicity and secure integration are emerging as frontrunners.
“Even if today’s AI tools are imperfect, they’re still worth paying close attention to,” Mehta said. “In a gold rush, the smartest move might just be to sell the shovels.”
What AI Adoption Numbers Don’t Show
While 78% of global companies report using AI, and 71% say they’ve deployed GenAI in at least one business function, much of this usage is superficial, said Iliya Rybchin, principal at BDO.
“On paper, it sounds impressive,” Rybchin said. “But in reality, most of this ‘adoption’ amounts to employees using ChatGPT to write emails or brainstorm, usually without IT support or executive oversight.”
Leadership rhetoric often outpaces real implementation, Rybchin contended. The result is what he calls “proof of concept purgatory,” a state where pilots are launched with enthusiasm but never scaled, leaving companies stuck between experimentation and transformation.
“The AI boom has created an expertise vacuum,” Rybchin explained. “Many are rushing to fill it, armed with little more than buzzwords and updated LinkedIn titles.”
A popular Reddit thread captured this dynamic with biting satire, Rybchin said. “You may notice I only started being an AI thought leader last year. Before that, I was one of my firm’s preeminent blockchain experts. Prior to that, I led our IoT practice ….”
Beyond the jokes lie serious risks. Overhyped and underdelivering solutions are muddying the waters, making it harder for genuine innovation to gain traction, said Arvind Narayanan, computer science professor at Princeton. Meanwhile, institutional pressures lead some CIOs to rebrand routine automation as “AI agents,” while consultants pitch demos that never make it to production.
The takeaway: AI is not just a tech problem. It’s an organizational one. Success requires rethinking how people, processes and data interact, and ensuring AI supports, not disrupts, broader business goals.
Top-Down vs. Bottom-Up: The GenAI Problem
So where is GenAI really being driven from? “GenAI is simultaneously a top-down initiative and a bottom-up movement,” said Paul McDonagh-Smith, senior lecturer at MIT Sloan Executive Education. “That’s what makes it so powerful — and so risky.”
Today, GenAI sits in boardroom roadmaps and grassroots workflows alike. From finance and research to human resources and customer service, executives outline long-term potential while employees test its usefulness in day-to-day tasks.
This dual adoption model fuels creativity and rapid learning, but is also uncoordinated. When AI is used without oversight, mistakes happen. Those errors can come with steep costs: financial, regulatory and reputational, McDonagh-Smith said.
“Organizations must consider not just the cost of correcting mistakes, but the cost of missing them altogether,” McDonagh-Smith warned.
Still, overregulating grassroots innovation risks stifling progress. The solution isn’t to shut down experimentation, but to align it with enterprise objectives.
“The companies that succeed will be those that connect bottom-up innovation with top-down strategy,” McDonagh-Smith said. “That’s how you turn scattered efforts into sustainable advantage.”
Countering AI Adoption Risks
AI success hinges on thoughtful integration, not speed, said Len Gilbert, executive vice president and head of strategic consulting at Mod Op.
“Not having an AI plan that is well understood and connected to business strategy makes it difficult to ensure that AI will be net-beneficial,” Gilbert said.
Data from S&P Global Market Intelligence shows a 42% failure rate for AI projects in 2025. Common culprits include poor data quality, regulatory barriers, weak integration and unclear value propositions.
Mod Op identifies five core pillars for successful enterprise AI:
- Strategy and governance — AI must be adopted top-down, with support from leadership and rules for legal and ethical use. Gilbert recommends forming an AI Council to oversee implementation.
- Clearly defined use cases — Focus efforts where they matter most: customer experience, operational efficiency or workforce productivity. Start small, succeed fast, scale intentionally.
- Clean, trusted data — AI systems are only as effective as the data behind them.
- Measurable impact — Define success metrics up front. Without key performance indicators, even well-built AI tools risk becoming costly experiments.
- Change management — AI transformation is also a people transformation. Mod Op incorporates training on prompt engineering, ethical use and foundational AI principles into every engagement.
“Use AI to improve the outcomes defined in your strategy,” Gilbert added. “Don’t just experiment; align its use with your goals.”
Whether top-down or bottom-up, GenAI is reshaping the digital workplace. But without alignment to business goals, many efforts risk falling short. The companies that succeed will be those that combine grassroots experimentation with executive clarity, creating structure without stifling innovation. Pragmatism and creativity must go hand-in-hand.
Editor's Note: For more on GenAI adoption trends:
- Big Tech Bets Billions on GenAI, But Adoption Is Slow — Companies like Microsoft, Google and Salesforce are betting the house on generative AI, yet adoption rates lag far behind the investment. Here's why.
- What It Takes for GenAI Pilots to Scale — Alan Pelz-Sharpe, Rebecca Hinds and Craig Durr join Three Dots to discuss why individual productivity gains with GenAI haven't scaled across the organization.
- Why HR and IT Must Join Forces for AI to Succeed — The overlooked partnership at the center of real AI adoption.