Article · 8 min read · February 2026

Building the AI Adoption Engine: Our Vision for Tap

Most organizations measure AI adoption by counting licenses. We believe the real signal lives in the conversations you're not having. Here's why we built Tap — and where we're taking it.

The Adoption Gap Nobody Talks About

Every enterprise technology leader knows the playbook: buy licenses, run training sessions, send reminder emails, hope for the best. And every year, the results tell the same story — the tools are there, but the adoption isn't.

According to a 2024 RAND Corporation study based on interviews with 65 experienced data scientists and engineers, more than 80% of AI projects fail — roughly double the failure rate of non-AI IT projects. McKinsey and BCG have independently estimated that 70% of digital transformations fail to meet their objectives. The gap between investment and impact is enormous, and it's growing.

The AI Failure Rate

More than 80% of AI projects fail to reach production, according to the RAND Corporation (2024). That's twice the failure rate of traditional IT projects.

But here's the thing most vendors miss: the problem isn't the technology. The RAND study identified five root causes of failure, and every single one traces back to human and organizational factors — miscommunication about what problem to solve, lack of adequate data, chasing shiny technology over real problems, poor infrastructure for deployment, and applying AI to problems it can't solve.

None of these are technical problems. They're feedback problems. And that's why we built Tap.

Why Feedback Is the Missing Layer

When engineering leaders try to understand whether their teams are actually adopting AI tools, they reach for the obvious metrics: license utilization, code suggestions accepted, pull requests merged. These tell you what happened. They never tell you why.

Consider GitHub Copilot. The 2024 Stack Overflow Developer Survey found that 82% of developers using AI tools were using ChatGPT and 68% were using GitHub Copilot. A multi-company study across Microsoft, Accenture, and a Fortune 100 enterprise found an average 26% increase in developer productivity with Copilot access. Google's internal research showed developers completed tasks roughly 21% faster with AI assistance.

Those numbers look great in a quarterly review. But they hide critical questions. Which developers aren't using these tools — and why? What friction points are slowing adoption? Where are teams developing workarounds instead of embracing the intended workflow? Which use cases are delivering real value versus which are generating "almost right" code that creates more debugging work than it saves?

The Sentiment Gap

Developer sentiment toward AI tools has dropped from over 70% positive in 2023 to roughly 60% in 2025. The number-one frustration, cited by 66% of developers, is AI solutions that are "almost right, but not quite" (Stack Overflow Developer Survey).

Surveys can't answer these questions. Traditional feedback methods — annual engagement surveys, quarterly pulse checks, NPS scores — were designed for a world where organizations changed slowly. AI adoption moves at the speed of software releases, not fiscal quarters.

From Surveys to Conversations

Tap replaces static surveys with AI-driven conversations. Instead of asking employees to rate their experience on a scale of 1 to 5, Tap engages them in a genuine dialogue — asking follow-up questions, probing deeper when something interesting surfaces, adapting to the individual's context and role.

The difference isn't just cosmetic. Traditional surveys suffer from well-documented response rate collapse. Pew Research has tracked telephone survey response rates dropping from 36% in 1997 to roughly 6% by 2018. While enterprise surveys aren't telephone polls, the underlying dynamic is the same: people are drowning in requests for their attention, and a 15-question checkbox form doesn't feel worth their time.

A 3-to-5-minute AI conversation is fundamentally different. It feels like someone is actually listening. And because the AI can follow threads that matter to each individual, the insights that emerge are qualitatively richer — not just "how satisfied are you?" but "what specifically is blocking you from using Copilot for test generation, and what would make that friction go away?"

This is the core of what Tap does: it turns the unstructured reality of how people experience technology into structured, actionable intelligence that engineering leaders can act on.
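Tap's internal pipeline isn't public, so as an illustration only, here is a minimal sketch of what "unstructured conversation into structured intelligence" can mean in practice. Every name here (`Insight`, `summarize`, the field names) is hypothetical and invented for this sketch, not part of Tap's actual API: each conversation is distilled into small structured records, which then aggregate into team-level signals.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical structured record distilled from one conversation.
# These names are illustrative only, not Tap's real data model.
@dataclass
class Insight:
    theme: str        # e.g. "almost-right suggestions"
    sentiment: str    # "positive" | "neutral" | "negative"
    blocker: bool     # does this describe an adoption blocker?

def summarize(insights: list[Insight]) -> dict:
    """Aggregate per-conversation insights into team-level signals."""
    themes = Counter(i.theme for i in insights)
    blockers = sorted({i.theme for i in insights if i.blocker})
    negative = sum(1 for i in insights if i.sentiment == "negative")
    return {
        "top_themes": themes.most_common(3),
        "blockers": blockers,
        "negative_share": negative / len(insights) if insights else 0.0,
    }

insights = [
    Insight("almost-right suggestions", "negative", True),
    Insight("almost-right suggestions", "negative", True),
    Insight("faster boilerplate", "positive", False),
]
print(summarize(insights))
```

The point of the sketch is the shape of the output, not the extraction step: once free-form dialogue is reduced to records like these, leaders can ask aggregate questions ("what share of conversations were negative, and about what?") that raw transcripts can't answer.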

Why We Call It an Engine

We deliberately chose the word "engine" over "tool" or "platform." An engine is a continuous system — it takes input, transforms it, and produces output in an ongoing cycle. That's exactly what AI adoption requires.

The adoption challenge isn't a one-time event. Teams don't flip a switch and start using AI tools effectively. Adoption is a process of experimentation, friction, learning, adjustment, and gradual integration into daily workflows. It requires continuous sensing — understanding where people are in their adoption journey, what's working, what's confusing, and what's being quietly abandoned.

Tap's vision is to be that continuous sensing layer. Today, that means AI-powered conversations that surface the real blockers and accelerators of adoption. Tomorrow, it means trend dashboards that track adoption patterns across teams and time periods. Eventually, it means connecting validated needs to concrete next steps — closing the loop between insight and action.

The Adoption Cycle

Effective AI adoption isn't a launch event — it's a continuous cycle of deploy, listen, learn, and adjust. Tap is designed to power every stage of that cycle.

We're building this incrementally, with each layer informed by what we learn from real organizations using Tap. That's not just a development philosophy — it's the whole point. An adoption engine that doesn't listen to its own users would be the ultimate irony.

Where We're Headed

Tap is live today with its core conversation engine — the ability to create AI-powered feedback campaigns, collect rich qualitative data through natural conversations, and surface themes and sentiment patterns through automated analysis.

What comes next is the Action Layer: the set of capabilities that transform raw insight into organizational momentum. We're building toward trend dashboards that let leaders track adoption signals over time, feature request aggregation that turns individual feedback into prioritized backlogs, and workflow intelligence that connects adoption patterns to team structure and project context.

The organizations that win at AI adoption won't be the ones with the biggest training budgets or the most licenses. They'll be the ones that build a genuine feedback loop between their people and their technology strategy. That's the engine we're building.

If you're an engineering leader navigating AI adoption and want to move beyond license counts and completion rates, we'd love to talk. Tap Insider is open, and the teams shaping the product today will shape how every organization understands adoption tomorrow.