Product Adoption · EdTech · Implementation · Research

Why Your EdTech Pilot Succeeded and Your Rollout Did Not: The Research Behind Pilot Purgatory


A colleague in the space recently sent me a collection of articles that inspired this piece. Not because any single finding was surprising on its own, but because when you read them together, they paint a picture that every EdTech product leader needs to see. The research is converging on a single uncomfortable conclusion: the way most EdTech companies run pilots is fundamentally broken, and it is costing the entire industry.

I have written about the pilot-to-scale gap before from a practitioner's perspective. What these new sources add is the data. The scale of the problem is now quantified, and the patterns I have seen in individual engagements are showing up across industries.

The numbers are stark. According to Gartner, nearly 85% of AI initiatives never make it to production. McKinsey's research puts it slightly differently but just as bluntly: only 10 to 15% of companies actually achieve measurable business impact from AI. And Deloitte found that over 60% of AI pilots are owned by a single team with limited cross-functional buy-in (Jain, EdTech Digest, 2025). These are not EdTech-specific numbers, but the pattern applies directly. K-12 may actually be worse.

Nearly 85% of AI pilots never reach production. The problem is not that pilots fail. The problem is that successful pilots create a false sense of readiness for what comes next.

The Pilot Purgatory Problem

Novo Innovative Pathways, a consultancy focused on education innovation, uses a term that deserves wider adoption: pilot purgatory. It describes the state where a district or organization runs pilot after pilot, each one producing promising results, but never builds the institutional infrastructure to move from pilot to full implementation. The pilots keep happening. The scaling never does.

This is not a failure of willpower or budget. It is a structural problem. As Novo's research describes, educators are largely left to navigate AI adoption on their own. Only one in five teachers reported that their school even had an AI policy, and over two-thirds received no AI training. Tool adoption is ad hoc rather than strategic. Each new tool gets its own informal trial, its own champion, its own isolated set of results. And when that champion moves on or the budget cycle shifts, the pilot results vanish with them (Novo Innovative Pathways, 2025).

Dr. Julia Rafal-Baer, co-founder and CEO of ILO Group, an education and policy strategy firm, put it even more directly in a recent GovTech opinion piece co-authored with Dr. Scott Muri: "K-12 is great at launching cool pilots and terrible at institutionalizing what works" (GovTech, 2025). They describe what they call the shiny-object syndrome: "rushing something into classrooms simply because it is new." And too often, they write, even pilots lack research, clear goals, or measures of success.

The Organ Rejection Effect

McKinsey's Spark and Sustain research, which studied school systems across 73 countries, identifies a failure mode they call "organ rejection of reform." It happens when improvements falter in the face of pushback from communities and educators who feel they were not consulted. Top-down policies may not actually work once they reach the classroom (McKinsey, 2024). The technology itself may be perfectly functional. But the environment it is being introduced into is not prepared to accept it.

In K-12 EdTech, this organ rejection effect shows up everywhere. A pilot succeeds in three classrooms where teachers volunteered and received intensive support. The district decides to roll it out to 30 classrooms. The new teachers did not volunteer. They did not receive the same depth of training. Their principals may not even understand why this tool was chosen. The technology is identical, but the conditions around it have completely changed. And so the adoption fails.

The McKinsey research found that only 20% of education improvement efforts meet their stated goals, and only 23 of 73 systems managed to achieve significant, sustained improvement in student outcomes over the past decade. The systems that succeeded invested in organizational conditions: leadership alignment, coherent priorities, authentic engagement with educators and communities, and sustained support over years, not months. The fix is not more pilots or better technology. It is investing in the conditions that allow change to take root (McKinsey, 2024).

McKinsey calls it "organ rejection of reform." The technology works. The culture was not prepared to receive it.

Why 60% of Pilots Are Set Up to Stall

Dipesh Jain, writing in EdTech Digest, highlights a Deloitte finding that should alarm every product team: over 60% of AI pilots are owned by a single team with limited cross-functional buy-in. In EdTech, this plays out predictably. The customer success team owns the pilot. Sales owns the expansion conversation. Product owns the roadmap. The curriculum team at the district owns the pedagogical alignment. And none of these groups are systematically coordinating on what it takes to move from 3 classrooms to 300.

This fragmentation creates a gap between pilot and production that nobody owns. The pilot generates data that proves the product works in controlled conditions. But nobody has built the bridge between those controlled conditions and the messy reality of full deployment. The onboarding process that worked for 10 eager volunteers does not work for 100 teachers who were told to use something new in the middle of the school year. The support model that worked when your team had 3 accounts does not work when you have 30 (Jain, EdTech Digest, 2025).

This is not a sales problem or a product problem. It is a systems problem. And it requires a systems-level response.

The Shiny Object Trap

The shiny-object syndrome observation from Rafal-Baer and Muri cuts in both directions. Districts chase new tools because they are under pressure to show innovation, and every conference season brings a new wave of products promising transformation. But vendors are equally complicit. The incentive structure in EdTech rewards launching new pilots, not sustaining existing ones. A new pilot logo looks great on the investor deck. A quietly successful renewal in year three does not generate the same excitement.

The result is a market-wide pattern where both buyers and sellers optimize for the wrong phase of the adoption lifecycle. Districts spend disproportionate energy evaluating and piloting new tools. Vendors spend disproportionate energy on demos and proofs of concept. And the actual hard work of embedding technology into daily practice, the work that determines whether students benefit, gets structurally underinvested.

As they wrote, the problem is not identifying promising technology. "We are prone to the shiny-object syndrome, rushing something into classrooms simply because it is new." The problem is what comes after: the unglamorous, multi-year process of training, iterating, measuring, and institutionalizing (Rafal-Baer and Muri, GovTech, 2025).

What the Research Tells Us to Do Differently

Across these sources, a consistent set of recommendations emerges. They are not new ideas. But the fact that education researchers, industry analysts at Gartner and Deloitte, and experienced policy leaders are all arriving at the same conclusions makes them harder to ignore.

Build cross-functional ownership before you scale

The Deloitte finding that over 60% of AI pilots are owned by a single team with limited cross-functional buy-in is a red flag. Successful scaling requires that sales, customer success, product, and the customer's own leadership team are aligned on what success looks like and who is responsible for what. If your pilot was run entirely by your CS team and the district's tech coordinator, you do not have the infrastructure for scale.

Design pilots to test scalability, not just effectiveness

If your pilot only proves the product works under ideal conditions, it has not proven anything useful for scale. Include teachers who were assigned, not just volunteers. Reduce the level of support to what your model can actually sustain. Track what happens after the initial training window closes. A pilot that succeeds under realistic conditions is worth ten that succeed under ideal ones.

Invest in the organizational conditions, not just the technology

The technology is not the limiting factor. The culture is. McKinsey's organ rejection finding should be required reading for every EdTech go-to-market team. The systems that sustained improvement invested in leadership alignment, coherent priorities, authentic engagement with educators, and sustained professional development. The same principle applies to EdTech: if teachers feel a tool was imposed on them without consultation, they will reject it regardless of how well it performed in a pilot.

Break out of pilot purgatory with a standardized adoption process

Novo's point about ad hoc tool adoption is critical. If every new tool gets its own informal trial without a repeatable framework for evaluating results and deciding next steps, you will run pilots indefinitely. Districts need a structured adoption process, and vendors need to help build it rather than hoping each pilot will organically lead to expansion.

Why This Matters Right Now

The timing of this research convergence matters. K-12 is in the middle of the largest technology spending wave in its history, driven by ESSER funds, AI enthusiasm, and pandemic-era digital infrastructure. As those dollars begin to contract and budget accountability tightens, the products that survive will be the ones with genuine adoption, not just pilot logos.

The districts that will renew are the ones where the technology is actually embedded in daily practice. The vendors that will retain their accounts are the ones who invested in the bridge between pilot success and sustainable use. Everything else is temporary.

The products that survive the next budget cycle will not be the ones with the most pilots. They will be the ones with the deepest adoption in the schools where they already exist.

Sources

Jain, Dipesh. "From Pilot to Scale: Why Most AI Projects Fail to Move the Needle." EdTech Digest, 2025.

McKinsey and Company. "Spark and Sustain: How All the World's School Systems Can Improve Learning at Scale." McKinsey Education, 2024.

NOVO Innovative Pathways. "2025 AI and Education Year-in-Review: Trends, Insights and Opportunities." 2025.

Rafal-Baer, Julia, and Scott Muri. "AI in K-12 Schools: 5 Moves Only Leaders Can Make." GovTech, 2025.

Gartner and Deloitte statistics cited via Jain, EdTech Digest, 2025.

If your product has strong pilot results but struggles to scale into sustained adoption, that is exactly the kind of challenge I help EdTech teams diagnose and solve. The gap between pilot and scale is not a mystery. It is a system design problem with a system design solution.

Free Assessment

How strong is your adoption strategy?

Take the free 3-minute Adoption Diagnostic. Score your product across five critical dimensions and get personalized recommendations.

Your Product Is Strong. Let’s Make Sure Educators See It.

Most product adoption challenges aren’t about the product itself. They’re about how it’s positioned, introduced, and supported. Let’s figure out what yours needs.

Start the Conversation