After six years of implementing data platforms for enterprise clients, I noticed that the same problems appeared on every project. Not technical problems. Human ones. Clients who thought their data was ready when it was not. Stakeholders who each had a different definition of success. Products that were built, delivered, and quietly ignored.
Root to Value is the methodology I built to address those problems directly. It is opinionated. It is based on real work, not theory. And it is designed to close the gap between what a data platform promises and what it actually delivers for the people using it.
The name comes from a simple idea: you cannot grow value from bad roots. If the data foundation is wrong, everything built on top of it will be wrong too. Get the roots right first. Then build.
Three things that are always true
1. "Data ready" means something different to the customer than it means to you.
Customers work with data at the surface level. They see reports, dashboards, exports. They have never had to assess completeness, consistency, or joinability across systems. When a client says their data is ready, what they mean is: someone in IT told them it was ready. That is not the same thing.
Never accept "our data is ready" without verification. Always assess across all five dimensions — completeness, consistency, accuracy, timeliness, and joinability — before any configuration begins.
In one engagement, a client assured me their ERP data was clean and mapped. On assessment, three key fields had been manually overridden for two years without documentation. We caught it in Discovery. Had we caught it in Configuration, the project would have reset entirely.
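What a five-dimension check can look like in practice is sketched below. This is an illustrative Python sketch assuming the source arrives as a pandas DataFrame; the function name, metrics, and the seven-day timeliness window are my own choices, not part of the methodology. Note that accuracy is deliberately left as a manual step: it needs a trusted reference sample, not a formula.

```python
# A rough sketch of a data readiness check. The dimension names come from
# the methodology; the metrics and thresholds here are illustrative choices.
import pandas as pd

def assess_readiness(df: pd.DataFrame, key: str, reference_keys: set,
                     date_col: str, max_age_days: int = 7) -> dict:
    """Score a source table on the readiness dimensions (0.0 to 1.0)."""
    now = pd.Timestamp.now()
    scores = {
        # Completeness: share of cells that are actually populated.
        "completeness": float(df.notna().to_numpy().mean()),
        # Consistency: share of rows whose key has no conflicting duplicate.
        "consistency": float(1 - df.duplicated(subset=key).mean()),
        # Timeliness: share of rows refreshed within the agreed window.
        "timeliness": float(((now - pd.to_datetime(df[date_col])).dt.days
                             <= max_age_days).mean()),
        # Joinability: share of keys that resolve in the target system.
        "joinability": float(df[key].isin(reference_keys).mean()),
    }
    # Accuracy cannot be computed mechanically; it needs comparison
    # against a trusted reference sample, so it stays a manual step.
    scores["accuracy"] = None
    return scores
```

Run against the client's "ready" data before any configuration begins; a manually overridden field like the ERP case above would surface as a consistency or accuracy finding, not as an IT sign-off.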
2. Every stakeholder has a different definition of success. Map them all on day one.
The CFO wants ROI visibility. The data engineer wants clean architecture. The end user wants speed. The IT manager wants security compliance. None of them will tell you this unless you ask directly. And each of them will evaluate the go-live through their own lens.
Map every stakeholder's definition of success in the first week. Write it down. Share it back to them. Then never send the same update to all of them — tailor every communication to what they actually care about.
On a procurement analytics implementation, the CFO defined success as one dashboard showing spend by supplier. The analyst team defined success as drill-through capability across 14 dimensions. Both were right. We built for both. But knowing this on day one meant we scoped correctly instead of discovering the misalignment at UAT.
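The written success map can be as simple as one entry per stakeholder, in their own words, that every update gets framed against. A minimal Python sketch; all names and goals below are invented examples, not from a real engagement.

```python
# Illustrative week-one success map: one entry per stakeholder, captured
# in their own words and used to tailor every update they receive.
success_map = {
    "CFO": "ROI visibility: one dashboard showing spend by supplier",
    "data engineer": "clean architecture: a documented semantic layer",
    "end users": "speed: drill-through answers in seconds, not days",
    "IT manager": "security compliance: SSO and audited access",
}

def tailored_update(stakeholder: str, progress: str) -> str:
    """Frame one piece of progress in terms of this stakeholder's goal."""
    goal = success_map[stakeholder]
    return f"{stakeholder}: {progress}. Next step toward: {goal}"
```

The point is not the code; it is that the same progress item produces four different sentences, one per lens.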
3. Feature adoption collapse — you build ten reports, they use two.
This is the most common failure mode in data platform implementations. The product is technically complete. The delivery is on time. Three months later, active usage has collapsed to two reports that the team was already running before the implementation.
Adoption must be designed from kickoff, not assumed at go-live. End users need to be involved before the product is finished. The reports they never asked for will never be used. Build fewer things that matter more — and measure adoption from week one.
In a retail analytics project, we delivered 11 reports at go-live. Six months later, client data showed two were used daily. The other nine had not been opened in four months. We had built to the spec, not to the actual workflow. The rebuild started from observation of how the team actually worked.
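Measuring adoption from week one only needs a usage log. A sketch, assuming the platform exposes access events as (report name, date) pairs; the 28-day staleness threshold is an illustrative choice.

```python
# Sketch of week-one adoption measurement over a simple usage log.
# Reports that were never opened, or not opened recently, are flagged
# before non-use hardens into the team's normal workflow.
from datetime import date

def adoption_report(events, all_reports, as_of, stale_after_days=28):
    """Return each report's last access date and the list going stale."""
    last_seen = {}
    for report, day in events:
        if report not in last_seen or day > last_seen[report]:
            last_seen[report] = day
    stale = [r for r in all_reports
             if r not in last_seen
             or (as_of - last_seen[r]).days > stale_after_days]
    return last_seen, stale
```

Run weekly from go-live, this would have surfaced the nine unopened reports in the retail case within a month instead of six.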
Six phases. What happens when you skip one.
Pre-kickoff
Account intelligence gathering. Research client industry, tech stack, and likely complexity. Form a hypothesis about where the hard problems will be. Reach out to a champion before the first meeting so you walk in knowing someone.
If skipped: You arrive at kickoff knowing nothing the client does not. You spend the first two weeks learning things you could have known before you started. The client notices.
Kickoff
Define success before building anything. Written success plan agreed with all stakeholders. IAM requirements surfaced on day one. IT engagement started immediately — not after configuration has begun.
If skipped: IT dependencies surface six weeks later. SSO configuration blocks go-live. You spend the final sprint negotiating access that should have been sorted in week one.
Discovery
Full data readiness assessment across five dimensions. Semantic layer defined before any report development begins. Every data source documented with owner, refresh frequency, and known quality issues.
If skipped: Data quality problems surface in UAT. Clients validate what they expect to see rather than what is true. Issues found at UAT cost five times more to fix than issues found in Discovery.
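The Discovery documentation requirement can be made concrete as one catalog record per source. A minimal sketch; the field names follow the text above, and every example value is invented.

```python
# One catalog record per data source, written during Discovery.
# Fields mirror the text: owner, refresh frequency, known quality issues.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    owner: str                 # a named person, not a team inbox
    refresh_frequency: str     # e.g. "nightly", "hourly", "manual"
    known_issues: list = field(default_factory=list)

# Invented example entry:
erp_spend = DataSource(
    name="erp_spend",
    owner="finance.systems@example.com",
    refresh_frequency="nightly",
    known_issues=["cost_center manually overridden since 2022, undocumented"],
)
```

A known-issues list with entries in it is the goal: an empty one usually means nobody asked, not that the source is clean.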
Configuration
Every decision documented in real time. End user champions involved from the start — not invited to review at the end. Configuration debt actively prevented by building the right thing once, rather than building quickly and reworking later.
If skipped or rushed: Undocumented decisions accumulate. The engineer who made a shortcut leaves. Nobody knows why the filter behaves the way it does. Every change request becomes an archaeology project.
Go-live
The starting line, not the finish line. Lead with 2 to 3 high-value reports that solve a real daily problem. Pre-launch checklist complete. Support process documented and communicated before the first user logs in.
If treated as the end: Adoption is never measured. Problems compound silently. Three months later the platform is technically live and behaviourally dead.
Hypercare
Compress time to value in the first 30 days. Daily usage monitoring. Address adoption collapse before it becomes entrenched behaviour. Measure the first meaningful outcome — not just usage, but a decision that was made using the platform.
If skipped: Low adoption becomes the new normal. The platform lives on a dashboard nobody checks. The implementation is technically complete and practically invisible.
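The daily monitoring in Hypercare can be reduced to one comparison: this week's usage against launch week. A sketch over a simple series of daily active user counts; the 50% collapse threshold is an illustrative choice, not part of the methodology.

```python
# Sketch of the hypercare monitoring loop: flag adoption collapse while
# it is still a trend, not yet entrenched behaviour.
def adoption_alert(daily_active_users, collapse_ratio=0.5):
    """True if the latest 7-day average has fallen below collapse_ratio
    times the launch-week (first 7-day) average."""
    if len(daily_active_users) < 14:
        return False  # not enough history to compare two full weeks
    launch_avg = sum(daily_active_users[:7]) / 7
    recent_avg = sum(daily_active_users[-7:]) / 7
    return launch_avg > 0 and recent_avg < collapse_ratio * launch_avg
```

An alert inside the 30-day window is a prompt for a conversation with the champion, not a dashboard metric to file away.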
Six years of implementations taught me that the technical work is rarely the hard part. The hard part is the space between the product and the people who need to use it. That space is what this framework is designed to close.