Between Insight and Implementation
Given the current pace of AI, it’s easy to lose sight of what actually happens between data and decision.
There’s a growing belief that we can hand off much of the manual and cognitive load of solution design to the nearest LLM. And while that’s increasingly true for exploration and prototyping, enterprise implementation still follows the same sequence of difficult, detailed, and very human steps that it did a decade ago.
And even then, success is never guaranteed.
This is the daily reality for data and analytics professionals, and it’s one that’s often underappreciated. What’s even less appreciated is that once an analytics initiative starts, the clock starts too.
Three months.
You have three months to move from POC excitement to something that looks like an enterprise implementation before interest wanes, priorities shift, or funding moves somewhere else.
Tick. Tock.
Day 1
An idea surfaces from a business unit.
An oil and gas example might be production prediction using multiple disparate inputs: SCADA time series coming out of SQL; well logs in LAS format; seismic volumes in SEG-Y or, increasingly, Zarr cubes; satellite imagery as COGs accessed through STAC catalogs; and financial data sitting in Snowflake, SAP, or CSV exports someone emails weekly.
A proof-of-concept model comes together fairly quickly as data is manually gathered, wrangled, and harmonized locally, just enough to work.
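That local wrangling usually looks something like the sketch below: a hand-rolled join between a SQL store and an emailed CSV, keeping only the rows where both sources line up. All table, column, and well names here are hypothetical, and the in-memory database stands in for whatever the SCADA historian actually exposes.

```python
import sqlite3, csv, io

# Hypothetical SCADA readings in a local SQL store (in-memory for the sketch).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scada (well_id TEXT, day TEXT, oil_rate_bbl REAL)")
db.executemany("INSERT INTO scada VALUES (?, ?, ?)", [
    ("W-001", "2024-01-01", 410.0),
    ("W-001", "2024-01-02", 398.5),
    ("W-002", "2024-01-01", 122.3),
])

# Hypothetical weekly financial CSV someone emails around.
finance_csv = io.StringIO(
    "well_id,day,opex_usd\nW-001,2024-01-01,5200\nW-002,2024-01-01,1800\n"
)
opex = {(r["well_id"], r["day"]): float(r["opex_usd"])
        for r in csv.DictReader(finance_csv)}

# Manual inner join: keep only rows present in both sources -- just enough to work.
merged = [
    {"well_id": w, "day": d, "oil_rate_bbl": rate, "opex_usd": opex[(w, d)]}
    for (w, d, rate) in db.execute("SELECT * FROM scada")
    if (w, d) in opex
]
print(len(merged))  # rows with both production and cost data
```

This is exactly the kind of glue that feels fine at POC scale and becomes a liability the moment a second region, a second schema, or a second consumer shows up.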
Early results are promising and there is enough interest to move forward.
Day 10
MVP conversations begin.
More data is requested. Not just more rows, but more variation, more edge cases, more history.
People want to know if the results hold up to scrutiny.
More people join and early pipeline sketches appear.
Delivery questions start to surface: dashboard, API, map service, batch report, or all of the above? At this stage, most data is still moving via scripts, scheduled extracts, or one-off connectors.
It still feels manageable.
Day 20
The first real friction shows up. Some imagery needed for analysis can’t be stored where the team expected. The data engineering team is already stretched. Current pipelines work but only under ideal conditions. They’re certainly not robust.
Global expansion conversations start, and suddenly you're dealing with more databases (PostGIS, Oracle, SQL Server), different data schemas, and duplicate identifiers. Nothing by itself is catastrophic, but the friction is growing.
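The duplicate-identifier problem, in particular, has a common stopgap: qualify every local ID with its source system or region before merging catalogs. A minimal sketch, with entirely hypothetical regions and well IDs:

```python
# Two regional databases (hypothetical) that reuse the same local well IDs.
north_sea = {"W-001": {"field": "Alpha"}, "W-002": {"field": "Bravo"}}
permian = {"W-001": {"field": "Midland"}}  # same ID, different well

def qualify(region, wells):
    """Prefix each local ID with its region so merged keys never collide."""
    return {f"{region}:{wid}": rec for wid, rec in wells.items()}

# Merged catalog: collisions are impossible, but every consumer now has to
# know about (and agree on) the qualification scheme.
catalog = {**qualify("NS", north_sea), **qualify("PB", permian)}
print(sorted(catalog))
```

It works, but it is one more convention that every pipeline, dashboard, and downstream team has to adopt consistently, which is precisely how the friction compounds.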
Day 35
Data governance enters the conversation in a real way, and questions shift from "can we predict this?" to "who owns this dataset? Can it be shared? How do we designate authoritative versions? Will the reporting tools be performant?" Interest is still high, but momentum starts to taper.
Day 50
Momentum is now noticeably attenuated.
Some pipelines look enterprise-ready. Most do not.
More time is spent tuning models than designing durable data flows, not because it's the right priority, but because it's the part teams can control directly.
Meanwhile, global dashboards struggle with dataset size; joins become fragile; schemas drift. More people are pulled into the project and sharing data gets harder, not easier.
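Schema drift is the quiet killer here: an upstream rename or a silently added column breaks joins long before anyone notices. Teams often bolt on a lightweight check like the following, where every column name is illustrative rather than from any specific system:

```python
# Columns the pipeline was built against (hypothetical).
EXPECTED = {"well_id", "day", "oil_rate_bbl", "water_cut"}

def schema_drift(actual_columns):
    """Compare what the source now delivers against what the pipeline expects."""
    actual = set(actual_columns)
    return {
        "missing": sorted(EXPECTED - actual),      # breaks downstream joins
        "unexpected": sorted(actual - EXPECTED),   # silent upstream additions
    }

# Upstream renamed oil_rate_bbl and added gas_rate without telling anyone.
drift = schema_drift(["well_id", "day", "oil_rate", "water_cut", "gas_rate"])
print(drift)
```

The check itself is trivial; the point is that each team ends up rebuilding this kind of scaffolding independently, for every source, on every project.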
Day 70
The hidden costs become visible: cloud spend increases, contractors are hired, and existing application 'credits' are topped up. Engineering effort is already focused on maintenance even though the project has yet to reach full deployment. Morale and momentum hit a low.
Day 91
Time’s up.
The analytic itself works, and there is strong evidence it provides value. But the system required to operationalize it is fragile, expensive, and incomplete. The work continues, but the confidence in, and the probability of, success both decrease.
By now, 20–30 people may have touched the project in some way. It gets recorded as a win, but everyone knows it's not. Better luck next time.
Why We Exist
This isn't a tail case. It's the standard, and it is why we exist. The infrastructure that everyone assumes is already in place usually is not. It's created and recreated, against a ticking clock, every time an analytics or AI effort is undertaken. The resulting value, lost and unrealized, can be reclaimed by virtualizing that infrastructure with Geodesic.
AI is accelerating how quickly we can generate insight. But data connection maturity still determines whether that insight ever becomes a reliable enterprise capability.
Most teams don’t discover that until the clock is already running.
