The Problem with LLMs in Organizations
Ask ten people how they value LLMs and you’ll get eleven different answers. They are terrible, or they are amazing; overhyped or underutilized; too simple or too magical. Whether you are an evangelist, a denigrator, or an apologist, their value is the same as any other tool’s: high if you use them in the right way; low if you don’t. But what is the right way to use LLMs? What are their functional ceilings in an organization, and how can those be raised? These are the questions we’re asking at SeerAI as we look towards the next phase of our development. We’re beginning to have answers.
3 Truths
That feeling of magic when we use something like ChatGPT in our personal lives dissipates into frustration when we try to use it the same way inside our organization. The reason is that, like us, LLMs need three foundational layers to be effective in that setting:
- Data - Not just the open-source, size-of-the-internet data they were trained on, but the real, proprietary, actionable data that sits across the enterprise
- Tool Access - MCP (and its alternatives). This is a must. Imagine if someone asked you to do something at work and then took away your computer!
- Context - In SeerAI’s world, this is ontology. LLMs need an encoded understanding of how everything important to your org (e.g. data, people, groups, products) is related; see the sketch below
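
To make that context layer concrete, here is a minimal, hypothetical sketch of what an encoded ontology can look like: a handful of subject-predicate-object triples relating datasets, people, and products, plus a small helper a tool-enabled LLM could call to pull everything connected to an entity. The entity names, predicates, and the `related_to` helper are illustrative assumptions, not Geodesic’s actual model or API.

```python
# Hypothetical sketch: an org ontology as subject-predicate-object triples.
# Entity names and predicates below are invented for illustration only.
TRIPLES = [
    ("gulf_coast_wells.parquet", "describes",     "Gulf Coast assets"),
    ("gulf_coast_wells.parquet", "owned_by",      "Geoscience Data Team"),
    ("Gulf Coast assets",        "exposed_to",    "seismic risk"),
    ("Refinery Throughput Model", "consumes",      "gulf_coast_wells.parquet"),
    ("Refinery Throughput Model", "maintained_by", "Production Analytics"),
]

def related_to(entity: str) -> list[tuple[str, str, str]]:
    """Return every triple that mentions `entity`, as subject or object."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

# An LLM with tool access could call this to ground an answer in org context:
for subj, pred, obj in related_to("gulf_coast_wells.parquet"):
    print(f"{subj} --{pred}--> {obj}")
```

In practice this graph would live in a real ontology or knowledge-graph store, but the shape of the question is the same: what touches this entity, and who owns it?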
An Example
Let’s look at an example prompt we gave ChatGPT:
“Tell me about what oil and gas infrastructure is at risk from seismic activity and how might that affect production? Be as specific as possible with locations, dollar amounts, and event magnitudes”
The results, while satisfactory for general inquiry, are far from what would be needed in a large organization.
The reply consisted of examples of:
- Infrastructure that had experienced past damage
  - Example: 2011 Tōhoku earthquake (Japan, M9.0): storage tanks at a 220,000 bbl/day Cosmo refinery ignited and burned for 10+ days; a 145,000 bbl/day JX Nippon refinery was damaged
- Specific potential hazards
  - Major faults (Hayward, San Andreas) run near major refinery hubs in the Bay Area and Southern California. A large quake (M7+) could damage infrastructure like the Chevron Richmond Refinery
- Macro-economic risks
  - Disruptions in major producing regions or major importers (Japan, California refineries) can tighten global supply
The reply did not provide field-level or well-level operation risks for specific companies (no data for that); it did not provide a map or other visual tool capable of being shared across an enterprise (limited tool access); and it did not suggest who to contact for more information, what additional data might be useful, or any org-specific factors to consider (lack of context).
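
By way of contrast, here is a hypothetical sketch of the kind of internal tool that could close those gaps when exposed to an LLM over MCP or a similar protocol. The asset names, fields, and numbers are invented for illustration; the point is that field-level answers require a tool that queries proprietary enterprise data rather than the open web.

```python
# Hypothetical sketch of an internal tool an LLM could be given access to.
# The data source, fields, thresholds, and contacts are invented for illustration.
from dataclasses import dataclass

@dataclass
class AssetRisk:
    asset: str                   # a specific well pad or refinery unit
    location: str
    magnitude_threshold: float   # quake magnitude at which damage is expected
    daily_production_bbl: int
    contact: str                 # who to loop in for more detail

def assets_at_seismic_risk(min_magnitude: float) -> list[AssetRisk]:
    """Return assets whose damage threshold falls at or below `min_magnitude`."""
    catalog = [  # stand-in for a query against proprietary enterprise data
        AssetRisk("Pad 14-B", "Permian Basin, TX", 5.5, 12_000, "ops-west@example.com"),
        AssetRisk("Unit 3 crude train", "Richmond, CA", 6.8, 90_000, "refining@example.com"),
    ]
    return [a for a in catalog if a.magnitude_threshold <= min_magnitude]

# Exposed as a tool, this lets the model answer the prompt above with specific
# assets, production at risk, and the people to contact for more detail.
for a in assets_at_seismic_risk(6.0):
    print(a)
```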
At SeerAI, we built our Geodesic platform in deference to these three fundamental truths. Not necessarily because LLMs need them but because people do.
In the coming weeks, we will be showing off our new core capabilities that fully extend the platform into enterprise-ready LLM territory. We’re excited to share the results with you and for you to build great things on Geodesic’s foundation.
