👋 Hi, it’s Rick Koleta. Welcome to GTM Vault - a breakdown of how high-growth companies design, test, and scale revenue architecture. Join 25,000+ operators building GTM systems that compound.
12 tools, 12 definitions of customer, zero agreement on what revenue means. The dashboards are full. The data team is underwater. The founders are making decisions on gut feel anyway.
The problem was never access to data. It was architecture.
Danylo Borodchuk dropped out of Dartmouth to build analytics infrastructure. CS background, DALI Lab, DARPA research. He went through Y Combinator’s Winter 2025 batch. Before Lopus AI was an analytics platform, it was a completely different product. A generative UI tool that got Twitter hype and zero traction in practice. YC forced the question that killed the first idea: who actually wants this? Nobody had an answer. The pivot tells you everything about where the real pain lives.
Lopus AI connects CRM, billing, product analytics, and support into one governed workspace. No SQL required. No data engineering team required. A semantic layer that locks in your definitions so every query speaks the same language. The platform is single tenant at $2K a month, with a forward deployed data engineer for onboarding and a self-healing definition layer that regenerates its own SQL when the underlying schemas change.
In GTM 43, Danylo breaks down why most analytics fail before the first query runs. He explains why every company’s CRM is a mess in the same predictable ways, why marketing and sales will never agree on “qualified” without a governed definition layer, and why the most dangerous analytics tools are the ones that answer every question, including the ones the data cannot support. The fix is not a better dashboard. It is an architectural layer between raw data and every query the business runs against it.
This is not a conversation about better charts.
It is a conversation about why your tools define the business differently, and what happens when you install a single governed layer underneath all of them.
Inside this episode
This episode maps the structural gap between the data your tools produce and the answers your teams trust, starting at the foundation: the definition layer that most companies never formally build.
Danylo explains what happens during onboarding. The CRM is always a mess. Billing data becomes the source of truth by default because it is the closest thing to financial reality. But even billing carries company-specific definitions of MRR, churn, and customer count that no off-the-shelf dashboard captures. When marketing says “qualified” and sales says “qualified,” those are two different numbers referencing two different definitions with no structural reconciliation between them.
We go deep on trust architecture. Most AI analytics tools optimize for answering your question. Lopus optimizes for refusing to answer when the data cannot support one. Danylo describes a deliberate test: ask the agent to join Mixpanel product data with Salesforce lead records to find power users. The two tables are deliberately not connected. The agent examines both data sources, recognizes it cannot join them, and tells you instead of fabricating a result. The investigation agent follows the same principle, running temporal sequencing, segment isolation, and confound surfacing before it hands you an explanation.
We cover the self-healing semantic layer (what happens when Stripe changes its API and your MRR definition breaks), the forward deployed data engineer model at seed stage (and why the Palantir comparison is accurate), why a 10,000-view blog post generated less revenue than a 1,000-view one, and what the AI analytics space gets structurally wrong about the relationship between context and accuracy.
Discussed in this episode
In this episode, we cover:
0:00 Intro: 12 tools, 12 definitions, zero agreement on revenue
1:22 What killed the generative UI product and forced the pivot
2:14 What YC forces you to confront about your original idea
3:24 Why a technical founder builds for growth teams
4:30 The hardest constraint at seed stage that has nothing to do with product
6:03 The most common data contradiction between CRM and billing
7:38 Who defines the semantic layer when marketing and sales disagree
9:04 No SQL required, but RevOps wants to see the query
10:29 500 integrations at seed stage: deeply maintained versus thin
12:19 When Lopus told a customer not to trust an answer
14:48 Single tenant architecture at $2K/month
16:34 How the investigation agent separates causality from correlation
19:02 The forward deployed data engineer model and how it scales
20:25 Live in days: first dashboard or fully governed semantic layer
21:48 What their own pipeline data reveals about their conversion funnel
24:55 Rapid fire
Key takeaways
Every company’s CRM is a mess in the same predictable ways
The CRM is never the source of truth, but it becomes the foundation of every metric in the business anyway. Billing data is closer to financial reality, but every company carries its own quirks in how MRR is defined, how churn is calculated, how a customer is counted. The definitions diverge between tools, and every dashboard built on top inherits the divergence. The result is 12 tools producing 12 answers, and the founder picks the one that matches their intuition.
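As a toy illustration of how that divergence plays out (the records and field names below are invented, not from the episode), the same question, "how many customers do we have?", returns different numbers depending on which tool you ask:

```python
# Hypothetical records: the CRM counts closed-won deals, billing counts
# active subscriptions, and neither set fully contains the other.
crm_accounts = [
    {"name": "acme",     "stage": "closed_won"},
    {"name": "globex",   "stage": "closed_won"},   # free pilot, never billed
    {"name": "initech",  "stage": "negotiation"},
]
billing_subs = [
    {"account": "acme",     "status": "active"},
    {"account": "hooli",    "status": "active"},   # self-serve, never in the CRM
    {"account": "umbrella", "status": "active"},
]

# Two tools, two honest answers to "how many customers?"
crm_customers = sum(1 for a in crm_accounts if a["stage"] == "closed_won")
billing_customers = sum(1 for s in billing_subs if s["status"] == "active")
```

Neither number is wrong. They answer different questions wearing the same label, which is exactly why a dashboard built on either one inherits the divergence.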
The most dangerous analytics tool is the one that always gives you an answer
Most AI analytics systems optimize for response. They answer every question because that is what the models are trained to do. Lopus deliberately built the opposite: an agent that refuses to answer when the data cannot support a trustworthy result. It asks clarifying questions, checks whether tables can be joined, and stops before writing SQL if the data does not support the query. The test case is instructive: ask it to join two deliberately unconnected data sources, and it tells you it cannot instead of hallucinating a result.
The semantic layer is the missing architecture, not the dashboard
The fix for conflicting definitions across tools is not a better chart or a prettier report. It is a governed layer between raw data and every query the business runs against it. One place where MRR means one thing, churn means one thing, and every downstream query inherits those definitions. Without it, marketing and sales will never agree on “qualified” because they are referencing two different definitions with no structural reconciliation.
Maintenance is the real cost of analytics infrastructure
Every new dashboard, every new metric definition, every new data source adds ongoing maintenance hours to the data team. When Stripe updates its API and a field gets nullified, the SQL that defines your MRR breaks. In a traditional BI setup, a human notices and rewrites the query. Lopus holds the definition in plain English alongside the SQL. When the underlying schema changes, the system regenerates the SQL to match the original definition. That is the argument for self-healing: not speed, but durability.
The forward deployed model compounds at seed stage
Every edge case the forward deployed data engineer encounters during onboarding gets folded back into the product logic. The standard tool stack across growth-stage startups is similar (HubSpot or Salesforce, Stripe or Chargebee, PostHog or Mixpanel, Intercom or Pylon), but the custom fields, internal naming conventions, and bespoke metric definitions differ in ways no automated onboarding captures. The more customers Lopus onboards, the more edge cases the platform absorbs, and the less the next customer needs manual intervention. The model does not scale linearly. It compounds.
The content metric that misleads is the one that measures attention instead of revenue
Lopus published two blog posts. The first got 10,000 views. The second got 1,000 views. The marketing dashboard says the first one won. When they tracked the full customer journey through their own product, connecting content engagement to CRM to billing, the 1,000-view post produced higher-ACV customers who became more active users. The 10,000-view post generated attention. The 1,000-view post generated revenue. Without the full journey connected, the marketing team optimizes for the wrong input.
Frameworks from the episode
The trust architecture for AI analytics
Three mechanisms prevent the agent from producing confident noise. First, clarifying questions before any query runs. The agent checks what data exists and whether it supports what you asked. Second, join validation. If two data sources cannot be structurally connected, the agent says so instead of fabricating a result. Third, the anti-hypothesis. The investigation agent does not hand you the first plausible explanation. It tests competing explanations, checks temporal sequencing (did the supposed cause precede the effect?), isolates segments, and surfaces confounds before it gives you an answer.
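The join-validation step can be sketched in a few lines. This is not Lopus's implementation, just a minimal illustration of the principle: before generating SQL, check whether the two sources share a declared join key, and refuse rather than fabricate when they do not. The schemas and key names are hypothetical.

```python
def plan_join(left_cols: set, right_cols: set, declared_keys: set) -> dict:
    """Allow a join only on columns both sources expose that are declared
    join keys. Refuse, rather than guess, when no such key exists."""
    keys = left_cols & right_cols & declared_keys
    if not keys:
        return {"sql": None,
                "refusal": "These sources share no declared join key."}
    return {"sql": f"... JOIN ... ON {sorted(keys)[0]}", "refusal": None}

# Hypothetical schemas echoing the episode's test case: product events
# keyed one way, CRM leads keyed another, with no shared column.
mixpanel_events  = {"distinct_id", "event", "timestamp"}
salesforce_leads = {"lead_id", "email", "stage"}

plan = plan_join(mixpanel_events, salesforce_leads,
                 declared_keys={"email", "account_id"})
```

The point of the sketch is the early return: the refusal is computed before any SQL exists, so there is no query to hallucinate a result from.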
The self-healing definition layer
The semantic layer holds every metric definition in two forms: SQL and plain English. The SQL executes the query. The plain English holds the intent. When the underlying schema changes (a field is renamed, a column is nullified, an API version shifts), the system uses the plain English definition to regenerate correct SQL without human intervention. This absorbs the maintenance cost that makes traditional BI infrastructure unsustainable at scale without a dedicated data team.
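A minimal sketch of that two-form structure, under stated assumptions: the regeneration step is stubbed with a placeholder function standing in for the model call, and the Stripe field names are illustrative, not taken from Stripe's actual schema.

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """A metric held in two forms: executable SQL plus plain-English intent."""
    name: str
    intent: str        # plain English, the durable source of truth
    sql: str           # generated SQL, disposable and regenerable
    columns: set       # schema columns the SQL depends on

def is_broken(defn: MetricDefinition, live_schema: set) -> bool:
    """A definition breaks when a column it references leaves the schema."""
    return not defn.columns <= live_schema

def heal(defn: MetricDefinition, live_schema: set, regenerate_sql):
    """If the schema drifted, regenerate SQL from the plain-English intent.
    `regenerate_sql` stands in for the model that rewrites the query."""
    if not is_broken(defn, live_schema):
        return defn
    new_sql, new_cols = regenerate_sql(defn.intent, live_schema)
    return MetricDefinition(defn.name, defn.intent, new_sql, new_cols)

# Hypothetical drift: a billing field is renamed out from under the SQL.
mrr = MetricDefinition(
    name="mrr",
    intent="Sum of monthly recurring amounts across active subscriptions",
    sql="SELECT SUM(plan_amount) FROM subscriptions WHERE status = 'active'",
    columns={"plan_amount", "status"},
)
live = {"unit_amount", "status"}  # `plan_amount` is gone after the change

def fake_regenerator(intent, schema):
    # stand-in for the model mapping the intent onto the new schema
    return ("SELECT SUM(unit_amount) FROM subscriptions "
            "WHERE status = 'active'", {"unit_amount", "status"})

healed = heal(mrr, live, fake_regenerator)
```

The design choice worth noting: the SQL is treated as a build artifact, while the English intent is the source it is rebuilt from.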
The full-journey content attribution model
Connect content engagement data to CRM to billing. Measure not which content gets the most views, but which content produces the highest-ACV customers who become the most active users. Danylo’s own data showed a 10X gap between the content that won on attention metrics and the content that won on revenue metrics. The structural lesson: any content measurement that stops at pageviews will optimize the marketing team toward the wrong inputs.
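The mechanics of that roll-up can be sketched as follows. The view counts echo the episode's 10,000 versus 1,000, but the leads, first-touch assignments, and ACVs are invented for illustration:

```python
# Hypothetical journey data: views per post, each lead's first-touch post,
# and the closed ACV per lead once CRM and billing are connected.
views       = {"post_a": 10_000, "post_b": 1_000}
first_touch = {"lead_1": "post_a", "lead_2": "post_b", "lead_3": "post_b"}
closed_acv  = {"lead_1": 3_000, "lead_2": 24_000, "lead_3": 18_000}

def revenue_by_post(first_touch, closed_acv):
    """Roll closed revenue back to the content that sourced each lead."""
    revenue = {}
    for lead, post in first_touch.items():
        revenue[post] = revenue.get(post, 0) + closed_acv.get(lead, 0)
    return revenue

revenue = revenue_by_post(first_touch, closed_acv)
attention_winner = max(views, key=views.get)
revenue_winner   = max(revenue, key=revenue.get)
```

With these toy numbers the attention winner and the revenue winner are different posts, which is the whole argument for connecting the journey end to end.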
What to do this week
Ask your data team how many distinct definitions of MRR, churn, or “customer” exist across your tools. If nobody can answer immediately, you do not have a governed definition layer.
Run one query through your current analytics tool that requires joining data from two sources that should not be joinable. If the tool gives you a confident answer anyway, your analytics are not trustworthy by default.
List every metric on your primary GTM dashboard. For each one, identify whether the underlying definition is shared across marketing, sales, and finance, or whether each function is running a different version. If the definitions diverge, the dashboard reconciles nothing.
Check how many hours per month your data team spends maintaining existing dashboards versus building new ones. If maintenance exceeds 50%, the architecture is consuming the team, not serving it.
Why this matters
The default state of GTM analytics is fragmentation. Every tool defines the business differently. Every team trusts the metric that confirms their narrative. Every dashboard presents a version of reality that diverges from the one finance uses to plan the business.
The semantic layer is the architectural fix that most companies skip. Not because it is hard to understand, but because it requires formal agreement on definitions that most organizations have never made explicit. What counts as MRR. What counts as churn. What counts as qualified. When those definitions live inside individual tools instead of inside a governed layer that every query inherits, the analytics infrastructure produces answers that look precise and are structurally unreliable.
Lopus is building that layer. One governed workspace where every tool’s data passes through shared definitions before it reaches the human asking the question. The value is not the chart. It is the architecture underneath the chart that makes the answer trustworthy.
Revenue does not fail because teams lack data. It fails when the definitions underneath the data stop agreeing and nobody reconciles them.
This is GTM Vault.
If this episode changed how you think about the relationship between your data tools and the answers they produce, forward it to one operator still making decisions on dashboards where every tool defines the business differently.
Connect
Follow Danylo Borodchuk // Lopus AI
Follow Rick Koleta // GTM Vault
Thanks for listening. See you in the next episode.
P.S. Annual paid subscribers get a Private GTM Blueprint Session. One working session to identify your primary GTM constraint and design the 90-day architecture to resolve it.