Overview
Pricing and packaging resets at a growth-stage SaaS firm dragged because decisions leaned on anecdotes and conflicting telemetry. Product usage signals were incomplete, revenue cohorts were stitched together in spreadsheets, and executive reviews debated whose chart was right. Intelligex piped Snowplow product events into Snowflake, harmonized identities to revenue cohorts, and delivered executive views in Mode governed by a change-approval cadence run by the Pricing Council. Leaders made pricing and packaging calls from consistent metrics with clearer assumptions, less rework, and fewer subjective debates.
Client Profile
- Industry: B2B SaaS
- Company size (range): Growth-stage vendor with multi-product subscriptions
- Stage: Scaling customer segments and evolving pricing and packaging
- Department owner: Strategy, Analytics & Executive Leadership (Corporate Strategy / Pricing Council)
- Other stakeholders: Product, RevOps/Sales Operations, Customer Success, Finance/FP&A, Data Engineering/Analytics, Marketing, Legal & Compliance, Support
The Challenge
Pricing and packaging meetings needed a dependable picture of who used what, how usage related to value, and the revenue profile of each cohort. In practice, Snowplow events varied by product team, revenue cohorts were derived from billing and CRM in spreadsheets, and entitlements were tracked separately in application tables. When Strategy asked which features drove expansion in a given segment, teams produced different answers based on differing event names, time windows, and account mappings. Slides conflicted, and decisions paused for reconciliation.
Governance was informal. Definitions such as "product-qualified account," "active seat," and "upgrade trigger" did not match across teams. Mode dashboards were updated independently of ad hoc queries, and there was no formal cadence to lock definitions and approve changes before executive forums. Pricing tests were proposed without a shared baseline, and post-mortems restarted classification debates.
Why It Was Happening
Identity and taxonomies were fragmented. Product telemetry used user-level identifiers, CRM tracked accounts and opportunities, and billing referenced subscriptions and invoices with different keys. Events were named differently by product area, and late-arriving fixes changed counts without notice. Mapping from users and events to accounts and revenue cohorts required manual joins that drifted over time.
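The key-resolution problem described above can be pictured as a small stitching step: resolve a product user ID to an account, then an account to a billing subscription. This is a minimal sketch; all table names, keys, and values below are illustrative assumptions, not the client's actual schema.

```python
# Minimal identity-stitching sketch: product user IDs resolve to accounts,
# accounts resolve to billing subscriptions. All keys are hypothetical.

# user_id -> account_id (e.g. maintained from app provisioning records)
user_to_account = {"u-101": "acct-9", "u-102": "acct-9", "u-201": "acct-7"}

# account_id -> subscription_id (from the billing system)
account_to_subscription = {"acct-9": "sub-550", "acct-7": "sub-551"}

def stitch(event):
    """Attach account and subscription keys to a raw usage event."""
    account = user_to_account.get(event["user_id"])        # None if unmapped
    subscription = account_to_subscription.get(account)    # None if unmapped
    return {**event, "account_id": account, "subscription_id": subscription}

events = [
    {"user_id": "u-101", "event_name": "report_exported"},
    {"user_id": "u-999", "event_name": "report_exported"},  # unmapped user
]
stitched = [stitch(e) for e in events]
```

Unmapped users surface as explicit `None` keys rather than silently dropping rows, which is what lets drift be detected instead of accumulating in manual joins.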
Governance arrived at the end. Dashboards had similar names but different filters and cut-offs, and there was no catalog to define metrics or a process to approve changes. Pricing decisions were drafted in slides without citations to the underlying queries, and revenue alignment was checked manually late in the cycle.
The Solution
We created a governed telemetry-to-revenue model and an approvals-backed decision flow. Snowplow events flowed to Snowflake on a schedule, where dbt transformations harmonized event names, stitched identities, and mapped usage to accounts, subscriptions, and revenue cohorts. Executive views in Mode drew from certified datasets with a visible definitions catalog. A formal change-approval cadence, owned by the Pricing Council, locked metric definitions and scenario inputs before leadership reviews, with reason codes and an audit trail. No core tools were replaced; the orchestration unified data, definitions, and approvals around the existing stack.
- Snowplow collectors and enrichments feeding a curated event stream with consistent schemas (Snowplow Docs)
- Conformed analytics model in Snowflake that maps users and events to accounts, products, plans, entitlements, and revenue cohorts
- dbt transformations and tests encoding metric definitions, identity stitching, and accepted values (dbt Docs)
- Billing and CRM alignment for cohorts via connectors to systems such as Salesforce (CRM) and Zuora or Stripe (billing) (Zuora Knowledge Center, Stripe Docs)
- Certified Mode reports and datasets with filters for segment, plan, and timeframe, plus drill-through to query and model lineage (Mode Help)
- Metric catalog and glossary maintained by Strategy and Analytics, referenced in every report
- Change-approval cadence with reason codes, pre-reads, and sign-off captured in a lightweight workflow; human-in-the-loop review for sensitive metric or cohort changes
- Role-based access and data minimization for customer-level data backed by identity groups (Okta Groups)
- Audit log binding each decision to dataset versions, queries, definitions, and approvers
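The event-name harmonization in the list above can be sketched as a canonical-name mapping plus an accepted-values check, mirroring what the dbt transformations encode. The event names and mapping below are invented for illustration.

```python
# Sketch: normalize variant event names from different product areas to a
# canonical taxonomy, and reject anything outside the accepted set.
# All names here are hypothetical.

CANONICAL = {
    "exportReport": "report_exported",   # product area A's naming
    "report.export": "report_exported",  # product area B's naming
    "seatAdded": "seat_added",
}
ACCEPTED = {"report_exported", "seat_added"}

def normalize(name):
    """Map a raw event name to its canonical form, or raise if unknown."""
    canonical = CANONICAL.get(name, name)
    if canonical not in ACCEPTED:
        raise ValueError(f"unrecognized event name: {name}")
    return canonical
```

Failing loudly on unknown names is the behavior that stops silently divergent counts, the core problem described in The Challenge.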
Implementation
- Discovery: Cataloged Snowplow event schemas and gaps, current Mode dashboards and ad hoc queries, CRM and billing fields used for cohorts, and common pricing questions from recent councils. Reviewed conflicting definitions and prior test write-ups to surface recurring pain points.
- Design: Defined the conformed event and identity model, cohort taxonomy, and shared calendars. Authored dbt models and tests for key metrics, outlined the metric catalog and ownership, and designed certified Mode views. Specified the change-approval cadence, reason codes, and the audit schema that captures definitions and approvers.
- Build: Landed Snowplow streams into Snowflake; implemented dbt transformations for identity stitching, event normalization, and cohort assignment; published certified datasets and Mode reports; and stood up the approval flow and audit logging, including read-backs for proposed definition changes.
- Testing and QA: Replayed prior periods to reconcile counts with legacy reports; validated event normalization and cohort mapping across flagship customers and plans; verified lineage links from Mode to dbt models; and dry-ran the approval cadence with Pricing, Product, Finance, and RevOps.
- Rollout: Released certified Mode views beside legacy dashboards; after teams validated definitions, made certified views the source for council materials. Activated the approval cadence for metric changes and scenario inputs, maintaining an exception path for urgent needs with post-review documentation.
- Training and hand-off: Delivered quick guides for PMs and RevOps on reading cohorts and metrics, for Finance on scenario inputs and audit reads, and for Analytics on catalog stewardship and approval preparation. Established a cadence for catalog updates and a human-in-the-loop review of sensitive changes.
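The cohort-assignment step built in these phases can be pictured as a rule over billing attributes. The plan names and ARR thresholds below are illustrative assumptions, not the client's actual taxonomy.

```python
# Sketch: assign accounts to revenue cohorts from billing attributes.
# Plan names and ARR thresholds are hypothetical.

def assign_cohort(account):
    """Return a cohort label from plan and annual recurring revenue."""
    plan, arr = account["plan"], account["arr"]
    if plan == "enterprise" or arr >= 100_000:
        return "enterprise"
    if arr >= 20_000:
        return "mid-market"
    return "smb"

accounts = [
    {"account_id": "acct-9", "plan": "team", "arr": 24_000},
    {"account_id": "acct-7", "plan": "enterprise", "arr": 250_000},
    {"account_id": "acct-3", "plan": "starter", "arr": 6_000},
]
cohorts = {a["account_id"]: assign_cohort(a) for a in accounts}
```

Encoding the rule once, in code, is what keeps cohort membership identical across Strategy, Finance, and RevOps views.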
Results
Pricing and packaging decisions referenced the same certified metrics, definitions, and cohorts across Strategy, Product, Finance, and RevOps. Executive reviews focused on options, guardrails, and trade-offs rather than reconciling charts. The approval cadence brought clarity to assumptions and locked inputs before leadership forums, reducing late edits and rebuilds.
Teams developed a shared language for usage, value, and revenue alignment. Packaging scenarios pointed to the features that correlated with expansion in specific cohorts, and proposed changes linked to the same dataset versions visible in Mode. With lineage and approvals captured, follow-ups and post-mortems started from a common baseline instead of reassembling telemetry.
What Changed for the Team
- Before: Anecdotes and ad hoc spreadsheets drove packaging debates. After: Certified Mode views grounded choices in governed metrics and cohorts.
- Before: Event names, filters, and cut-offs varied by team. After: dbt encoded definitions and calendars, and reports showed lineage to models.
- Before: Revenue mapping to usage was manual. After: Cohorts aligned to CRM and billing in Snowflake with clear identity stitching.
- Before: Changes slipped into dashboards without review. After: A change-approval cadence locked definitions and inputs with reason codes.
- Before: Post-mortems reopened definitions. After: An audit log tied decisions to dataset versions, queries, and approvers.
Key Takeaways
- Unify telemetry and revenue under a conformed model; pricing and packaging require consistent identities and cohorts.
- Encode metric definitions in transformations and certify executive views; governance in code and BI reduces debate.
- Institute a change-approval cadence with reason codes and audit logs so inputs are trusted and stable during reviews.
- Keep Snowplow, Snowflake, and Mode; orchestrate mapping, definitions, and approvals rather than replatforming.
- Make lineage visible from dashboard to model to source; transparency accelerates alignment and post-decision learning.
FAQ
What tools did this integrate with?
We ingested product events via Snowplow, modeled identities and cohorts in Snowflake with transformations in dbt, and delivered decision views in Mode. Cohorts aligned to CRM and billing through systems such as Salesforce and Zuora or Stripe. Access was governed via identity groups like Okta Groups, and approvals ran through the Pricing Council's workflow.
How did you handle quality control and governance?
Metric definitions and calendars were encoded in dbt with tests for identity joins and accepted values. Datasets were certified for executive use, and Mode reports linked back to models for lineage. A formal change-approval cadence required reason codes and read-backs before metrics or scenario inputs changed, and an audit log captured dataset versions, queries, definitions, and approvers.
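The dbt-style tests mentioned here amount to assertions over the modeled tables, for example not-null identity keys and accepted cohort values. A simplified Python equivalent, with hypothetical column values:

```python
# Sketch of the kinds of checks the dbt tests encode: non-null identity
# joins and accepted cohort values. Data and labels are illustrative.

ACCEPTED_COHORTS = {"smb", "mid-market", "enterprise"}

rows = [
    {"account_id": "acct-9", "cohort": "mid-market"},
    {"account_id": "acct-7", "cohort": "enterprise"},
]

def failing_rows(rows):
    """Return rows violating the not-null or accepted-values checks."""
    return [
        r for r in rows
        if r["account_id"] is None or r["cohort"] not in ACCEPTED_COHORTS
    ]
```

A clean run returns no failing rows; any violation blocks certification before the data reaches an executive view.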
How did you roll this out without disruption?
Certified views ran alongside legacy dashboards while teams validated definitions and counts. Once trust was established, executive materials shifted to the certified views and the approval cadence governed changes. Core tools stayed the same; the new layer standardized mapping, definitions, and review.
How were usage signals mapped to revenue cohorts?
Identity stitching linked Snowplow users and events to accounts and subscriptions using stable keys and mapping tables. Cohorts were defined from CRM and billing attributes, then joined to normalized usage metrics in Snowflake. Reports in Mode allowed filters by cohort and drill-through to query and model lineage for verification.
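The join described above, from normalized usage metrics to cohort labels, can be sketched as a small aggregation. All account IDs, event names, and counts below are hypothetical.

```python
# Sketch: join normalized usage metrics to cohort labels and aggregate
# per (cohort, event) pair. All values are illustrative.
from collections import defaultdict

cohort_by_account = {"acct-9": "mid-market", "acct-7": "enterprise"}

usage = [
    {"account_id": "acct-9", "event_name": "report_exported", "count": 12},
    {"account_id": "acct-7", "event_name": "report_exported", "count": 40},
    {"account_id": "acct-9", "event_name": "seat_added", "count": 3},
]

totals = defaultdict(int)
for row in usage:
    cohort = cohort_by_account[row["account_id"]]
    totals[(cohort, row["event_name"])] += row["count"]
```

This is the shape of the question the council asks: how much of a given feature's usage sits in each revenue cohort.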
How did you manage pricing experiments and packaging changes?
Proposed tests and package revisions used scenario inputs that referenced the same certified datasets. The approval cadence logged assumptions and locked definitions before launch, and post-test reviews compared outcomes to the baseline with the audit log preserving context. Sensitive changes went through a human-in-the-loop review to protect longitudinal comparability.
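An audit record of the kind described throughout, binding a decision to dataset versions, definitions, and approvers, might look like the following sketch. The field names and values are illustrative assumptions, not the actual workflow schema.

```python
# Sketch: a minimal audit record binding a pricing decision to its inputs.
# Field names and values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    decision_id: str
    dataset_version: str
    metric_definitions: list
    approvers: list
    reason_code: str
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    decision_id="PC-2024-017",               # hypothetical council decision
    dataset_version="certified_usage.v12",   # hypothetical dataset version
    metric_definitions=["active_seat", "upgrade_trigger"],
    approvers=["strategy_lead", "finance_lead"],
    reason_code="packaging_test_baseline",
)
```

Because each record pins the dataset version and definitions in force at approval time, a post-mortem can replay the decision's exact inputs rather than re-litigating them.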