Overview

A streaming service’s Marketing & Customer Engagement team struggled to time save offers well because churn signals lived in different systems and arrived on different cadences. Intelligex consolidated behavioral and billing features into Google BigQuery, validated data quality with Great Expectations, and triggered Braze journeys from model outputs. Offers began aligning with real customer behavior, pipeline status became transparent to stakeholders, and support saw fewer escalations about outreach that felt off-base.

Client Profile

  • Industry: Direct-to-consumer streaming media
  • Company size: Consumer subscription brand with a national footprint
  • Stage: Scaling lifecycle marketing and retention operations
  • Department owner: Marketing & Customer Engagement (Lifecycle/CRM)
  • Other stakeholders: Data Engineering, Analytics, Billing/Finance, Customer Support, Legal/Privacy, CRM Operations

The Challenge

The team had churn predictors scattered across Mixpanel event data, Google Analytics 4 (GA4) funnels, and billing status flags. Lifecycle managers needed to act when a subscriber exhibited at-risk patterns, but the signals were inconsistent and out of sync. Some users received save offers long after they had already decided to cancel, while others were contacted despite stable engagement. That misalignment eroded trust and created extra work for support.

Marketers operated out of Braze for messaging, but targeting required pulling lists from different sources and stitching them together by hand. Exports from analytics tools and the billing system landed on different schedules, and identity keys did not always match. When a model existed, it ran outside the warehouse and produced a separate file that was hard to reconcile with campaign logic. Teams compensated with frequent coordination meetings and manual spot checks, yet the gaps persisted.

Replacing systems was not an option. Product analytics needed to stay in Mixpanel and GA4, billing stayed with Finance, and Braze remained the campaign engine. The mandate was to integrate these pieces, establish a governed feature set, and give marketers a reliable trigger without creating a parallel stack or asking people to learn unfamiliar tools.

Why It Was Happening

The data path encouraged drift. Event streams and billing tables moved independently, with differing definitions for active, paused, or delinquent states. There was no single feature store to standardize at-risk behaviors, time windows, and exclusions. Identity resolution relied on ad hoc joins between device identifiers, application user IDs, and subscriber account IDs, which failed in edge cases such as shared devices or email changes.

Ownership was split across teams. Analytics owned event taxonomies, Finance controlled billing attributes, and Marketing tuned offer logic. Without a shared data contract or validation, changes in one system quietly broke downstream segments. Model outputs were delivered as static files, disconnected from the cadence of campaigns and the state of subscriber consent in Braze. Governance was mostly manual, and quality checks happened after an offer misfired rather than before a journey launched.

The Solution

Intelligex implemented a centralized, governed feature layer in Google BigQuery, validated with Great Expectations, and wired it to Braze so journeys triggered directly from model outputs and eligibility rules. The approach kept analytics tools and billing systems intact, standardized identity resolution in the warehouse, and delivered a marketer-friendly set of attributes and segments that updated on a predictable schedule. A human-in-the-loop gate ensured offers respected consent, suppressions, and business rules before activation.

  • BigQuery as the central store for churn features and model outputs, with clear schemas and documented transformations. Reference: BigQuery documentation.
  • Validated pipelines using Great Expectations to enforce data quality on critical features, identity joins, and model output ranges before publishing to downstream tables.
  • Inbound data from GA4 and Mixpanel using native exports and connectors, including GA4’s BigQuery export capability: GA4 BigQuery Export.
  • Billing system ingestion via scheduled loads, with a standardized mapping for account status, delinquency, and refund events.
  • Identity resolution layer that prioritized subscriber account ID, reconciled device and application IDs, and tracked confidence scores for joins.
  • Modeling inside the warehouse using existing tooling, with features and outputs written to governed tables. If needed, external models published predictions back into BigQuery with the same contract.
  • Direct orchestration to Braze via Cloud Data Ingestion from BigQuery, delivering traits and segments for journeys and canvases. Reference: Braze Cloud Data Ingestion for BigQuery.
  • Eligibility rules for suppressions and compliance, including opt-out states, recent support interactions, and offer frequency caps.
  • Human-in-the-loop review gate for threshold changes, new features, and offer tests, with a checklist for Legal/Privacy sign-off.
  • Dashboards that surfaced feature freshness, validation status, and segment sizes to Marketing and Support, so everyone saw the same state before a journey turned on.
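The identity-resolution precedence described above can be sketched in a few lines. This is a minimal illustration, not the client's implementation: the field names (`subscriber_account_id`, `app_user_id`, `device_id`) and the confidence values are assumptions standing in for the actual schema and tuned weights.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResolvedIdentity:
    key: str          # the identifier chosen for downstream joins
    source: str       # which field supplied it
    confidence: float # heuristic join confidence, tracked alongside features

# Precedence: billing-grade account ID > application user ID > device ID.
# Confidence values here are illustrative placeholders.
PRECEDENCE = [
    ("subscriber_account_id", 1.0),
    ("app_user_id", 0.8),
    ("device_id", 0.5),  # weakest: shared devices break this join
]

def resolve_identity(record: dict) -> Optional[ResolvedIdentity]:
    """Pick the strongest available identifier for a raw event record."""
    for field, confidence in PRECEDENCE:
        value = record.get(field)
        if value:
            return ResolvedIdentity(key=value, source=field, confidence=confidence)
    return None  # unresolvable rows are quarantined rather than guessed
```

Tracking the `source` and `confidence` on every resolved row is what lets later validation catch the shared-device and email-change edge cases the team had been hitting.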

Implementation

  • Discovery: Cataloged churn indicators across tools, mapped identity keys, and documented where false positives and false negatives occurred. Gathered legal requirements for consent, suppression, and retention of model outputs. Aligned on definitions for at-risk, save-eligible, and not-contactable categories.
  • Design: Defined a feature schema in BigQuery with source lineage, update cadence, and null-handling rules. Designed identity resolution with precedence for subscriber account ID and fallback logic for device and application IDs. Created a data contract for model inputs and outputs, including confidence thresholds and allowable ranges.
  • Build: Implemented ingestion from GA4, Mixpanel, and billing into raw tables, then transformed into curated feature tables. Added Great Expectations suites for row counts, uniqueness of keys, join integrity, and reasonableness checks on engagement metrics. Built warehouse-native modeling and scoring, and wrote outputs to a Braze-ready view.
  • Testing and QA: Ran pipelines in shadow mode to compare model outputs against historical outcomes and manual spot checks. Injected controlled edge cases like account merges and email changes to test identity resilience. Verified that failed validations stopped publication and alerted owners with clear remediation steps.
  • Rollout: Started with a narrow save journey focused on high-confidence at-risk segments and a conservative offer. Enabled Braze ingestion in mirror mode so campaigns could be previewed without sending. Expanded to additional segments once data stability and governance steps proved reliable.
  • Training and hand-off: Delivered role-based sessions for lifecycle managers on reading feature freshness and segment status, for analytics on authoring validations, and for support on interpreting journeys in the context of customer conversations. Documented playbooks for adjusting thresholds and pausing journeys.
  • Human-in-the-loop review: Introduced a pre-activation checklist covering consent compliance, suppression lists, and recent policy changes. Any change to model thresholds or eligibility rules required approval from Marketing, Analytics, and Legal, with a recorded rationale.
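The "failed validations stop publication" behavior can be illustrated with a simplified stand-in for the Great Expectations suites, written in plain Python rather than the library itself. The comments name the expectation each check mirrors; the column names and thresholds are illustrative assumptions.

```python
def validate_feature_rows(rows: list[dict]) -> list[str]:
    """Return failure messages; an empty list means publish is allowed."""
    failures = []

    # mirrors expect_table_row_count_to_be_between: guard against empty loads
    if len(rows) == 0:
        failures.append("row count: table is empty")

    # mirrors expect_column_values_to_be_unique on the join key
    keys = [r.get("subscriber_account_id") for r in rows]
    if len(keys) != len(set(keys)):
        failures.append("uniqueness: duplicate subscriber_account_id values")

    # mirrors expect_column_values_to_not_be_null on the join key
    if any(k is None for k in keys):
        failures.append("nullness: subscriber_account_id contains nulls")

    # mirrors expect_column_values_to_be_between on the model output
    for r in rows:
        score = r.get("churn_score")
        if score is None or not 0.0 <= score <= 1.0:
            failures.append(f"range: churn_score {score!r} outside [0, 1]")
            break

    return failures

def publish_if_valid(rows: list[dict]) -> bool:
    """Gate publication: in production a failure alerts owners and halts."""
    return not validate_feature_rows(rows)
```

The point of the gate is ordering: validation runs before the Braze-ready view is refreshed, so a bad load can never become a bad send.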

Results

Save offers began reaching the right subscribers at the right moments because the model ran against consistent, validated features and triggered journeys directly. Lifecycle managers no longer pulled manual lists from different systems or guessed at timing. Stakeholders saw a clear picture of who was eligible, why, and when the next refresh would occur. The end-to-end flow operated predictably, and changes were governed through a small set of documented gates.

Support reported fewer escalations related to irrelevant outreach. Offers reflected actual engagement patterns and account states, so conversations with subscribers felt consistent across channels. Finance and Legal gained confidence from a repeatable process with a visible audit trail. The team shifted energy from triage to testing creative, tailoring benefits, and learning from outcomes rather than debating data lineage.

What Changed for the Team

  • Before: Lists stitched from Mixpanel, GA4, and billing snapshots; After: A governed BigQuery view with validated features and predictions.
  • Before: Manual timing guesses for save offers; After: Braze journeys triggered from model outputs on a predictable cadence.
  • Before: Identity mismatches and edge-case failures; After: Standardized resolution with clear precedence and confidence.
  • Before: Quality checks after a bad send; After: Great Expectations blocked publish when core validations failed.
  • Before: Frequent cross-team meetings to reconcile status; After: Shared dashboards for feature freshness, segment eligibility, and journey state.
  • Before: Unclear ownership of threshold changes; After: Human-in-the-loop approvals with Marketing, Analytics, and Legal sign-off.

Key Takeaways

  • Centralize churn features in the warehouse and validate them before they reach messaging systems.
  • Keep existing analytics, billing, and CRM tools; add a thin, governed layer to connect them reliably.
  • Resolve identity with explicit precedence and track confidence to avoid edge-case surprises.
  • Trigger journeys from model outputs, not ad hoc lists, and make eligibility and cadence transparent to stakeholders.
  • Use data contracts and validation suites to prevent silent schema drift and broken joins.
  • Keep humans in the loop for thresholds, suppressions, and offer policy changes, with a clear audit trail.

FAQ

What tools did this integrate with?
Mixpanel and GA4 supplied behavioral events, the billing platform provided account states, BigQuery hosted features and model outputs, Great Expectations validated data quality, and Braze orchestrated messaging. The integration used native exports and warehouse connectors wherever possible, with Braze Cloud Data Ingestion receiving traits and segments from BigQuery.

How did you handle quality control and governance?
We enforced a data contract for core features and model outputs, backed by Great Expectations suites that halted publication when validations failed. A human-in-the-loop gate required approvals for threshold changes, new features, and suppressions. All changes and model versions were logged, and dashboards exposed feature freshness and validation status before activation.

How did you roll this out without disruption?
Pipelines ran in shadow mode first, and Braze ingestion started in mirror mode so teams could preview segments without sending. We began with a narrow journey and conservative rules, then expanded once stability and governance were proven. No core tools were replaced; work stayed in familiar systems.

Where did the model run and who owned it?
The model ran in the warehouse so features and outputs shared the same governance and lineage. Analytics owned feature engineering and model training, while Marketing owned thresholds, suppressions, and creative. Ownership was documented, and changes followed the same approval process as other campaign-critical assets.

How did you manage identity and consent?
Identity resolution prioritized subscriber account IDs and reconciled device and application identifiers with explicit rules. Consent and subscription states flowed from Braze and the billing system into the eligibility layer, and journeys respected global suppressions, recent support interactions, and frequency caps. Any exceptions required documented approval.
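The eligibility layer described here amounts to an ordered set of suppression rules. A minimal sketch follows; the field names and the specific windows (7-day support cooldown, 30-day frequency cap) are illustrative assumptions, not the values the client approved.

```python
from datetime import datetime, timedelta

# Illustrative suppression windows; real values were set by Marketing
# with Legal sign-off and changed only through the approval process.
SUPPORT_COOLDOWN = timedelta(days=7)
OFFER_FREQUENCY_CAP = timedelta(days=30)

def is_save_offer_eligible(subscriber: dict, now: datetime) -> bool:
    """Apply global suppressions in order; consent state always wins."""
    if subscriber.get("opted_out"):
        return False  # opt-out is absolute, no exceptions
    last_support = subscriber.get("last_support_contact")
    if last_support is not None and now - last_support < SUPPORT_COOLDOWN:
        return False  # recent support interaction: suppress outreach
    last_offer = subscriber.get("last_offer_sent")
    if last_offer is not None and now - last_offer < OFFER_FREQUENCY_CAP:
        return False  # frequency cap has not yet elapsed
    return True
```

Keeping these rules in one place, evaluated just before journey activation, is what made exceptions auditable: an exception is a documented change to this logic, not a one-off list edit.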

Need a similar solution?

Get a FREE Proof of Concept & Consultation

No Cost, No Commitment!