Overview
A fintech onboarding product team could not pinpoint why applicants dropped off across Know Your Customer (KYC) vendors. Vendor logs, session analytics, and decision outcomes lived in separate systems, so experiments took too long to interpret and vendor conversations relied on anecdotes. Intelligex integrated vendor events, app sessions, and decision results into a single, traceable view of the identity-proofing funnel, with anomaly alerts on step-level friction. Product managers isolated issues quickly, experiments closed with clear narratives, and vendor reviews used concrete evidence, all without changing the core onboarding flow, the analytics stack, or the existing feature flag system.
Client Profile
- Industry: Consumer and SMB financial services (identity verification and onboarding)
- Company size (range): Multi-product platform with web and mobile onboarding
- Stage: Established vendors and routing logic; analysis and experiments handled ad hoc
- Department owner: Product Management & R&D
- Other stakeholders: Risk/Compliance, Fraud, Data/Analytics, Mobile/Web Engineering, Platform/Identity, Customer Support, Legal/Privacy, Partner/Vendor Management
The Challenge
Onboarding required customers to complete document capture, liveness, and data checks through third-party KYC providers. Each vendor exposed different event models and result codes. Mobile and web clients recorded their own analytics, and the decision engine emitted outcomes and risk reasons. When drop-offs spiked, teams argued about whether the issue was capture UX, mobile permissions, network behavior, or vendor-side failures. Experiments that shifted traffic between vendors or changed step order produced inconclusive results because the team could not reconstruct an end-to-end story quickly.
Signals were fragmented. Vendor logs arrived via webhooks or files, analytics lived in a separate tool, and decision outcomes were stored in the platform's warehouse. Timezones and clocks differed, user identities changed from pre-auth to post-auth, and device/platform details were not carried through vendor payloads. Analysts ran one-off joins to piece together funnels, but definitions varied by person, and anomaly detection depended on someone noticing a trend in a dashboard.
Compliance and privacy needs added constraints. The organization followed identity proofing practices aligned to the NIST Digital Identity Guidelines (SP 800-63A), and PII had to be handled under strict access controls. Any solution had to avoid duplicating sensitive images, hash identifiers, and enforce role-based access. The team preferred to keep its warehouse, analytics, and feature flags, and add a governed layer for correlation and alerting. Observability practices drew on OpenTelemetry concepts for traceability.
Why It Was Happening
Root causes were mismatched identities and unstandardized events. Vendors emitted success and failure codes with different semantics, mobile and web used divergent session keys, and decision systems did not tag outcomes with the same identifiers used upstream. Without a canonical funnel model and time normalization, similar incidents were labeled differently, and results could not be compared across vendors or platforms with confidence.
Ownership was diffuse. Product and Risk set targets, Engineering routed traffic, vendors controlled parts of the UX, and Analytics stitched data together. No shared workflow bound vendor events to app sessions and decisions with validation checks and alerts, so investigations started from scratch every time.
The Solution
Intelligex implemented a unified onboarding data model and correlation layer that joined vendor events, app session analytics, and decision outcomes into one view. A mapping service normalized vendor result codes into canonical step outcomes, and identity crosswalks linked pre-auth sessions to post-auth accounts. Time normalization aligned events across systems. Funnel definitions and guardrails were codified, and anomaly alerts fired when step-level behavior deviated. PM-facing dashboards and vendor-ready evidence packs exposed the same narratives. Human reviewers remained the gate for policy-sensitive changes and threshold updates.
- Integrations: Vendor webhooks/files ingested to the warehouse (for example, Snowflake); session analytics from tools like Amplitude or Mixpanel; decision outcomes from the risk engine; feature flag metadata from existing routing (for example, LaunchDarkly); optional trace and identity propagation aligned to OpenTelemetry; incident and follow-up work in Jira.
- Canonical funnel model: Standard steps (document capture, liveness, data checks, review), normalized result codes, and reasons mapped from each vendor's taxonomy; effective dating for code mappings.
- Identity crosswalks: Joins between pre-auth device/session IDs, vendor request IDs, and post-auth account/user IDs with effective dates and environment tags.
- Time normalization: Single time base with drift handling and correlation windows by step and vendor to align events without forcing exact matches.
- Validation and quality checks: Rules for orphaned vendor events, inconsistent step ordering, and missing platform context; low-confidence joins routed for review.
- Anomaly alerts and guardrails: Policy-driven detectors for unusual step fallout, increased retries, and vendor error reasons; alerts to Slack/Teams with links to evidence; optional auto-pauses for experiments when guardrails tripped.
- Dashboards and evidence packs: Views by vendor, platform, region, and step; drill-downs with sample sessions and normalized reasons; exportable briefs for vendor management.
- Permissions and privacy: Hashed identifiers in curated views, strict access to raw payloads, and no storage of vendor images; governance aligned to NIST guidance and internal policies.
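The effective-dated code mapping described above can be sketched as follows. This is a minimal illustration, not the production mapping service: the vendor names, result codes, and canonical outcomes are invented, and a real deployment would keep the table in the warehouse rather than in code.

```python
from datetime import date

# Hypothetical mapping table: (vendor, vendor_code) -> canonical step outcome,
# with effective dates so historical events resolve against the mapping that
# was live when they occurred. All vendors and codes here are invented.
CODE_MAP = [
    # (vendor, vendor_code, effective_from, canonical_step, outcome, reason)
    ("vendor_a", "DOC_OK",   date(2023, 1, 1), "document_capture", "pass",  None),
    ("vendor_a", "DOC_BLUR", date(2023, 1, 1), "document_capture", "fail",  "image_quality"),
    ("vendor_b", "200",      date(2023, 1, 1), "document_capture", "pass",  None),
    ("vendor_b", "412",      date(2023, 1, 1), "document_capture", "fail",  "image_quality"),
    # Vendor B redefined code 412 mid-year; effective dating keeps both readings.
    ("vendor_b", "412",      date(2023, 6, 1), "document_capture", "retry", "glare_detected"),
]

def normalize(vendor: str, code: str, event_date: date):
    """Return the canonical (step, outcome, reason) for a vendor code,
    using the most recent mapping effective on or before event_date."""
    candidates = [
        row for row in CODE_MAP
        if row[0] == vendor and row[1] == code and row[2] <= event_date
    ]
    if not candidates:
        return None  # unmapped code: route to the validation queue
    latest = max(candidates, key=lambda row: row[2])
    return latest[3], latest[4], latest[5]

# The same raw code resolves differently before and after the taxonomy change:
print(normalize("vendor_b", "412", date(2023, 3, 1)))  # fail / image_quality
print(normalize("vendor_b", "412", date(2023, 7, 1)))  # retry / glare_detected
```

Because the mapping is data rather than branching logic, adding a vendor or re-dating a code is a change-controlled table edit, which matches the governance model described later.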
Implementation
- Discovery: Cataloged vendor event schemas and result codes, session analytics fields, and decision outputs; collected recent incidents and experiment readouts; documented identity and time mismatches; aligned with Risk and Privacy on allowed fields.
- Design: Defined the canonical funnel model and vendor code mappings; authored identity crosswalk rules and time normalization; specified validation checks and anomaly thresholds; designed dashboards and vendor evidence packs; agreed on roles for threshold changes and experiment guardrails.
- Build: Implemented ingestion and normalization jobs; created crosswalk and correlation logic; built validation and anomaly detectors; assembled dashboards and alerting; added Jira automations to open follow-ups with evidence.
- Testing/QA: Ran in shadow mode: produced correlated funnels and draft alerts while teams continued existing analysis; replayed past outages and experiments to tune mappings and thresholds; included human-in-the-loop reviews with PMs, Risk, Analytics, and Vendor Management.
- Rollout: Enabled dashboards and alerts for selected routes and vendors first; kept manual analyses as a controlled fallback; expanded platform and regional coverage as teams gained confidence; connected alerts to experiment guardrails after initial cycles.
- Training/hand-off: Delivered short sessions for PMs, Risk, Analytics, and Support on reading funnels, interpreting reasons, and using evidence packs; updated SOPs for experiment reviews and vendor communications; transferred ownership of mappings, thresholds, and dashboards to Product Ops and Analytics under change control.
Results
Investigations started from a shared funnel rather than from scattered exports. When friction rose, alerts linked to step-level evidence with normalized reasons and example sessions. PMs isolated whether issues were device permissions, capture UX, network latency, or vendor errors. Experiments that shifted traffic between vendors closed with clear attribution, and rollout decisions were less contentious because everyone referenced the same model and guardrails.
Vendor conversations changed tone. Evidence packs showed step outcomes, platform context, and normalized error reasons over the same time window, so requests for remediation were specific. Internally, Risk and Compliance saw how results mapped to policy expectations, and Support used the same narratives when helping customers. The onboarding flow, analytics tools, and flags stayed in place; the difference was a governed layer that tied vendor events to sessions and outcomes in a defensible way.
What Changed for the Team
- Before: Vendor logs, session analytics, and outcomes were analyzed separately. After: A unified funnel correlated all signals with shared identifiers and time.
- Before: Experiments dragged on with ambiguous readouts. After: Guardrails and anomaly alerts framed decisions with step-level evidence.
- Before: Vendor reviews relied on anecdotes. After: Evidence packs showed normalized reasons and examples by platform and region.
- Before: Identity mismatches broke joins. After: Crosswalks linked pre-auth sessions, vendor IDs, and accounts with effective dating.
- Before: Alerts were dashboard-driven and late. After: Policy-based detectors notified teams with links to curated views.
- Before: Privacy concerns limited access broadly. After: Hashed identifiers and role-based views let PMs and Risk collaborate safely.
Key Takeaways
- Standardize the funnel; normalize vendor reasons into shared steps before comparing performance.
- Fix identity and time first; crosswalks and normalization make correlation credible.
- Automate anomaly detection; policy-based alerts catch friction early and protect experiments.
- Package evidence; vendor-ready briefs reduce back-and-forth and accelerate fixes.
- Keep humans on thresholds; reviewers tune detectors and mappings as vendors and flows evolve.
- Integrate, don't replace; add correlation and governance around your vendors, warehouse, analytics, and flags.
FAQ
What tools did this integrate with? Vendor webhooks and files were ingested into the existing warehouse (for example, Snowflake). Session analytics flowed from the team's current tool, and feature flag metadata came from the existing platform (for example, LaunchDarkly). Correlation and time normalization followed patterns inspired by OpenTelemetry. Alerts linked to Jira issues for follow-ups, and dashboards ran in the company's BI layer.
How did you handle quality control and governance? Vendor code mappings, identity crosswalks, and anomaly thresholds lived under change control with Product Ops and Analytics ownership. Validation checks flagged orphaned events, inconsistent step order, and low-confidence joins for review. All mapping edits, threshold changes, alerts, and approvals were logged. Practices aligned with internal policies and identity proofing guidance such as NIST SP 800-63A.
How did you roll this out without disruption? The model ran in shadow mode, generating correlated funnels and draft alerts while teams used existing analyses. Past incidents and experiments were replayed to tune mappings and thresholds. Rollout began with selected vendors and routes, expanded by platform and region, and only then connected alerts to experiment guardrails. Manual paths remained as a controlled fallback early on.
How did you tie vendor events to sessions and outcomes? The correlation layer built crosswalks between pre-auth session IDs, vendor request/transaction IDs, and post-auth accounts. Time normalization aligned events with transport-specific windows. Where IDs were ambiguous, the system flagged low-confidence joins for review and stored rationale alongside edits.
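A simplified sketch of this correlation pass, under illustrative assumptions: field names (`vendor_request_id`, `request_id`, `ts`, `step`) and per-step window sizes are invented, and the real layer ran as warehouse jobs rather than in-memory Python.

```python
from datetime import datetime, timedelta

# Hypothetical per-step correlation windows; exact timestamp equality is not
# required, only proximity within the window for that step.
WINDOWS = {"document_capture": timedelta(minutes=5), "liveness": timedelta(minutes=2)}

def correlate(session_events, vendor_events):
    """Yield (session_event, vendor_event, confidence) tuples.
    Exact request-ID matches are high confidence; matches made only on
    step + time proximity are flagged low confidence for human review."""
    for v in vendor_events:
        window = WINDOWS.get(v["step"], timedelta(minutes=5))
        for s in session_events:
            if s.get("vendor_request_id") and s["vendor_request_id"] == v["request_id"]:
                yield (s, v, "high")
            elif s["step"] == v["step"] and abs(s["ts"] - v["ts"]) <= window:
                yield (s, v, "low")  # plausible join: route for review

# Example: no shared ID, but the events fall inside the liveness window.
base = datetime(2024, 1, 1, 12, 0)
sessions = [{"step": "liveness", "ts": base, "vendor_request_id": None}]
vendors = [{"step": "liveness", "ts": base + timedelta(minutes=1), "request_id": "r1"}]
matches = list(correlate(sessions, vendors))
print(matches[0][2])  # low-confidence match, queued for review
```

The two-tier confidence mirrors the case study's behavior: ambiguous joins are not dropped or silently accepted, but surfaced with rationale for a reviewer.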
How did anomaly detection work? Detectors watched step outcomes by vendor, platform, and region against expected bands. Signals included unusual fallout, elevated retries, and shifts in normalized error reasons. Alerts routed to owners with links to curated funnels, examples, and recent changes, and could pause experiments when guardrails were exceeded.
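One simple way to implement the "expected bands" described above is a z-score against recent history per (vendor, platform, step). This is a hedged sketch: the actual detectors were policy-driven and likely more sophisticated, and the threshold and floor values here are illustrative.

```python
from statistics import mean, stdev

def step_alerts(history, current, z_threshold=3.0):
    """Compare current step fallout rates to a band derived from recent
    history (mean plus z * stdev). Keys are (vendor, platform, step);
    values are fallout rates in [0, 1]. Thresholds are illustrative."""
    alerts = []
    for key, rates in history.items():
        if key not in current or len(rates) < 2:
            continue  # need both a current reading and enough history
        mu = mean(rates)
        band = max(stdev(rates), 0.005)  # floor so flat history still has a band
        z = (current[key] - mu) / band
        if z > z_threshold:
            alerts.append({"key": key, "rate": current[key], "expected": mu})
    return alerts

# A jump from ~10% to 25% liveness fallout on iOS trips the detector;
# a reading near the historical mean does not.
history = {("vendor_a", "ios", "liveness"): [0.10, 0.11, 0.09, 0.10]}
print(step_alerts(history, {("vendor_a", "ios", "liveness"): 0.25}))
print(step_alerts(history, {("vendor_a", "ios", "liveness"): 0.105}))
```

In the deployed system each alert would carry links to the curated funnel, sample sessions, and recent routing changes, as the FAQ answer describes.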
How did you protect PII? Curated views stored hashed identifiers and avoided vendor images or raw documents. Access to raw payloads remained limited to approved roles. All queries and exports were logged, and dashboards displayed only the fields needed for product and vendor decisions, aligned with internal privacy policies.
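Hashed identifiers of the kind described above are commonly produced with a keyed hash, so that tokens join consistently across curated tables but cannot be reversed or brute-forced from the curated layer alone. The exact scheme used here is not specified in the case study; the following is one standard approach, with an invented key.

```python
import hashlib
import hmac

# Illustrative secret key ("pepper"); in practice this lives in a secrets
# manager and never in code or in the curated views themselves.
PEPPER = b"rotate-me-out-of-band"

def hash_id(raw_id: str) -> str:
    """HMAC-SHA256 of an identifier: deterministic for joins, but not
    recoverable without the key."""
    return hmac.new(PEPPER, raw_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Deterministic tokens preserve joinability across tables:
assert hash_id("user-123") == hash_id("user-123")
assert hash_id("user-123") != hash_id("user-124")
```

A plain unsalted hash would be weaker here, since low-entropy identifiers (emails, phone numbers) can be enumerated; keying the hash is what keeps the curated layer safe on its own.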
Department/Function: Analytics & Executive Leadership, Legal & Compliance, Product Management & R&D, Strategy
Capability: Data Integration, Pipelines & Reliability