Overview

Quarterly business review decks at a telecom operator took too long to assemble because analysts rebuilt charts and pasted screenshots from different sources. Filters, cut-offs, and definitions drifted between teams, and last-minute edits pushed meetings into slide cleanup rather than discussion. Intelligex built a deck assembly workflow that pulled certified Tableau views and narrative summaries from a Databricks Delta table directly into Google Slides via API. A two-step approval captured content owner and executive sign-off before distribution. Reviews ran on fresher numbers with less rework because every slide drew from the same governed snapshot and template.

Client Profile

  • Industry: Telecommunications operator
  • Company size (range): Enterprise with regional business units and shared services
  • Stage: Established public company with recurring executive reviews
  • Department owner: Strategy, Analytics & Executive Leadership (Corporate Strategy and FP&A)
  • Other stakeholders: Sales Operations, Network Operations, Customer Care, Regional General Management, IT/Data Engineering, Investor Relations, Internal Audit, Legal & Compliance

The Challenge

QBR decks combined commercial, network, and customer experience views. Analysts exported Tableau charts to images, made ad hoc edits in spreadsheets, and pasted screenshots into Google Slides. Minor differences in filters, date cut-offs, and business unit roll-ups led to inconsistencies between sections. When leaders requested a change, teams reopened files, refreshed screenshots, and tried to keep formatting consistent across versions.

There were also governance gaps. The company used Databricks for curated data with Delta tables, Tableau for analytics, and Google Slides for distribution. But deck assembly happened outside these systems. There was no enforced cut-off for a reporting snapshot, no structured way to bind a slide to a source view and its parameters, and no approval trail. Analysts spent cycles chasing alignment, and the executive conversation competed with slide production.

Why It Was Happening

Content production was manual and fragmented. Screenshots broke lineage to data, so a small parameter change in Tableau did not propagate to slides reliably. Narrative sections were rewritten each quarter, with inconsistent definitions and stale text blocks pulled from old files. Because the process lacked a governed snapshot, two teams could present similar metrics with different time windows and still appear correct in isolation.

Approvals arrived late and informally. Decks were circulated by email with comments, and edits were sometimes overwritten by a later version. There was no single place to confirm which charts and narratives were final, which weakened audit trails and introduced risk for investor-facing variants of the review.

The Solution

We implemented deck assembly automation that draws from governed sources and writes into a standard Google Slides template. Tableau views were rendered through the REST API with predefined parameters and filters. Narrative summaries and key figures came from a Databricks Delta table that stored precomputed text and values by segment. A snapshot policy froze the dataset using Delta Lake capabilities at a defined cut-off. A two-step approval—content owner in Strategy/FP&A and the sponsoring executive—locked the deck before distribution. Nothing was replatformed: Tableau remained the analytics layer, Databricks remained the data engine, and Google Slides remained the distribution channel.

  • Data snapshot at cut-off using Delta Lake features so all charts and narratives reflect the same version (Delta Lake Time Travel)
  • Automated export of Tableau views as images with parameter locks and consistent styling via REST API (Tableau REST: Query View Image)
  • Narrative and KPI registry stored in a Databricks Delta table for reusable text blocks and figures (Databricks Delta Lake)
  • Google Slides template with placeholders for charts, captions, footnotes, and narrative text, populated via API (Google Slides API)
  • Assembly service that maps slide placeholders to Tableau view IDs and Delta fields, with a content catalog and parameter checks
  • Two-step approval workflow with reason codes and a change log; only approved decks move to the distribution folder
  • Job orchestration for scheduled refresh and assembly runs, with alerts on failures and data freshness (Databricks Jobs)
  • Role-based permissions that separate editors from approvers and viewers; immutable PDF export for external circulation
  • Audit metadata stamped on each deck: snapshot ID, view IDs, parameter set, approvers, and publication timestamp
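
To make the "parameter locks" concrete: Tableau's REST "Query View Image" endpoint accepts view filters as `vf_<field>` query parameters, so locked values from the content catalog can be applied at render time rather than typed by hand. The sketch below only builds the request URL; the server address, site ID, view ID, and filter fields are hypothetical placeholders, not values from the engagement.

```python
from urllib.parse import urlencode

def view_image_url(server, api_version, site_id, view_id, filters, high_res=True):
    """Build a Tableau REST 'Query View Image' URL for a view, applying
    locked filter values from the content catalog as vf_ parameters."""
    params = {f"vf_{field}": value for field, value in filters.items()}
    if high_res:
        params["resolution"] = "high"
    return (f"{server}/api/{api_version}/sites/{site_id}"
            f"/views/{view_id}/image?{urlencode(params)}")

# Hypothetical catalog entry: filters are locked per slide, not ad hoc.
url = view_image_url(
    "https://tableau.example.com", "3.21", "SITE-ID", "VIEW-ID",
    {"Region": "EMEA", "Snapshot Date": "2024-03-31"},
)
```

The actual request would carry an authenticated session token in the `X-Tableau-Auth` header; keeping the filter set in the catalog is what prevents two slides from silently rendering different windows of the same view.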

Implementation

  • Discovery: Cataloged the existing QBR deck structure, the Tableau views in use, and the narrative sections that repeated each quarter. Mapped Databricks Delta tables to the KPIs and text blocks analysts rebuilt by hand. Collected current approval paths and distribution lists.
  • Design: Defined a template with slide placeholders and a content catalog that bound each slide to a Tableau view ID, parameter set, and narrative field. Established a cut-off policy and snapshot procedure. Designed the approval workflow and change log. Documented naming and folder conventions for distribution.
  • Build: Implemented a snapshot job in Databricks and created the Delta table for narrative text and KPI inserts. Built the Tableau export scripts with parameter locks and styling controls. Developed the Google Slides assembly service to replace placeholders and attach footnotes. Wired the two-step approval and logging, and set up scheduled runs with alerts.
  • Testing and QA: Reproduced a prior quarter’s deck with the automation, reconciled figures to Tableau, and validated that narratives matched the Delta snapshot. Tested parameter variations and failed-refresh scenarios. Verified approvals, version stamps, and distribution permissions.
  • Rollout: Ran the automated deck in parallel with the manual process for one cycle. After sign-off from Strategy and FP&A, made the assembled deck the primary artifact and retained a manual override for exceptional slides. Expanded coverage to regional appendices once the core deck stabilized.
  • Training and hand-off: Provided quick guides for analysts on adding slides to the content catalog, for Strategy on approvals and change logs, and for IT on monitoring jobs and handling exceptions. Assigned stewardship for templates, view mappings, and narrative fields, with a cadence for review before each quarter-end.
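
The assembly service described above maps template placeholders to narrative fields and submits the substitutions in one Google Slides `batchUpdate` call. A minimal sketch, assuming a `{{placeholder}}` convention in the template; the mapping and snapshot values are illustrative, not the client's actual fields.

```python
def narrative_requests(slide_mappings, snapshot_row):
    """Build Google Slides batchUpdate requests that replace template
    placeholders (e.g. {{churn_narrative}}) with text from the
    governed snapshot row."""
    requests = []
    for placeholder, field in slide_mappings.items():
        requests.append({
            "replaceAllText": {
                "containsText": {"text": "{{%s}}" % placeholder, "matchCase": True},
                "replaceText": str(snapshot_row[field]),
            }
        })
    return requests

# Hypothetical mapping and a row pulled from the Delta narrative table.
reqs = narrative_requests(
    {"churn_narrative": "churn_text", "arpu_value": "arpu"},
    {"churn_text": "Churn improved 0.3 pts QoQ.", "arpu": "42.10"},
)
# The real call would be roughly:
# slides.presentations().batchUpdate(
#     presentationId=deck_id, body={"requests": reqs}).execute()
```

Because every substitution flows through one request body, the same code path can also stamp the audit footer (snapshot ID, view IDs, approvers) onto each deck.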

Results

QBRs moved from screenshot assembly to a governed workflow. Charts rendered directly from the same Tableau views leaders used day to day, and narratives pulled from a snapshot in Databricks. The two-step approval captured accountability, and distribution happened from a controlled folder with consistent permissions. Discussions focused on trends and actions rather than reconciling slides.

Rework declined because the template and mappings removed formatting debates and duplicated effort. When leaders asked for a drill-back, the deck referenced the snapshot ID and view IDs, so analysts answered with confidence. Investor-facing derivatives drew from the same approved deck with minor redactions, which simplified audit trails.

What Changed for the Team

  • Before: Analysts rebuilt charts and pasted screenshots. After: Slides populated via API from governed Tableau views and Delta narratives.
  • Before: Filters and cut-offs drifted by team. After: A snapshot policy locked the dataset, and parameters were enforced per slide.
  • Before: Approvals happened by email threads. After: A two-step approval and change log governed edits and publication.
  • Before: Deck versions multiplied across drives. After: Distribution used a controlled folder with immutable exports for external use.
  • Before: Formatting consumed prep time. After: A standard template and content catalog kept layout consistent and reusable.

Key Takeaways

  • Automate deck assembly from governed analytics and data snapshots; screenshots erode lineage and consistency.
  • Bind each slide to a source view and parameter set; enforce a cut-off so metrics match across sections.
  • Use a standard template and a content catalog to eliminate layout thrash and reduce rework.
  • Capture approvals and change logs before distribution to improve accountability and audit readiness.
  • Keep Tableau, Databricks, and Google Slides; orchestrate them with APIs, scheduling, and light governance.

FAQ

What tools did this integrate with?
The workflow pulled charts from Tableau using the REST API to render parameterized views as images, used Databricks Delta tables for KPI inserts and narrative text, assembled slides through the Google Slides API, and scheduled runs with Databricks Jobs. Delta Lake snapshotting ensured a consistent cut-off for each cycle (Time Travel).
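
For readers unfamiliar with Time Travel: Delta Lake lets a query pin itself to the table state at a timestamp (or version number), which is what makes a quarter-end cut-off enforceable. A minimal sketch that composes such a query; the table name and cut-off are hypothetical.

```python
def snapshot_query(table, cutoff_ts):
    """Pin a read to the quarter-end cut-off with Delta Time Travel, so
    every chart and narrative in the deck reads one table version."""
    return f"SELECT * FROM {table} TIMESTAMP AS OF '{cutoff_ts}'"

q = snapshot_query("gold.qbr_narratives", "2024-03-31 23:59:59")
```

`VERSION AS OF <n>` works the same way once the snapshot's version number has been recorded, which is the more durable handle to stamp into the deck's audit metadata.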

How did you handle quality control and governance?
A cut-off procedure created an immutable snapshot with a unique ID. Each slide mapped to a Tableau view ID and parameter set, stored in a content catalog. The assembly logged source IDs, parameter values, and the snapshot used. A two-step approval captured content owner and executive sign-off, and only approved decks were distributed. Immutable exports were used for external circulation to preserve consistency.
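
The ordering constraint in the two-step approval (content owner before executive) can be sketched as a small state object; role names, approver IDs, and reason strings below are illustrative, not the client's actual workflow schema.

```python
from dataclasses import dataclass, field

@dataclass
class DeckApproval:
    """Two-step approval: content owner signs first, executive second;
    only a fully approved deck may move to the distribution folder."""
    snapshot_id: str
    owner_ok: bool = False
    exec_ok: bool = False
    log: list = field(default_factory=list)  # change log: (role, approver, reason)

    def approve(self, role, approver, reason):
        if role == "executive" and not self.owner_ok:
            raise ValueError("executive sign-off requires owner approval first")
        if role == "owner":
            self.owner_ok = True
        elif role == "executive":
            self.exec_ok = True
        else:
            raise ValueError(f"unknown role: {role}")
        self.log.append((role, approver, reason))

    @property
    def publishable(self):
        return self.owner_ok and self.exec_ok

deck = DeckApproval(snapshot_id="snap-2024Q1-001")
deck.approve("owner", "fpa.lead", "figures reconciled to snapshot")
deck.approve("executive", "sponsor.exec", "approved for distribution")
```

The reason codes accumulate in `log`, which is what the distribution step stamps onto the deck alongside the snapshot ID.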

How did you roll this out without disruption?
We ran the automated assembly in parallel with the legacy process for a cycle, reconciled differences, and tuned mappings. Once Strategy and FP&A were comfortable, the automated deck became the primary artifact. Manual overrides remained for exceptional cases, and regional appendices were onboarded gradually.

How were narrative summaries created and kept accurate?
Narratives were generated from a curated Delta table that stored text blocks and key figures by segment and topic. Text referenced the same snapshot as the charts. Owners refreshed narrative entries as part of quarter-end prep, and the approval step required confirmation that narratives matched updated definitions and cut-offs.

How did you prevent parameter drift and formatting issues?
Parameter values for each view were locked in the content catalog and validated at assembly. The template controlled placement, fonts, and footnotes, so charts rendered consistently. If a view changed shape, the assembly flagged a mismatch and routed it to an analyst to adjust the mapping before publication.
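
The validation step can be as simple as diffing the parameters actually used at render time against the locked values in the catalog and flagging any mismatch for an analyst. A minimal sketch; the catalog entry shape and field names are assumptions for illustration.

```python
def validate_mapping(catalog_entry, rendered_params):
    """Return the fields whose render-time value drifted from the value
    locked in the content catalog, as {field: (locked, rendered)}."""
    locked = catalog_entry["params"]
    return {k: (locked.get(k), rendered_params.get(k))
            for k in set(locked) | set(rendered_params)
            if locked.get(k) != rendered_params.get(k)}

entry = {"view_id": "VIEW-ID", "params": {"Region": "EMEA", "Cutoff": "2024-03-31"}}
ok = validate_mapping(entry, {"Region": "EMEA", "Cutoff": "2024-03-31"})
drift = validate_mapping(entry, {"Region": "APAC", "Cutoff": "2024-03-31"})
```

An empty result means the slide renders exactly as cataloged; a non-empty one blocks publication and routes the mismatch to an analyst.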

You need a similar solution?

Get a FREE Proof of Concept & Consultation

No Cost, No Commitment!