Overview
A third-party logistics provider's network redesign effort bogged down because transportation costs were stale, warehouse constraints were inconsistent across sites, and every scenario required a bespoke spreadsheet. Leaders spent review time reconciling inputs instead of comparing options. Intelligex connected Transportation Management System (TMS) shipments, Warehouse Management System (WMS) throughput, and broker rate sources into a governed data model, then built a Python scenario engine that generated viable network designs under service and capacity rules. Options were exposed in a leadership dashboard with an approval history for selected designs. Decisions moved faster with fewer one-off analyses and clearer rationale, and the team aligned on shared assumptions without changing core systems.
Client Profile
- Industry: Third-party logistics (3PL) provider
- Company size (range): Enterprise with multi-client, multi-node fulfillment and transportation
- Stage: Mature operator refreshing regional network and procurement strategies
- Department owner: Strategy, Analytics & Executive Leadership (Network Strategy)
- Other stakeholders: Operations (Fulfillment and Transportation), Engineering/Continuous Improvement, Procurement, Finance/FP&A, Sales/Account Management, IT/Data Engineering, Legal & Compliance
The Challenge
Network decisions required a reliable view of flows, costs, and constraints. In practice, TMS held shipment histories and lane attributes; WMS tracked throughput, labor, and storage; and rate inputs were a mix of carrier contracts and broker benchmarks. Analysts exported data, adjusted spreadsheets per site, and applied local rules for dock and labor capacity. Assumptions drifted between models, and cost tables aged while teams reconciled cut-offs, accessorials, and fuel assumptions against the latest procurement view.
Governance was light. There was no single assumptions library, no standard treatment for service windows or cut-offs, and no audit trail binding a recommended design to the inputs used. Every redesign kicked off new models and email threads with competing versions of the same question. Leadership wanted to keep the current TMS and WMS, use existing rate sources, and add a consistent path from data to scenario to approval with clear ownership.
Why It Was Happening
Data and calendars were fragmented. TMS and WMS used different location keys and time windows, rate tables refreshed on separate cadences, and fuel and accessorial policies varied by analyst. Scenario work happened in isolated spreadsheets, which broke lineage to sources and created multiple truths. When a late rate update arrived, it was unclear which scenarios to update and why outcomes shifted.
Constraints were interpreted inconsistently. Service promises, cut-offs, dock slots, labor shifts, and storage limits were applied differently by site and client. Without encoded rules and a shared validation layer, two models could claim feasibility while assuming different operational realities. The lack of a review gate meant proposals reached leadership before Procurement and Operations validated assumptions.
The Solution
We implemented a governed pipeline and scenario engine that drew on current TMS, WMS, and rate sources, encoded operational rules, and exposed choices in a single decision view. Shipment and lane data flowed from TMS; throughput and capacity signals came from WMS; broker and contract rates refreshed on a set cadence. A Python engine using a combinatorial optimizer generated feasible designs under service and capacity constraints, and a dashboard presented cost, service, and risk trade-offs for each option. An approval workflow captured Operations, Procurement, and Finance sign-offs with reason codes. Core systems remained; the orchestration standardized inputs, rules, and reviews.
- TMS shipment and lane ingestion mapped to lanes, modes, service windows, and accessorials (for example, Oracle Transportation Management)
- WMS throughput and capacity feeds for pick/pack rates, dock slots, labor shifts, and storage by zone (for example, Manhattan WMS)
- Broker and contract rate inputs, including spot benchmarks from sources such as DAT Freight & Analytics and lane-specific accessorial policies
- Conformed data model and snapshots in Snowflake, with transformations and validations in dbt
- Python scenario engine with encoded service and capacity constraints using a mature optimizer (OR-Tools)
- Orchestration for data refresh and scenario runs using Apache Airflow, with alerts on stale rates and failed validations
- Leadership dashboard in Power BI showing options, cost drivers, service coverage, and utilization with drill-through to inputs
- Approval workflow in ServiceNow capturing Operations, Procurement, and Finance sign-offs, reason codes, and change logs
- Role-based access separating client-sensitive details from internal summaries; audit trail for assumptions, models, and approvals
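To make the scenario engine's approach concrete, the sketch below enumerates candidate market-to-site assignments and keeps only those that satisfy service-window and capacity rules. It is a deliberately simplified, pure-Python stand-in for the OR-Tools formulation described above; all site names, demands, and limits are illustrative assumptions, not client data.

```python
from itertools import product

# Illustrative inputs; in the described pipeline these came from TMS/WMS feeds.
sites = {"DC_EAST": {"capacity": 1200}, "DC_WEST": {"capacity": 900}}
markets = {
    "NYC": {"demand": 700, "transit_days": {"DC_EAST": 1, "DC_WEST": 4}},
    "LAX": {"demand": 600, "transit_days": {"DC_EAST": 4, "DC_WEST": 1}},
    "CHI": {"demand": 500, "transit_days": {"DC_EAST": 2, "DC_WEST": 2}},
}
MAX_TRANSIT_DAYS = 2  # service promise (assumed value)

def feasible_designs():
    """Enumerate market->site assignments, keeping only designs that
    meet the service window and fit within site capacity."""
    designs = []
    for assignment in product(sites, repeat=len(markets)):
        plan = dict(zip(markets, assignment))
        # Service constraint: every market reached within the promised window.
        if any(markets[m]["transit_days"][s] > MAX_TRANSIT_DAYS
               for m, s in plan.items()):
            continue
        # Capacity constraint: assigned demand fits each site's throughput.
        load = {s: 0 for s in sites}
        for m, s in plan.items():
            load[s] += markets[m]["demand"]
        if any(load[s] > sites[s]["capacity"] for s in sites):
            continue
        designs.append(plan)
    return designs
```

At production scale, brute-force enumeration gives way to a constraint solver, but the shape is the same: encoded rules filter the design space before any cost comparison reaches leadership.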
Implementation
- Discovery: Cataloged TMS shipments and lane attributes, WMS capacity metrics and calendars, and rate sources and refresh cadences. Mapped location and client identifiers, common accessorials, and service-level commitments. Reviewed prior redesign decks to pinpoint recurring reconciliation and assumption conflicts.
- Design: Defined a shared identity and calendar model, cost component taxonomy, and an assumptions library (fuel basis, accessorials, service windows, cut-offs). Authored constraint encodings for dock, labor, and storage by site. Designed validation checks and exception categories. Planned dashboard views and the approval path with reason codes and required reviewers.
- Build: Landed TMS, WMS, and rate feeds into Snowflake; implemented dbt models for identity stitching, cost normalization, and validations; built the Python scenario engine with configurable constraints and scenario parameters; scheduled runs via Airflow; developed Power BI views with drill paths; and configured ServiceNow approvals and audit logging.
- Testing and QA: Replayed recent bids and redesigns to reconcile outcomes against known decisions. Verified identity joins, rate application, and accessorial logic. Stress-tested constraint encodings for edge cases (seasonal volume spikes, labor shifts, dock bottlenecks). Tuned scenario defaults and validations with Operations and Procurement. Exercised the approval flow and change logs.
- Rollout: Launched read-only scenario views alongside legacy spreadsheets. After validation, enabled approvals for targeted clients and regions. Expanded coverage as teams gained confidence, keeping a manual override for time-sensitive changes with post-review documentation.
- Training and hand-off: Delivered concise guides for analysts on scenario parameters and constraints, for Operations on stewardship of capacity inputs, for Procurement on rate refresh and accessorial policies, and for Finance on cost component interpretation. Established ownership for the assumptions library, constraint rules, and approvals with a set review cadence.
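The identity stitching mentioned in the design and build phases can be sketched as a crosswalk from source-system location keys to a conformed site ID. In the actual build this logic lived in dbt models; the keys, IDs, and field names below are hypothetical.

```python
# Hypothetical crosswalks from TMS/WMS location keys to a conformed site ID.
TMS_TO_SITE = {"OTM-PHL-01": "SITE_PHL", "OTM-DFW-02": "SITE_DFW"}
WMS_TO_SITE = {"WH_PHILA": "SITE_PHL", "WH_DALLAS": "SITE_DFW"}

def stitch(tms_rows, wms_rows):
    """Join shipment and capacity records on the conformed site ID,
    failing loudly on unmapped keys rather than silently dropping them."""
    shipments_by_site = {}
    for row in tms_rows:
        site = TMS_TO_SITE.get(row["origin_loc"])
        if site is None:
            raise ValueError(f"Unmapped TMS location: {row['origin_loc']}")
        shipments_by_site.setdefault(site, []).append(row)
    capacity_by_site = {}
    for row in wms_rows:
        site = WMS_TO_SITE.get(row["warehouse"])
        if site is None:
            raise ValueError(f"Unmapped WMS warehouse: {row['warehouse']}")
        capacity_by_site[site] = row["daily_throughput"]
    return {
        s: {"shipments": shipments_by_site.get(s, []),
            "capacity": capacity_by_site.get(s)}
        for s in set(shipments_by_site) | set(capacity_by_site)
    }
```

Raising on unmapped keys mirrors the validation-and-exception approach described above: identity gaps surface as exceptions to fix, not as quietly missing volume.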
Results
Leadership compared network options built from the same cost, service, and capacity assumptions. Each scenario was stamped with a snapshot and an assumptions list, and drill-through showed the shipment, rate, and site capacity inputs behind outcomes. Operations and Procurement validated rules and rates before executive reviews, and Finance saw cost components with consistent treatment of accessorials and fuel.
One-off spreadsheet analyses were replaced by repeatable scenario runs. Rate changes and capacity updates propagated through scheduled refreshes, with validations catching stale or conflicting inputs. Approval histories captured the rationale for selections, which reduced rework and made it easier to explain trade-offs to clients and internal stakeholders.
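The snapshot stamping described above can be implemented as a deterministic fingerprint of a scenario's assumptions, so identical inputs always carry the same stamp and any assumption change is visible at a glance. This is a minimal sketch; the assumption names are illustrative.

```python
import hashlib
import json

def stamp_scenario(assumptions: dict) -> str:
    """Produce a deterministic fingerprint of the assumptions behind a
    scenario; two runs with identical inputs yield the same stamp."""
    canonical = json.dumps(assumptions, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]
```

Because keys are sorted before hashing, the stamp is stable regardless of the order assumptions were entered, while any change to a value (say, the fuel basis) yields a different stamp.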
What Changed for the Team
- Before: Scenario models lived in spreadsheets with bespoke rules. After: A Python engine generated options under shared, encoded constraints.
- Before: Rates and accessorials were stale or applied inconsistently. After: Scheduled feeds and a common cost taxonomy kept scenarios aligned.
- Before: TMS and WMS identifiers and calendars conflicted. After: A conformed model harmonized location keys and time windows.
- Before: Approvals and assumptions were buried in decks. After: ServiceNow recorded sign-offs, reason codes, and scenario snapshots.
- Before: Reviews reopened data questions. After: Dashboards showed sourced inputs and utilization with drill-through for validation.
Key Takeaways
- Unify transportation, warehouse, and rate data under a shared identity and calendar; network decisions depend on consistent inputs.
- Encode service and capacity constraints in the scenario engine; feasibility should be enforced, not assumed.
- Standardize cost components and accessorials; a clear taxonomy prevents silent changes and misinterpretation.
- Freeze inputs per cycle and capture approvals with reason codes; traceability accelerates alignment and client conversations.
- Keep TMS, WMS, and rate sources; layer a governed pipeline, optimizer, and decision views to reduce reconciliation without replatforming.
FAQ
What tools did this integrate with?
We ingested shipments and lanes from TMS (for example, Oracle Transportation Management), throughput and capacity from WMS (for example, Manhattan WMS), and broker and contract rates from existing procurement sources such as DAT. Data landed in Snowflake, transformations ran in dbt, scenarios were produced by a Python engine using OR-Tools and scheduled with Apache Airflow, views were delivered in Power BI, and approvals executed in ServiceNow.
How did you handle quality control and governance?
A shared assumptions library defined fuel basis, accessorial policies, service windows, cut-offs, and constraint defaults. dbt tests validated identity joins, stale rates, and out-of-range values. Airflow alerted on failed refreshes. Each scenario carried a snapshot and assumptions list, and ServiceNow captured approvals with reason codes. An audit trail bound outcomes to inputs and model versions.
How did you roll this out without disruption?
We ran the dashboard and engine in read-only mode alongside existing spreadsheets while teams validated outcomes and tuned assumptions. Once confident, we activated approvals for selected regions and clients, expanding coverage progressively. Core systems and workflows remained; the new layer standardized data, rules, and reviews around them.
How were service-level and capacity constraints modeled?
Constraints included promised delivery windows by market, fulfillment cut-offs, dock slot schedules, labor shifts, pick/pack rates, and storage by zone. These were encoded as parameters in the optimizer and sourced from WMS and operations standards. Exceptions routed to a human-in-the-loop review when site-specific realities warranted a deviation, with rationale recorded.
How did you keep costs current and comparable across scenarios?
Rate tables refreshed on a defined cadence and included fuel basis and accessorial schedules. Cost components were normalized to a common taxonomy and calendar so scenarios compared like with like. When Procurement updated a lane or accessorial, validations flagged the change and downstream scenarios re-computed automatically, with version stamps visible in the dashboard.
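Normalizing an all-in lane rate into the common cost taxonomy can be sketched as below. The component names and the fuel-surcharge formula are simplified assumptions for illustration, not the client's actual rate structure.

```python
def normalize_lane_cost(rate):
    """Decompose a lane rate into the shared taxonomy (linehaul, fuel,
    accessorials) so scenarios compare like with like."""
    linehaul = rate["base_per_mile"] * rate["miles"]
    fuel = linehaul * rate["fuel_surcharge_pct"]  # fuel on a shared basis
    accessorials = sum(rate.get("accessorials", {}).values())
    return {
        "linehaul": round(linehaul, 2),
        "fuel": round(fuel, 2),
        "accessorials": round(accessorials, 2),
        "total": round(linehaul + fuel + accessorials, 2),
    }
```

Once every lane is expressed in the same components, a Procurement update to one input (fuel basis, an accessorial schedule) changes exactly one taxonomy line, which is what makes the automatic re-computation and version stamping tractable.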