Overview

A multi-hospital healthcare system was running supplier reviews off ad hoc spreadsheets. Each category manager defined scorecards differently, sourced data from different systems, and spent meeting time debating whose numbers were right. Intelligex standardized the metric library inside the existing Supplier Relationship Management (SRM) tool and automated feeds from Accounts Payable (AP), Quality Assurance (QA), and logistics systems. With a common data model, governance, and a supplier-facing view, business reviews shifted to root-cause analysis and improvement actions instead of reconciling reports.

Client Profile

  • Industry: Healthcare provider network (acute, ambulatory, and outpatient)
  • Company size (range): Multi-hospital system with regional supply distribution
  • Stage: Established ERP, SRM, AP automation, and basic logistics visibility
  • Department owner: Procurement, Supply Chain & Logistics
  • Other stakeholders: Clinical sourcing, Quality and patient safety, Finance/AP, Materials management, Logistics and receiving, Legal and compliance, IT applications

The Challenge

Supplier performance was measured differently by every category. One team weighted on-time delivery heavily, another emphasized invoice match rates, and a third focused on complaint counts. Underlying data came from different extracts and time windows. Suppliers disputed findings, and category managers burned cycles stitching together evidence before each quarterly business review.

Core platforms were already in place. The Enterprise Resource Planning (ERP) system handled purchase orders, pricing, and receipts. AP automation tracked invoice exceptions. The Quality Management System (QMS) captured nonconformances and complaints. Logistics systems recorded Advanced Shipping Notice (ASN) accuracy and delivery status. None of them owned shared metric definitions or offered a single place to see performance by supplier, contract, and category. Leadership asked for standard scorecards inside the SRM, automated data refresh, and a clear governance model so results could be trusted.

Why It Was Happening

Metrics and definitions were fragmented. On-time meant appointment adherence for one group and dock timestamp for another. Damage rates were measured at the case level in one file and at the line level in another. Date ranges varied by report, and supplier identities did not map cleanly across AP, ERP, and logistics. Without a canonical data model and metric dictionary, well-meaning teams produced incompatible views.

Processes reinforced the fragmentation. Scorecards were assembled close to review dates, which meant limited time to reconcile exceptions with suppliers. Disputes were handled in email threads with no consistent reason codes or audit trail. Suppliers lacked a transparent view of how scores were constructed, so debates focused on inputs rather than fixes.

The Solution

Intelligex implemented a standardized, SRM-embedded scorecard framework and automated the data pipeline from AP, QA, and logistics. A canonical supplier and item model aligned records across systems, metrics were defined once with category-specific weightings, and data refreshed on a predictable cadence. Suppliers saw the same facts as category managers and could request corrections through a governed workflow. Review packs were generated automatically, with drill-down to the underlying events and documents.

  • Integrations: Bi-directional sync with ERP for supplier master, contracts, purchase orders, receipts, and price agreements (for example, SAP S/4HANA); AP automation feeds for invoice-match exceptions and payment timeliness; QMS for nonconformances and complaints; Transportation/Warehouse systems for ASN accuracy, on-time delivery, and receiving discrepancies; optional item identity alignment using GS1 GTIN where available.
  • Metric library: Standard definitions for on-time delivery, fill rate/complete-order rate, ASN accuracy, receiving discrepancy rate, invoice match rate, price adherence, corrective action responsiveness, complaint/NC counts, and quality holds; category-specific weightings maintained centrally (a minimal sketch of this pattern follows the list).
  • Canonical data model: Unified supplier identities, contract references, category taxonomy aligned to the United Nations Standard Products and Services Code (UNSPSC), and normalized date/time intervals.
  • Data pipeline and validations: Scheduled ingestion from source systems with schema checks; deduplication and near-match logic for supplier identities; validations for time-window consistency and unit-of-measure alignment; reason codes for excluded records.
  • Supplier portal: Permissions-aware views of current and historical scores, trend charts, and issue lists; dispute submission with evidence upload and SLA-driven responses.
  • Review packs and dashboards: Auto-generated business review decks with summary scores, trends, and drill-down to line-level exceptions; portfolio views across suppliers, categories, and facilities.
  • Governance and approvals: Human-in-the-loop review for metric definition changes, weighting updates, and supplier dispute resolutions; version history and audit trails for all edits.
  • Permissions: Role-based access for category managers, suppliers, QA, AP, and leadership; read-only views for clinicians and compliance; immutable logs of data refreshes and decisions.
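
To make the define-once, weight-by-category pattern concrete, the sketch below shows the shape of the calculation in Python. The metric names, grace window, weights, and scores are illustrative assumptions, not the production schema or the governed values.

```python
from datetime import datetime, timedelta

# Shared metric definition: "on time" is computed one way for every category.
# The 24-hour grace window is an illustrative assumption; in practice it is
# governed centrally alongside the rest of the metric dictionary.
GRACE = timedelta(hours=24)

def on_time(promised: datetime, received: datetime) -> bool:
    return received - promised <= GRACE

# Category-specific weightings over the shared metric library (hypothetical values).
WEIGHTS = {
    "med-surg":        {"on_time_delivery": 0.40, "invoice_match": 0.20, "quality": 0.40},
    "office-supplies": {"on_time_delivery": 0.15, "invoice_match": 0.45, "quality": 0.40},
}

def composite_score(metrics: dict[str, float], category: str) -> float:
    """Weighted composite: same metric values, category-governed weights."""
    weights = WEIGHTS[category]
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(metrics[name] * w for name, w in weights.items())

# Identical metric values yield different composites depending on category weights.
metrics = {"on_time_delivery": 92.0, "invoice_match": 80.0, "quality": 97.0}
print(on_time(datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 2, 8, 0)))  # True: 23h, within grace
print(round(composite_score(metrics, "med-surg"), 1))         # 91.6
print(round(composite_score(metrics, "office-supplies"), 1))  # 88.6
```

Because every category reads the same metric values, a composite can always be decomposed back to the shared definitions and, from there, to the underlying events.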

Implementation

  • Discovery: Mapped current scorecard practices across categories; inventoried source data and refresh cadences in ERP, AP, QMS, and logistics; identified common disputes and definitional differences; cataloged supplier identity mismatches and contract references (the identity-matching approach is sketched after this list).
  • Design: Defined the canonical supplier and category model; built the metric dictionary and status glossary; set a standard time window and refresh schedule; designed event schemas and reason codes for disputes and corrections; agreed on a baseline weighting by category with a governed change process.
  • Build: Implemented data connectors and the transformation layer; configured the SRM scorecard module and supplier portal; created validation and exception queues; built review pack templates and leadership dashboards.
  • Testing/QA: Replayed prior quarters’ reviews in a sandbox; reconciled key suppliers’ scores against legacy spreadsheets; validated metric calculations end-to-end; piloted supplier disputes in observe-only mode; enforced human-in-the-loop checks for weighting changes.
  • Rollout: Launched with a subset of categories and strategic suppliers; kept the legacy spreadsheet scorecards as a reference; enabled automated refresh and review packs after data quality stabilized; expanded to the broader supplier base and categories in phases.
  • Training/hand-off: Scenario-based sessions for category managers, QA, AP, and logistics; quick guides embedded in the SRM; supplier enablement webinars on portal use and dispute SLAs; transitioned operations to procurement analytics with IT support on call.
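
One build task deserves a closer look: aligning supplier identities across AP, ERP, QMS, and logistics extracts, as flagged in the Discovery step above. Below is a minimal near-match sketch using only the Python standard library; the normalization rules and the 0.88 similarity threshold are assumptions to be tuned against known duplicates, not the production matching logic.

```python
import difflib
import re

def normalize(name: str) -> str:
    """Fold case, strip punctuation and common legal suffixes before comparing."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    name = re.sub(r"\b(inc|llc|ltd|corp|co|company)\b", "", name)
    return re.sub(r"\s+", " ", name).strip()

def near_match(a: str, b: str, threshold: float = 0.88) -> bool:
    """Flag two supplier names as a probable match. The threshold is an
    assumption to tune against known duplicates, not a validated value."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# Example: the same supplier recorded differently in AP and ERP extracts.
print(near_match("Acme Medical Supply, Inc.", "ACME MEDICAL SUPPLY"))  # True
print(near_match("Acme Medical Supply, Inc.", "Apex Medical Group"))   # False
```

In a deployment like this one, candidate pairs scoring below full confidence would land in the exception queue for human review rather than being merged automatically.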

Results

Scorecards became consistent and defensible. Category managers and suppliers looked at the same definitions, time windows, and drill-down data. AP exceptions, QA findings, and logistics events fed the SRM routinely, so reviews reflected current performance rather than stitched-together snapshots. Suppliers could contest inputs with evidence, and decisions were captured in a clear audit trail.

Business reviews changed tone. Meetings shifted from reconciling numbers to identifying root causes and agreeing on corrective actions. Trends and outliers were visible across categories and facilities, so leadership could focus on systemic issues. Category managers spent less time preparing decks and more time driving improvements with stakeholders.

What Changed for the Team

  • Before: Each category built its own scorecard; After: A standardized metric library and weightings lived in the SRM.
  • Before: Data prep required one-off extracts; After: AP, QA, and logistics feeds refreshed on a schedule with validations and exception queues.
  • Before: Suppliers disputed definitions and time windows; After: Shared dashboards and a dispute workflow created transparency and faster resolution.
  • Before: Meetings debated data; After: Reviews focused on corrective actions and sustained improvements.
  • Before: Identity mismatches across systems caused confusion; After: A canonical supplier model aligned records across ERP, AP, QMS, and logistics.

Key Takeaways

  • Standardize the metric dictionary and weightings inside the SRM so categories compare performance on the same basis.
  • Automate data feeds from ERP/AP/QMS/logistics and validate at ingest to avoid late-cycle reconciliation.
  • Unify supplier identities and category taxonomy using recognized standards such as UNSPSC and, where applicable, GS1 GTIN.
  • Give suppliers a transparent view and a governed dispute process so conversations shift to improvements.
  • Introduce human-in-the-loop controls for metric and weighting changes to protect comparability over time.
  • Roll out by category and supplier tier, validate against legacy scorecards, then retire spreadsheets once results are trusted.

FAQ

What tools did this integrate with?
The scorecard framework synchronized supplier master, contracts, purchase orders, receipts, and pricing from the ERP (for example, SAP S/4HANA). It ingested invoice-match and payment data from AP automation, nonconformances and complaints from the QMS, and ASN/delivery and receiving discrepancy events from logistics systems. Category taxonomy aligned to UNSPSC, and item identity could reference GS1 GTIN for product-level rollups where used.

How did you handle quality control and governance?
We defined a metric dictionary with version control, enforced a shared refresh cadence, and validated feeds for schema, time-window alignment, and identity mapping. Disputes followed a human-in-the-loop workflow with evidence, reason codes, and SLAs. Changes to metric definitions or weightings required approvals, and every decision was audit-logged with user, timestamp, and context.
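
Structurally, versioned and audit-logged changes can be as simple as one immutable record per edit. The sketch below illustrates one possible shape in Python; the field names are assumptions, not the actual SRM schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative audit record for a governed change; field names are assumptions.
@dataclass(frozen=True)
class ChangeRecord:
    object_type: str          # e.g. "metric_definition" or "category_weighting"
    object_id: str            # which definition or weighting set changed
    version: int              # monotonically increasing version per object
    proposed_by: str          # requester
    approved_by: str          # human-in-the-loop approver, required before activation
    effective_from: datetime  # the new version applies only from this date forward
    rationale: str            # context captured with the decision
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Keying each scorecard run to the version in effect for its time window keeps historical scores reproducible even after definitions change.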

How did you roll this out without disruption?
We piloted with a few categories and strategic suppliers while continuing legacy spreadsheet scorecards as a reference. The SRM ran in observe-only mode initially, generating review packs without replacing the old decks. After results matched expectations and disputes flowed successfully through the portal, we expanded coverage and formally retired the spreadsheets.

How were suppliers involved in the process?
Suppliers received portal access to view their scorecards, drill into exceptions, and submit disputes with supporting documents. They saw the same metric definitions and time windows. Resolution outcomes were published back to both parties, and corrective actions were tracked in the same workspace to keep reviews focused.

Can metrics differ by category without losing comparability?
Yes. The metric library is shared, but weightings can vary by category with governance. For example, on-time delivery may carry more weight in med-surg than in office supplies, while invoice accuracy may be emphasized for services. Weighting changes are versioned and approved so trends remain meaningful over time.
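
For illustration, with hypothetical weights and scores: suppose med-surg weights on-time delivery at 40% and invoice accuracy at 20%, while office supplies reverses the emphasis at 15% and 45%. A supplier scoring 92 on delivery and 80 on invoicing contributes 36.8 + 16.0 = 52.8 points to its med-surg composite from those two metrics, but only 13.8 + 36.0 = 49.8 in office supplies; both composites drill down to the same underlying metric values.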

Need a similar solution?

Get a FREE Proof of Concept & Consultation

No Cost, No Commitment!