Context

Flanders plays a central role as a European logistics hub. Supply chains increasingly rely on multimodal transport (road, inland waterways, rail, maritime) to balance cost, lead time, and sustainability.

In practice, shipment tracking is often fragmented:

  • Different carriers, terminals, and platforms each expose partial data.
  • End-to-end visibility requires manual monitoring of multiple portals and bespoke integrations.
  • Sharing operational data across organizations raises questions about confidentiality, accountability, and “share only what’s needed”.

The use case: tracking shipments across a multimodal chain

The goal is to track the location and status of shipments (goods in transit) throughout a supply chain, across multiple legs and transport modes, with enough precision to:

  • Improve predictability of arrival times (ETA).
  • Enable synchromodal decisions such as rerouting or switching transport modes.
  • Give stakeholders timely, reliable, role-appropriate visibility without centralizing all data.

Example flows include multimodal deliveries and inbound maritime shipments that continue via inland waterways to inland terminals—where tracking otherwise requires monitoring multiple platforms.

Solution approach

This use case aligns with logistics data space thinking: a federated architecture that enables decentralized data exchange, so data can flow between stakeholders without transferring data ownership.

Key ingredients:

  • Interoperable semantics for tracking events and shipment-related concepts, so “location”, “handover”, “arrival”, and “exception” are consistently interpreted (see the event sketch after this list).
  • Event-driven data sharing to keep participants up-to-date with what happens in transit.
  • Policy-based access and usage control, to share only what is necessary for a role/purpose, and to support accountability (who accessed what, when, and under which policy).
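
As a minimal illustration of what interoperable tracking-event semantics could look like, the sketch below models a single handover event as JSON-LD. The vocabulary namespace, identifiers, and field names are assumptions made for illustration only; a real deployment would reuse an agreed logistics ontology so all parties interpret the terms identically.

```typescript
// Minimal sketch of a multimodal tracking event as JSON-LD.
// The vocabulary IRIs and field names below are illustrative assumptions,
// not a prescribed ontology for this use case.

interface TrackingEvent {
  "@context": Record<string, string>;
  "@id": string;
  "@type": string;           // e.g. "Arrival", "Handover", "Exception"
  shipment: string;          // IRI of the shipment being tracked
  location: string;          // IRI of the place where the event happened
  transportMode: "road" | "inlandWaterway" | "rail" | "maritime";
  eventTime: string;         // xsd:dateTime
  reportedBy: string;        // WebID of the reporting carrier/terminal
}

// Example: a handover from a maritime leg to an inland-waterway leg.
const handover: TrackingEvent = {
  "@context": {
    // Hypothetical vocabulary namespace for illustration only.
    "@vocab": "https://example.org/logistics#",
  },
  "@id": "https://example.org/events/evt-001",
  "@type": "Handover",
  shipment: "https://example.org/shipments/shp-42",
  location: "https://example.org/terminals/antwerp-deepsea",
  transportMode: "inlandWaterway",
  eventTime: "2024-05-13T08:30:00Z",
  reportedBy: "https://terminal.example/profile/card#me",
};

console.log(JSON.stringify(handover, null, 2));
```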

Technological exploration

We explored the core technology choices for enforcing usage control policies in a Solid ecosystem:

  • Solid pods as user/organization-controlled stores, exposing resources via LDP and identifying agents via WebID (typically authenticated via Solid-OIDC).
  • Solid authorization via WAC and/or ACP (the latter stored in Access Control Resources, ACRs) for what the server can enforce at request time.
  • ODRL to express richer usage control constraints (e.g., temporal constraints) that exceed WAC/ACP expressivity.
  • A Usage Control Policy Knowledge Graph (UCP KG) maintained by the resource owner, where ODRL policies live and can be updated over time.
  • A long-running, rule-based Web agent that subscribes to changes in the UCP KG and materializes ODRL policies into enforceable Solid access-control rules (e.g., ACP ACR updates).
  • Condition–action rules (expressed as N3 rules) to decompose policy constraints into executable tasks (e.g., grant now, revoke later); a sketch of this decomposition follows the list.
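
The sketch below shows, under simplifying assumptions, what such a condition–action decomposition could look like: a single ODRL permission with a temporal constraint is turned into a “grant now” task and a scheduled “revoke later” task, and the grant is rendered as a schematic ACP snippet. The types, the simplified policy shape, and the Turtle fragment are illustrative stand-ins; the actual agent in this exploration expresses its rules in N3 and operates on the UCP KG and ACRs directly.

```typescript
// Illustrative decomposition of a temporally constrained ODRL permission
// into enforceable tasks. All names and shapes are assumptions for this sketch.

interface OdrlTemporalPermission {
  target: string;        // resource IRI in the pod
  assignee: string;      // WebID receiving access
  action: "read" | "write";
  validUntil?: string;   // simplified odrl:dateTime constraint (ISO 8601)
}

type Task =
  | { kind: "grantNow"; target: string; assignee: string; action: string }
  | { kind: "revokeAt"; target: string; assignee: string; at: Date };

// Condition–action decomposition: which enforceable tasks follow from the policy?
function decompose(p: OdrlTemporalPermission, now: Date): Task[] {
  const tasks: Task[] = [];
  const deadline = p.validUntil ? new Date(p.validUntil) : undefined;

  // Condition: the permission is (still) valid now -> action: grant access.
  if (!deadline || deadline.getTime() > now.getTime()) {
    tasks.push({ kind: "grantNow", target: p.target, assignee: p.assignee, action: p.action });
  }
  // Condition: the permission expires later -> action: schedule a revocation.
  if (deadline && deadline.getTime() > now.getTime()) {
    tasks.push({ kind: "revokeAt", target: p.target, assignee: p.assignee, at: deadline });
  }
  return tasks;
}

// A "grantNow" task could be materialized as an ACP Access Control Resource
// (schematic Turtle; prefixes and matcher shape simplified for illustration).
function toAcpTurtle(t: Extract<Task, { kind: "grantNow" }>): string {
  return `
@prefix acp: <http://www.w3.org/ns/solid/acp#> .
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<#policy> a acp:Policy ;
  acp:allow acl:${t.action === "read" ? "Read" : "Write"} ;
  acp:anyOf <#matcher> .

<#matcher> a acp:Matcher ;
  acp:agent <${t.assignee}> .
`.trim();
}

// Example: a read permission on a shipment's event log, valid until June 1st.
const tasks = decompose(
  {
    target: "https://pod.example/shipments/shp-42/events/",
    assignee: "https://carrier.example/profile/card#me",
    action: "read",
    validUntil: "2024-06-01T00:00:00Z",
  },
  new Date("2024-05-13T08:30:00Z"),
);
for (const t of tasks) {
  if (t.kind === "grantNow") console.log(toAcpTurtle(t));
  else console.log(`schedule revocation at ${t.at.toISOString()}`);
}
```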

The interoperable policy evaluation work (FORCE) comprises:

  • An ODRL Compliance Report Model to express evaluation results in a structured, explainable way (not just allow/deny, but also why).
  • An ODRL Evaluator that is stateless, requiring necessary facts (e.g., current time) to be provided explicitly via a State of the World (a sketch of this evaluation flow follows the list).
  • An ODRL Test Suite to compare different evaluators for interoperability/consistency across implementations.
  • A web demonstrator UI to explore policies, requests, state-of-the-world inputs, and human-readable explanations of the resulting compliance report.
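
To make the stateless evaluation idea concrete, the following sketch evaluates a single simplified permission against a request and an explicitly supplied state of the world, and returns a structured report that records each checked premise rather than a bare allow/deny. The types and the evaluate function are hypothetical stand-ins, not the actual FORCE API or the ODRL Compliance Report Model.

```typescript
// Hypothetical, simplified stand-in for stateless policy evaluation:
// policy + request + explicit state of the world in, explainable report out.

interface TemporalPermission {
  rule: string;                // IRI of the ODRL permission
  assignee: string;            // WebID allowed to act
  action: "read" | "write";
  target: string;              // resource IRI
  notAfter?: string;           // simplified temporal constraint (ISO 8601)
}

interface AccessRequest {
  requester: string;
  action: "read" | "write";
  target: string;
}

interface StateOfTheWorld {
  currentTime: string;         // provided explicitly: the evaluator keeps no clock
}

interface RuleReport {
  rule: string;
  activationState: "Active" | "Inactive";
  // Explainability: every checked premise is reported, not just the verdict.
  checks: Array<{ premise: string; satisfied: boolean }>;
}

function evaluate(p: TemporalPermission, req: AccessRequest, sotw: StateOfTheWorld): RuleReport {
  const checks = [
    { premise: "requester is the assignee", satisfied: req.requester === p.assignee },
    { premise: "requested action matches", satisfied: req.action === p.action },
    { premise: "requested target matches", satisfied: req.target === p.target },
    {
      premise: "temporal constraint holds at the given current time",
      satisfied:
        !p.notAfter ||
        new Date(sotw.currentTime).getTime() <= new Date(p.notAfter).getTime(),
    },
  ];
  return {
    rule: p.rule,
    activationState: checks.every((c) => c.satisfied) ? "Active" : "Inactive",
    checks,
  };
}

// Example: a carrier requests read access before the policy's deadline.
const report = evaluate(
  {
    rule: "https://example.org/policies/p1#permission",
    assignee: "https://carrier.example/profile/card#me",
    action: "read",
    target: "https://pod.example/shipments/shp-42/events/",
    notAfter: "2024-06-01T00:00:00Z",
  },
  {
    requester: "https://carrier.example/profile/card#me",
    action: "read",
    target: "https://pod.example/shipments/shp-42/events/",
  },
  { currentTime: "2024-05-13T08:30:00Z" },
);
console.log(JSON.stringify(report, null, 2));
```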

In Trustflows terms, this aims for a write-to-read pipeline where tracking events are captured with provenance and policy context, and different stakeholders consume projections at the granularity they are entitled to.
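
A rough sketch of what “projections at the granularity they are entitled to” could mean in practice: the same stored tracking event is reduced to different views per role. The roles and granularity rules below are illustrative assumptions, not part of the use case specification.

```typescript
// Minimal sketch of role-based projections over tracking events.
// Roles and granularity rules are illustrative assumptions.

interface StoredEvent {
  shipment: string;
  eventType: string;                   // e.g. "Departure", "Handover", "Arrival"
  terminal: string;                    // coarse location
  geo?: { lat: number; lon: number };  // fine-grained position
  eventTime: string;
  reportedBy: string;                  // provenance: WebID of the reporting party
}

type Role = "shipper" | "terminalOperator" | "insurer";

function project(event: StoredEvent, role: Role) {
  switch (role) {
    case "terminalOperator":
      // Operational partners see the full event, including exact position.
      return event;
    case "shipper":
      // The shipper sees progress per terminal, but no exact vessel position.
      return {
        shipment: event.shipment,
        eventType: event.eventType,
        terminal: event.terminal,
        eventTime: event.eventTime,
      };
    case "insurer":
      // The insurer only learns that the milestone happened, and when.
      return { shipment: event.shipment, eventType: event.eventType, eventTime: event.eventTime };
  }
}

// Example: the same arrival event, projected for two different stakeholders.
const event: StoredEvent = {
  shipment: "https://example.org/shipments/shp-42",
  eventType: "Arrival",
  terminal: "https://example.org/terminals/inland-meerhout",
  geo: { lat: 51.13, lon: 5.08 },
  eventTime: "2024-05-14T16:00:00Z",
  reportedBy: "https://barge-operator.example/profile/card#me",
};
console.log(project(event, "shipper"));  // terminal-level progress, no geo
console.log(project(event, "insurer"));  // milestone and time only
```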