Demo showing Expanso Edge pipeline consuming O-RAN metrics and pushing to OTEL Collector on OpenShift.
This demo showcases how Expanso can collect, transform, and route O-RAN telemetry data from edge Distributed Units (DUs) running on OpenShift Single Node deployments. The pipeline transforms raw DU metrics into OpenTelemetry Protocol (OTLP) format for ingestion by any OTLP-compatible observability backend.
- Edge-native processing - Runs directly on SNO nodes with minimal footprint
- Real-time transformation - Bloblang processors normalize and enrich telemetry
- Multi-destination routing - Fan-out to multiple OTLP endpoints simultaneously
- Cloud-managed pipelines - Deploy and update pipelines via Expanso Cloud
This demo is designed for integration with Red Hat's observability stack:
- OpenShift Container Platform (Single Node deployment)
- OpenTelemetry Collector (metrics routing)
- Red Hat Observability or any OTLP-compatible backend
Expanso Edge pipelines follow an Input → Processors → Output model:
┌─────────────────────────────────────────────────────────────────┐
│ Expanso Edge Pipeline │
│ │
│ ┌─────────┐ ┌─────────────────────┐ ┌─────────────┐ │
│ │ INPUT │────▶│ PROCESSORS │────▶│ OUTPUT │ │
│ │ │ │ │ │ │ │
│ │ - file │ │ - transform │ │ - HTTP │ │
│ │ - http │ │ - filter │ │ - OTLP │ │
│ │ - kafka │ │ - enrich │ │ - Kafka │ │
│ │ - generate│ │ - validate │ │ - stdout │ │
│ └─────────┘ └─────────────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
The pipeline uses Bloblang for data transformation. Key patterns:
pipeline:
processors:
- mapping: |
# Transform raw metrics to OTLP format
let ts = (this.timestamp * 1000000000).string()
root.resourceMetrics = [{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "du-simulator"}},
{"key": "du.id", "value": {"stringValue": this.du_id}}
]
},
"scopeMetrics": [{
"scope": {"name": "du-telemetry", "version": "1.0.0"},
"metrics": [
{
"name": "du.ptp4l_offset_ns",
"unit": "ns",
"gauge": {
"dataPoints": [{"timeUnixNano": $ts, "asInt": this.ptp4l_offset_ns}]
}
}
]
}]
}]

┌──────────────────────────────────────────────────────────────────────────────┐
│ OpenShift Single Node (Edge Site) │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ O-RAN Metrics │────▶│ Expanso Edge │────▶│ OTEL Collector │ │
│ │ (DU Simulator) │ │ (Pipeline) │ │ │ │
│ └─────────────────┘ └─────────────────┘ └────────┬────────┘ │
│ INPUT PROCESSORS OUTPUT │ │
│ - transform to OTLP │ │
│ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Grafana │◀────│ Prometheus │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────────────────┘
- Input: DU simulator generates O-RAN telemetry every 5 seconds
- Processing: Bloblang transforms metrics to OTLP JSON format
- Output: HTTP POST to the OTEL Collector at /v1/metrics
- Storage: Prometheus scrapes the OTEL Collector's Prometheus exporter
- Visualization: Grafana queries Prometheus for dashboards
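To make the transformation concrete, a raw record from the DU simulator might look like the following (field names match the Bloblang mapping above; the values are illustrative):

```yaml
# Hypothetical raw DU record entering the pipeline
{"timestamp": 1717000000, "du_id": "du-001", "ptp4l_offset_ns": 42, "cpu_pct": 31.5, "prb_dl_pct": 57.0}
```

The mapping wraps this record in an OTLP resourceMetrics envelope and the output stage POSTs it to the Collector's /v1/metrics endpoint.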
The pipeline can fan-out to multiple OTLP Collectors:
┌──────────────────────────────────────────────────────────────────────────────┐
│ OpenShift Single Node (Edge Site) │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ O-RAN Metrics │────▶│ Expanso Edge │──┬─▶│ OTEL Collector │ │
│ │ (DU Simulator) │ │ (Pipeline) │ │ │ (local) │ │
│ └─────────────────┘ └─────────────────┘ │ └─────────────────┘ │
│ INPUT PROCESSORS │ │
│ │ │
└───────────────────────────────────────────────┼──────────────────────────────┘
│
│ OUTPUT (multiple destinations)
▼
┌─────────────────┐
│ OTEL Collector │
│ (external/cloud)│
└─────────────────┘
This enables:
- Local observability for on-site operators
- Central aggregation for fleet-wide visibility
- Hybrid routing based on metric criticality
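A minimal sketch of the fan-out output, assuming Expanso Edge accepts Benthos-style broker outputs (both endpoint URLs are placeholders, not the demo's actual addresses):

```yaml
output:
  broker:
    pattern: fan_out   # deliver every message to all listed outputs
    outputs:
      - http_client:
          url: http://otel-collector:4318/v1/metrics        # local Collector (placeholder)
          verb: POST
      - http_client:
          url: https://central-otel.example.com/v1/metrics  # external/cloud Collector (placeholder)
          verb: POST
```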
The demo simulates key O-RAN DU telemetry:
| Metric | Unit | Description |
|---|---|---|
| du.ptp4l_offset_ns | nanoseconds | PTP clock offset from grandmaster |
| du.cpu_pct | percent | CPU utilization on isolated cores |
| du.prb_dl_pct | percent | Physical Resource Block utilization (downlink) |
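The DU simulator can be sketched as a Benthos-style generate input emitting one record every 5 seconds. This is an illustrative assumption, not the demo's actual configuration; field names follow the metric table above:

```yaml
input:
  generate:
    interval: 5s
    mapping: |
      # Emit one synthetic DU telemetry record per interval
      root.timestamp = now().ts_unix()
      root.du_id = "du-001"
      root.ptp4l_offset_ns = random_int(min: -150, max: 150)
      root.cpu_pct = random_int(min: 0, max: 100)
      root.prb_dl_pct = random_int(min: 0, max: 100)
```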
Per O-RAN and Red Hat RAN Reference Design specifications:
| Metric | Threshold | Status |
|---|---|---|
| ptp4l offset | ±100ns | DEGRADED if exceeded |
| phc2sys offset | ±50ns | DEGRADED if exceeded |
| Lock state | != LOCKED | CRITICAL |
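The thresholds above could also be evaluated in-pipeline with a Bloblang processor. A sketch, assuming a ptp_status field name of our own choosing:

```yaml
pipeline:
  processors:
    - mapping: |
        # Flag samples that violate the ±100ns ptp4l offset threshold
        root = this
        root.ptp_status = if this.ptp4l_offset_ns.abs() > 100 {
          "DEGRADED"
        } else {
          "OK"
        }
```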
- OpenShift cluster - Single Node OpenShift (SNO) or standard cluster
- oc CLI - Logged in with permissions to create deployments, services, routes
- Expanso Cloud account - Register at cloud.expanso.io
# Verify cluster access
oc whoami
oc get nodes
# Ensure you have a project/namespace
oc project <your-project>

Create your Expanso Cloud account and set up credentials for the edge node.
- Go to cloud.expanso.io
- Create a new Network (logical grouping of edge nodes)
- Generate a bootstrap token - this authenticates the edge node
Note: The bootstrap token is a one-time credential. Store it securely.
Store the bootstrap token as a Kubernetes secret:
oc create secret generic expanso-bootstrap \
  --from-literal=token=<BOOTSTRAP_TOKEN>

Verify the secret:
oc get secret expanso-bootstrap -o yaml

Deploy the OTEL Collector, Prometheus, and Grafana:
oc apply -f observability-deployment.yaml

This creates:
- OTEL Collector - Receives OTLP metrics, exports to Prometheus format
- Prometheus - Scrapes OTEL Collector, stores time-series data
- Grafana - Visualization and dashboards
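A minimal OTEL Collector configuration consistent with this flow might look like the sketch below. The demo's actual ConfigMap may differ; port 8889 matches the Prometheus exporter referenced in Troubleshooting:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # accepts the pipeline's OTLP/HTTP POSTs
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889       # scraped by Prometheus
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```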
Verify pods are running:
oc get pods -w

Expected output:
NAME READY STATUS RESTARTS AGE
otel-collector-xxxxxxxxx-xxxxx 1/1 Running 0 30s
prometheus-xxxxxxxxx-xxxxx 1/1 Running 0 30s
grafana-xxxxxxxxx-xxxxx 1/1 Running 0 30s
Deploy the Expanso Edge agent:
oc apply -f expanso-edge-deployment.yaml

The agent will:
- Read the bootstrap token from the secret
- Register itself with Expanso Cloud
- Begin polling for pipeline configurations
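A minimal sketch of how the deployment might wire the secret into the agent. The environment variable name EXPANSO_BOOTSTRAP_TOKEN and the image are assumptions; expanso-edge-deployment.yaml contains the real manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: expanso-edge
spec:
  replicas: 1
  selector:
    matchLabels: {app: expanso-edge}
  template:
    metadata:
      labels: {app: expanso-edge}
    spec:
      containers:
        - name: expanso-edge
          image: expanso/edge:latest        # assumed image name
          env:
            - name: EXPANSO_BOOTSTRAP_TOKEN # assumed variable name
              valueFrom:
                secretKeyRef:
                  name: expanso-bootstrap
                  key: token
```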
Verify registration:
oc logs -f deployment/expanso-edge

Deploy the pipeline through Expanso Cloud:
- Go to cloud.expanso.io
- Navigate to your Network
- Click Create Pipeline
- Paste the contents of expanso-pipeline-FOR-EXPANSO-CLOUD.yaml
- Click Deploy
The pipeline will be pushed to all nodes in the network.
Access the Grafana dashboard:
# Get the route URL
oc get route grafana -o jsonpath='{.spec.host}'

- Open the URL in your browser
- Login with admin / admin
- Go to Explore → select the Prometheus datasource
- Query the metrics: du_ptp4l_offset_ns, du_cpu_pct, du_prb_dl_pct
| File | Description |
|---|---|
| observability-deployment.yaml | OTEL Collector, Prometheus, Grafana stack |
| expanso-edge-deployment.yaml | Expanso Edge agent deployment |
| expanso-pipeline-FOR-EXPANSO-CLOUD.yaml | Pipeline config (for Expanso Cloud UI) |
| expanso-pipeline-FOR-EXPANSO-CLI.yaml | Pipeline config (for CLI deployment) |
observability-deployment.yaml:
# ConfigMaps for OTEL Collector and Prometheus
# Deployments for all three services
# Services for internal communication
# Route for Grafana external access

expanso-edge-deployment.yaml:
# Deployment with bootstrap token from secret
# Service for Expanso Edge API (port 9010)
# Route for external access (optional)

# Check Expanso Edge logs
oc logs deployment/expanso-edge
# Check OTEL Collector logs
oc logs deployment/otel-collector
# Verify services are running
oc get svc

# Check Prometheus targets
oc port-forward svc/prometheus 9090:9090
# Open http://localhost:9090/targets
# Verify OTEL Collector is exporting
oc port-forward svc/otel-collector 8889:8889
curl http://localhost:8889/metrics

# Verify secret exists
oc get secret expanso-bootstrap
# Check token value (base64 encoded)
oc get secret expanso-bootstrap -o jsonpath='{.data.token}' | base64 -d

Remove all deployed resources:
oc delete -f expanso-edge-deployment.yaml
oc delete -f observability-deployment.yaml
oc delete secret expanso-bootstrap