# init-tracing-opentelemetry

A set of helpers to initialize (and more) `tracing` + `opentelemetry`: compose your own setup or use an opinionated preset.
```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Simple preset
    let _guard = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;
    //...
    Ok(())
}
```

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Custom configuration
    let _guard = init_tracing_opentelemetry::TracingConfig::default()
        .with_json_format()
        .with_stderr()
        .with_log_directives("debug")
        .init_subscriber()?;
    //...
    Ok(())
}
```
The `init_subscriber()` function returns an `OtelGuard` instance. Following the guard pattern, this struct exposes no methods but, when dropped, ensures that any pending traces/metrics are sent before the application exits. The `let _guard` binding is suggested so that Rust does not drop the guard until the application exits (a plain `let _` would drop it immediately).
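As an illustration of that pitfall, here is a minimal sketch reusing the `production()` preset from above; only the binding differs:

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Anti-pattern: `let _ = ...` drops the OtelGuard at the end of the
    // statement, flushing and shutting down telemetry immediately.
    // let _ = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;

    // Named binding: the guard lives until the end of `main`, so pending
    // traces/metrics are flushed when the application exits.
    let _guard = init_tracing_opentelemetry::TracingConfig::production().init_subscriber()?;

    tracing::info!("application started");
    Ok(())
}
```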
## Configuration Options

### Presets

- `TracingConfig::development()` - Pretty format, stderr, with debug info
- `TracingConfig::production()` - JSON format, stdout, minimal metadata
- `TracingConfig::debug()` - Full verbosity with all span events
- `TracingConfig::minimal()` - Compact format, no OpenTelemetry
- `TracingConfig::testing()` - Minimal output for tests
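One possible pattern (a sketch; `APP_ENV` is a hypothetical variable name, not something the crate reads) is to pick a preset from the runtime environment:

```rust
use init_tracing_opentelemetry::TracingConfig;

// Sketch: select a preset based on a (hypothetical) APP_ENV variable.
fn tracing_config() -> TracingConfig {
    match std::env::var("APP_ENV").as_deref() {
        Ok("production") => TracingConfig::production(),
        Ok("test") => TracingConfig::testing(),
        _ => TracingConfig::development(),
    }
}
```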
### Custom Configuration
```rust
use init_tracing_opentelemetry::TracingConfig;

TracingConfig::default()
    .with_pretty_format()         // or .with_json_format(), .with_compact_format()
    .with_stderr()                // or .with_stdout(), .with_file(path)
    .with_log_directives("debug") // Custom log levels
    .with_line_numbers(true)      // Include line numbers
    .with_thread_names(true)      // Include thread names
    .with_otel(true)              // Enable OpenTelemetry
    .init_subscriber()
    .expect("valid tracing configuration");
```
## Legacy API (deprecated)

For backward compatibility, the old API is still available:
```rust
pub fn build_loglevel_filter_layer() -> tracing_subscriber::filter::EnvFilter {
    // filter what is output on log (fmt)
    // std::env::set_var("RUST_LOG", "warn,axum_tracing_opentelemetry=info,otel=debug");
    std::env::set_var(
        "RUST_LOG",
        format!(
            // `otel::tracing` should be at level `trace` to emit opentelemetry traces & spans
            // `otel::setup` set to `debug` to log detected resources, configuration read and inferred
            "{},otel::tracing=trace,otel=debug",
            std::env::var("RUST_LOG")
                .or_else(|_| std::env::var("OTEL_LOG_LEVEL"))
                .unwrap_or_else(|_| "info".to_string())
        ),
    );
    EnvFilter::from_default_env()
}
```
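For instance, with `RUST_LOG=warn` set in the environment, the resulting filter is `warn,otel::tracing=trace,otel=debug`; with neither `RUST_LOG` nor `OTEL_LOG_LEVEL` set, it falls back to `info,otel::tracing=trace,otel=debug`.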
```rust
pub fn build_otel_layer<S>() -> Result<OpenTelemetryLayer<S, Tracer>, BoxError>
where
    S: Subscriber + for<'a> LookupSpan<'a>,
{
    use crate::{
        init_propagator, //stdio,
        otlp,
        resource::DetectResource,
    };
    let otel_rsrc = DetectResource::default()
        //.with_fallback_service_name(env!("CARGO_PKG_NAME"))
        //.with_fallback_service_version(env!("CARGO_PKG_VERSION"))
        .build();
    let otel_tracer = otlp::init_tracer(otel_rsrc, otlp::identity)?;
    // to not send traces anywhere, but continue to create and propagate them,
    // send them to `axum_tracing_opentelemetry::stdio::WriteNoWhere::default()`
    // or to `std::io::stdout()` to print
    //
    // let otel_tracer =
    //     stdio::init_tracer(otel_rsrc, stdio::identity, stdio::WriteNoWhere::default())?;
    init_propagator()?;
    Ok(tracing_opentelemetry::layer().with_tracer(otel_tracer))
}
```
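As a sketch (assuming the usual `tracing-subscriber` registry pattern, not an API of this crate), the legacy layers above can be composed into a subscriber like this:

```rust
use tracing_subscriber::layer::SubscriberExt;

fn init_subscribers() -> Result<(), Box<dyn std::error::Error>> {
    // Stack the filter and OTLP layers onto a registry, then install it
    // as the global default subscriber.
    let subscriber = tracing_subscriber::registry()
        .with(build_loglevel_filter_layer())
        .with(build_otel_layer()?);
    tracing::subscriber::set_global_default(subscriber)?;
    Ok(())
}
```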
To retrieve the current `trace_id` (e.g. to add it into an error message, as a header or attribute):
```rust
let trace_id = tracing_opentelemetry_instrumentation_sdk::find_current_trace_id();
// json!({ "error": "xxxxxx", "trace_id": trace_id })
```
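For example, a sketch (assuming the `serde_json` crate) of attaching the trace id to an error payload, as the comment above hints:

```rust
use serde_json::json;
use tracing_opentelemetry_instrumentation_sdk::find_current_trace_id;

fn error_body(message: &str) -> serde_json::Value {
    json!({
        "error": message,
        // find_current_trace_id() returns an Option; serializes to null
        // when there is no current trace
        "trace_id": find_current_trace_id(),
    })
}
```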
## Configuration based on environment variables

To ease setup and compliance with the OpenTelemetry SDK configuration, the setup can be driven by the following environment variables (see the initialization samples above):
- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`, fallback to `OTEL_EXPORTER_OTLP_ENDPOINT`, for the URL of the exporter / collector
- `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL`, fallback to `OTEL_EXPORTER_OTLP_PROTOCOL`, fallback to auto-detection based on the ENDPOINT port
- `OTEL_SERVICE_NAME` for the name of the service
- `OTEL_PROPAGATORS` for the configuration of the propagators
- `OTEL_TRACES_SAMPLER` & `OTEL_TRACES_SAMPLER_ARG` for the configuration of the sampler
A few other environment variables can also be used to configure the OTLP exporter (e.g. to configure headers, authentication, etc.):
```sh
# For gRPC:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="grpc"
export OTEL_TRACES_SAMPLER="always_on"

# For HTTP:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://127.0.0.1:4318/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
export OTEL_TRACES_SAMPLER="always_on"
```
In the context of Kubernetes, some of the above environment variables can be injected by the OpenTelemetry Operator (via `inject-sdk`):
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # to inject environment variables only by opentelemetry-operator
        instrumentation.opentelemetry.io/inject-sdk: "opentelemetry-operator/instrumentation"
        instrumentation.opentelemetry.io/container-names: "app"
    spec:
      containers:
        - name: app
```
Or, if you don't set up `inject-sdk`, you can set the environment variables manually, e.g.:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: OTEL_SERVICE_NAME
              value: "app"
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: "grpc"
            # for otel collector in `deployment` mode, use the name of the service
            # - name: OTEL_EXPORTER_OTLP_ENDPOINT
            #   value: "http://opentelemetry-collector.opentelemetry-collector:4317"
            # for otel collector in sidecar mode (implies deploying a sidecar CR per namespace)
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://localhost:4317"
            # for `daemonset` mode: need to use the local daemonset (value interpolated by k8s: `$(...)`)
            # - name: OTEL_EXPORTER_OTLP_ENDPOINT
            #   value: "http://$(HOST_IP):4317"
            # - name: HOST_IP
            #   valueFrom:
            #     fieldRef:
            #       fieldPath: status.hostIP
```
## Troubleshooting: why no trace?

- Check that you have only a single version of `opentelemetry` in your dependency tree (this check could be part of your CI/build); use `cargo-deny` or `cargo tree`:

  ```sh
  # Only one version of opentelemetry should be used,
  # else there is an issue with the setup of the global (static) provider.
  cargo tree -i opentelemetry
  ```

- Check the code of your exporter and its integration with `tracing` (as a subscriber's layer).
- Check the OpenTelemetry environment variables `OTEL_EXPORTER...` and `OTEL_TRACES_SAMPLER` (their values are logged on target `otel::setup`).
- Check that the log target `otel::tracing` enables log level `trace` (or `info` if you use the `tracing_level_info` feature), so spans are generated and sent to the OpenTelemetry collector.
## Metrics

To configure OpenTelemetry metrics, enable the `metrics` feature. This initializes a `SdkMeterProvider`, sets it globally, and adds a `MetricsLayer` that allows `tracing` events to produce metrics.

The `opentelemetry_sdk` can still be used to produce metrics as well: since the `SdkMeterProvider` is configured globally, any Axum/Tonic middleware that uses `opentelemetry::metrics` directly rather than `tracing` will also work.
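As a sketch of both paths (the field prefixes follow `tracing-opentelemetry`'s `MetricsLayer` conventions; the instrument-builder `.build()` call assumes a recent `opentelemetry` crate version):

```rust
// Via tracing events: MetricsLayer maps prefixed event fields
// (monotonic_counter., counter., histogram.) onto instruments.
tracing::info!(monotonic_counter.jobs_processed = 1u64, "job done");
tracing::info!(histogram.job_duration_ms = 42.0, "job timing");

// Via the opentelemetry API directly, using the globally configured provider:
let meter = opentelemetry::global::meter("my-app");
let counter = meter.u64_counter("jobs_processed").build();
counter.add(1, &[]);
```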
The following environment variables configure the metrics exporter (on top of those configured above):
- `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`, fallback to `OTEL_EXPORTER_OTLP_ENDPOINT`, for the URL of the exporter / collector
- `OTEL_EXPORTER_OTLP_METRICS_PROTOCOL`, fallback to `OTEL_EXPORTER_OTLP_PROTOCOL`, fallback to auto-detection based on the ENDPOINT port
- `OTEL_EXPORTER_OTLP_METRICS_TIMEOUT` to set the timeout for the connection to the exporter
- `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE` to set the temporality preference for the exporter
- `OTEL_METRIC_EXPORT_INTERVAL` to set the frequency of metrics export in milliseconds, defaults to 60s
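For example (a sketch mirroring the traces example above; values are illustrative):

```sh
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_METRICS_PROTOCOL="grpc"
# export metrics every 60s (the default)
export OTEL_METRIC_EXPORT_INTERVAL=60000
```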
## Changelog - History