
Logging

Loquent uses the tracing crate for structured logging throughout the codebase. The logging system is initialized on server boot and supports two modes: development (pretty, colored terminal output) and production (structured JSON output). Both modes can optionally forward logs to Betterstack Logs for centralized observability.

tracing macros (info!, error!, etc.)
→ tracing-subscriber registry
→ fmt::layer (pretty in dev, JSON in prod)
→ betterstack::Layer (optional, when BETTERSTACK_SOURCE_TOKEN is set)
→ Background thread with batching
→ POST to Betterstack HTTP ingestion API

The logging system is initialized in src/bases/logging.rs with logging::init() called early in main.rs.

Log level filtering is controlled by the RUST_LOG environment variable or falls back to sensible defaults.

Development mode (debug build):

RUST_LOG=debug,hyper=info,sea_orm=info,tower=info,tower_http=debug

Production mode (release build):

RUST_LOG=info,tower_http=debug

These defaults reduce noise from HTTP/DB libraries while preserving request-level observability via tower_http=debug.
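The fallback logic can be sketched with the standard library alone (the default strings come from the section above; the function name is illustrative):

```rust
// Illustrative helper: pick the RUST_LOG directive string, preferring the
// environment variable and falling back to the build-profile default.
fn log_directives() -> String {
    let default = if cfg!(debug_assertions) {
        // Debug builds: verbose app logs, quieter HTTP/DB libraries.
        "debug,hyper=info,sea_orm=info,tower=info,tower_http=debug"
    } else {
        // Release builds: info-level app logs, request tracing kept on.
        "info,tower_http=debug"
    };
    std::env::var("RUST_LOG").unwrap_or_else(|_| default.to_string())
}

fn main() {
    println!("{}", log_directives());
}
```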

When BETTERSTACK_SOURCE_TOKEN is set, logs are sent to Betterstack Logs in addition to local output.

BETTERSTACK_SOURCE_TOKEN=your_token_here
BETTERSTACK_INGESTING_HOST=in.logs.betterstack.com # optional, defaults to in.logs.betterstack.com

The betterstack::Layer spawns a background thread (betterstack-flusher) that:

  1. Receives log events via a bounded channel (capacity: 10,000)
  2. Batches events (up to 100 per request)
  3. Flushes every 2 seconds or when the batch is full
  4. POSTs JSON payloads to https://{host}/ with Bearer auth

The channel is bounded to prevent memory exhaustion: if the flusher falls behind and the channel fills up, new events are dropped silently rather than blocking the caller (load shedding).
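The drop-on-full behavior can be sketched with a std bounded channel (capacity shrunk for illustration; the real layer uses 10,000):

```rust
use std::sync::mpsc;

fn main() {
    // Bounded channel standing in for the Betterstack event queue.
    let (tx, rx) = mpsc::sync_channel::<String>(2);

    let mut dropped = 0;
    for i in 0..5 {
        // try_send never blocks: on a full queue the event is discarded.
        if tx.try_send(format!("event {i}")).is_err() {
            dropped += 1;
        }
    }

    let queued = rx.try_iter().count();
    println!("queued={queued} dropped={dropped}"); // queued=2 dropped=3
}
```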

Each event is serialized as JSON:

{
  "dt": "2026-03-07T12:34:56.789Z",
  "level": "INFO",
  "message": "Processing incoming call",
  "target": "loquent::mods::twilio::routes",
  "call_sid": "CA123...",
  "org_id": "550e8400-e29b-41d4-a716-446655440000"
}

Fields are collected from both the event itself and all ancestor spans (outermost span fields are included first, event fields take precedence).
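The merge order can be illustrated with a plain map (the real layer walks the span scope via tracing-subscriber; field names and values here are made up):

```rust
use std::collections::HashMap;

fn main() {
    // Insertion order mirrors the merge order: outermost span first,
    // then inner spans, then the event's own fields.
    let layers = [
        ("outer span", vec![("org_id", "org-1"), ("stage", "span")]),
        ("inner span", vec![("call_sid", "CA-1")]),
        ("event", vec![("stage", "event")]),
    ];

    let mut fields: HashMap<&str, &str> = HashMap::new();
    for (_scope, kvs) in layers {
        for (k, v) in kvs {
            fields.insert(k, v); // later layers override earlier ones
        }
    }

    assert_eq!(fields["stage"], "event"); // the event field wins
    assert_eq!(fields["org_id"], "org-1"); // span fields still present
}
```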

Structured fields attached to spans are automatically propagated to all events within that span:

let span = tracing::info_span!("handle_call", call_sid = %call_sid, org_id = %org_id);
let _guard = span.enter();
tracing::info!("Call started"); // Includes call_sid and org_id in Betterstack

This enables correlation across related events in Betterstack’s log explorer.

Loquent uses tower-http tracing middleware to log HTTP requests and responses. This layer is attached to the Axum router and logs at DEBUG level:

tower_http::trace::on_request → DEBUG log with method, uri, version
tower_http::trace::on_response → DEBUG log with status, latency

Set RUST_LOG=tower_http=debug to enable HTTP observability in production.

Panics from background tasks or spawned threads are routed through tracing::error! via a custom panic hook:

std::panic::set_hook(Box::new(|info| {
    tracing::error!("{info}");
}));

This ensures panics are captured in Betterstack instead of only appearing in stderr.
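The hook mechanism itself can be exercised with the standard library alone, writing into a shared buffer in place of tracing::error!:

```rust
use std::panic;
use std::sync::{Arc, Mutex};

fn main() {
    // Stand-in for the tracing sink: the hook appends panic info here.
    let captured = Arc::new(Mutex::new(String::new()));
    let sink = Arc::clone(&captured);

    panic::set_hook(Box::new(move |info| {
        sink.lock().unwrap().push_str(&info.to_string());
    }));

    // Trigger a panic on another thread; the hook still runs there.
    let _ = std::thread::spawn(|| panic!("boom")).join();

    let log = captured.lock().unwrap();
    assert!(log.contains("boom")); // panic message reached the sink
}
```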

Typical usage in application code:

use tracing::{info, warn, error};

info!("Server started on port 8080");
warn!(user_id = %user_id, "Invalid login attempt");
error!(error = %e, "Database connection failed");

let span = tracing::info_span!("process_recording", call_sid = %call_sid);
let _guard = span.enter();
info!("Downloading recording");
info!("Transcribing audio");
// Both events include call_sid in Betterstack

Use the AppError type for server-side errors — it automatically logs the error before returning:

let user = db.find_user(id).await?; // Logs DB error via AppError
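AppError's internals are not shown in this section; a minimal stand-in that logs at the conversion point (using eprintln! in place of tracing::error! to stay dependency-free, and io::Error in place of the database error type) might look like:

```rust
use std::fmt;

// Hypothetical stand-in for the real AppError type.
#[derive(Debug)]
struct AppError(String);

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

// The `?` operator goes through From, so every converted error is
// logged exactly once, at the point where it crosses into AppError.
impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self {
        eprintln!("error: {e}"); // the real type would use tracing::error!
        AppError(e.to_string())
    }
}

fn read_config() -> Result<String, AppError> {
    // Propagates (and logs) any I/O error via From<io::Error>.
    let text = std::fs::read_to_string("/nonexistent/config.toml")?;
    Ok(text)
}

fn main() {
    match read_config() {
        Ok(_) => println!("loaded"),
        Err(e) => println!("failed: {e}"),
    }
}
```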

The Betterstack layer runs in a dedicated background thread with blocking I/O (reqwest::blocking::Client). This prevents async runtime contention. Events are batched to reduce HTTP overhead (100 events per request, 2-second flush interval).
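The batching loop can be sketched with std primitives; the real thread uses reqwest::blocking for the POST, and flush() here is a placeholder. One simplification: recv_timeout resets its timer on every received event, so this is "at most 2 seconds of idle time before a flush" rather than a fixed-period timer.

```rust
use std::sync::mpsc::{sync_channel, RecvTimeoutError};
use std::time::Duration;

const BATCH_SIZE: usize = 100;
const FLUSH_INTERVAL: Duration = Duration::from_secs(2);

// Placeholder for the blocking POST to the Betterstack ingestion API.
fn flush(batch: &mut Vec<String>) {
    if batch.is_empty() {
        return;
    }
    println!("flushing {} events", batch.len());
    batch.clear();
}

fn main() {
    let (tx, rx) = sync_channel::<String>(10_000);

    let flusher = std::thread::spawn(move || {
        let mut batch: Vec<String> = Vec::new();
        loop {
            match rx.recv_timeout(FLUSH_INTERVAL) {
                Ok(event) => {
                    batch.push(event);
                    if batch.len() >= BATCH_SIZE {
                        flush(&mut batch); // batch full: send now
                    }
                }
                // Idle timeout: send whatever has accumulated.
                Err(RecvTimeoutError::Timeout) => flush(&mut batch),
                // All senders dropped: final flush, then exit.
                Err(RecvTimeoutError::Disconnected) => {
                    flush(&mut batch);
                    break;
                }
            }
        }
    });

    for i in 0..3 {
        let _ = tx.try_send(format!("event {i}"));
    }
    drop(tx);
    flusher.join().unwrap(); // prints "flushing 3 events"
}
```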

If Betterstack is unavailable, the background thread logs errors to stderr but continues processing — the app remains operational.