First-Class Structured Logging in Spring Boot
19 Feb 2026

Logging in Spring Framework projects has traditionally been treated as an afterthought: strings written to files, parsed later by humans or fragile regexes. That approach does not scale. Modern production systems ship logs to centralized backends like ELK or Graylog, correlate them with traces and metrics, and query them programmatically. For that world, plain-text logging is a liability.
Spring Boot has quietly evolved to make structured logging a first-class citizen. With built-in support for JSON log formats such as Logstash, ECS, and GELF, Spring Boot allows developers to emit machine-readable logs without custom appenders or ad-hoc conventions. Used correctly, this dramatically improves observability, debuggability, and operational confidence.
In this article I focus on what you, as a developer, can do today to improve a Spring Framework project by adopting structured logging end-to-end.
Why Structured Logging Matters in Spring Projects
Structured logging means emitting logs as data, not formatted prose. Each log entry is a JSON document with well-defined fields: timestamp, level, message, logger, thread, trace IDs, request metadata, and domain-specific attributes.
In a production Spring application, this matters for three reasons:
- Search and analytics become reliable: Instead of free-text search, you query fields (http.status:500 AND service.name:orders). This is faster, safer, and composable.
- Correlation with traces and requests works automatically: Trace IDs, span IDs, and request IDs become first-class fields instead of embedded strings. This is essential when using OpenTelemetry, Spring Cloud Sleuth (legacy), or Micrometer Tracing.
- Logs become backend-agnostic: Whether you ship to Elasticsearch, OpenSearch, Graylog, Datadog, or Loki, structured JSON is the common denominator.
Spring Boot recognizes this reality and provides standardized JSON outputs out of the box.
First-Class JSON Logging in Spring Boot
Recent Spring Boot 3.x releases (3.4 and later) ship with native support for structured JSON logging via Logback. You do not need custom encoders or third-party libraries to get started.
Enabling JSON Logging
The simplest step is configuration. In application.yaml:
logging:
  structured:
    format:
      console: ecs
      file: ecs
This enables ECS-compatible JSON logs for both console and file outputs. Supported formats include:
- ecs: Elastic Common Schema (recommended for ELK and OpenSearch)
- logstash: Classic Logstash JSON format
- gelf: Graylog Extended Log Format
Switching formats does not require code changes, which is a critical design win.
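Because the output format is pure configuration, it can also vary per environment. A minimal sketch, assuming a hypothetical graylog profile that switches the same service to GELF:

# application-graylog.yaml (hypothetical profile-specific file)
logging:
  structured:
    format:
      console: gelf
      file: gelf

The same override works through relaxed binding, for example as the environment variable LOGGING_STRUCTURED_FORMAT_CONSOLE=gelf.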
Example Output (ECS)
{
  "@timestamp": "2026-02-07T10:15:30.123Z",
  "log.level": "INFO",
  "message": "Order created",
  "service.name": "order-service",
  "event.dataset": "application",
  "trace.id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span.id": "00f067aa0ba902b7",
  "log.logger": "com.example.orders.OrderService"
}
This is not decoration. This is structured telemetry.
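Fields like service.name come from configuration, not code. A minimal sketch, assuming Spring Boot 3.4+ and the documented logging.structured.ecs.service.* properties (the values are illustrative):

spring:
  application:
    name: order-service
logging:
  structured:
    ecs:
      service:
        version: 1.4.2
        environment: production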
Improving Correlation with Requests and Traces
Spring Boot integrates logging with tracing infrastructure via MDC (Mapped Diagnostic Context). When tracing is enabled, trace and span IDs are automatically injected into log events.
What You Should Do
- Enable tracing (Micrometer Tracing / OpenTelemetry)
- Avoid manual trace ID logging
- Ensure structured output is enabled
Example dependency setup:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
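The bridge brings in Micrometer Tracing transitively. One related knob: by default only a fraction of traces is exported to the tracing backend, so while validating the setup you may want every log line's trace.id to have a matching exported trace (illustrative value; dial it back for high-traffic production):

management:
  tracing:
    sampling:
      probability: 1.0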
With this in place, every log entry emitted during an HTTP request or async operation automatically carries correlation identifiers.
This eliminates patterns like:
log.info("traceId={} processing order {}", traceId, orderId);
Instead, log the domain fact:
log.info("Processing order {}", orderId);
The trace context is already attached as structured metadata.
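The same MDC mechanism carries your own request-scoped context. A minimal sketch, assuming a servlet stack and a hypothetical X-Tenant-Id header; the MDC entry shows up as a regular field next to trace.id:

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Hypothetical filter: exposes a tenant identifier to every log line of the request.
@Component
public class TenantMdcFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String tenantId = request.getHeader("X-Tenant-Id"); // illustrative header name
        if (tenantId != null) {
            MDC.put("tenant.id", tenantId);
        }
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("tenant.id"); // always clean up: request threads are pooled
        }
    }
}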
Logging Domain Data the Right Way
A common mistake is to serialize domain objects into log messages. Structured logging gives you a better option: key-value logging.
Use Structured Arguments
The key-value style shown below uses the StructuredArguments helper from the logstash-logback-encoder library, layered on top of SLF4J and Logback.
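Spring Boot does not manage that library, so declare it explicitly (the version shown is illustrative):

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>8.0</version>
</dependency>

With it on the classpath, key-value logging looks like this: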
import static net.logstash.logback.argument.StructuredArguments.keyValue;

log.info("Order created",
        keyValue("orderId", order.getId()),
        keyValue("customerId", order.getCustomerId()),
        keyValue("totalAmount", order.getTotalAmount())
);
This produces fields, not string concatenation:
{
  "message": "Order created",
  "orderId": "A123",
  "customerId": "C456",
  "totalAmount": 149.99
}
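If you prefer not to add a dependency, SLF4J 2.x's fluent API expresses the same intent, and Spring Boot's structured formats are documented to include its key-value pairs as fields:

log.atInfo()
    .setMessage("Order created")
    .addKeyValue("orderId", order.getId())
    .addKeyValue("customerId", order.getCustomerId())
    .addKeyValue("totalAmount", order.getTotalAmount())
    .log();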
Why This Matters
- Fields are indexed independently
- Numeric values remain numeric
- Queries become deterministic
This is a direct improvement you can apply to existing codebases incrementally.
HTTP and Error Logging That Scales
Logging HTTP Requests
Do not log full request/response bodies by default. Instead, log structured metadata:
- HTTP method
- Path or route template
- Status code
- Latency
Spring Boot can already surface much of this through server access logs or filters. If you want it as application-level structured logging:
log.info("HTTP request completed",
keyValue("method", request.getMethod()),
keyValue("path", request.getRequestURI()),
keyValue("status", response.getStatus()),
keyValue("durationMs", duration)
);
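One natural home for that snippet is a servlet filter. A minimal sketch (the class name is illustrative, and mapping the raw URI to a route template would take additional work):

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import static net.logstash.logback.argument.StructuredArguments.keyValue;

// Hypothetical filter: one structured log line per completed request.
@Component
public class RequestLoggingFilter extends OncePerRequestFilter {

    private static final Logger log = LoggerFactory.getLogger(RequestLoggingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        long start = System.nanoTime();
        try {
            chain.doFilter(request, response);
        } finally {
            long durationMs = (System.nanoTime() - start) / 1_000_000;
            log.info("HTTP request completed",
                    keyValue("method", request.getMethod()),
                    keyValue("path", request.getRequestURI()),
                    keyValue("status", response.getStatus()),
                    keyValue("durationMs", durationMs));
        }
    }
}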
Logging Exceptions
Never log stack traces as formatted strings. Let the logging framework handle exceptions:
try {
    process(order);
} catch (OrderException ex) {
    log.error("Order processing failed",
            keyValue("orderId", order.getId()),
            ex
    );
    throw ex;
}
Structured logging preserves:
- Exception class
- Stack trace
- Root cause
- Correlation IDs
Backends like Elasticsearch and Graylog understand these fields natively.
Shipping Logs to ELK, Graylog, and Beyond
Structured logging pays off in any production system that ships its logs to a central backend.
With Spring Boot JSON logging:
- ELK / OpenSearch: ECS format aligns directly with index templates and dashboards.
- Graylog: GELF output avoids brittle text extractors.
- Cloud platforms: JSON logs are parsed automatically by most log agents.
The key insight: Spring Boot emits the data; shipping is an infrastructure concern. Filebeat, Fluent Bit, or platform agents consume JSON without transformation.
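As an illustration of that separation, a minimal Filebeat sketch, assuming JSON file output and illustrative paths and hosts (option names vary by Filebeat version, so treat this as a shape, not a recipe):

filebeat.inputs:
  - type: filestream
    id: order-service-logs              # illustrative id
    paths:
      - /var/log/order-service/*.log
    parsers:
      - ndjson:
          target: ""                    # lift the JSON fields to the top level
          overwrite_keys: true

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"] # illustrative host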
This separation of concerns keeps application code clean and portable.
Opinionated Guidance for Spring Developers
If you maintain Spring Framework projects in production, structured logging is no longer optional.
Do this:
- Enable JSON logging globally
- Use ECS unless you have a strong reason not to
- Log domain events, not prose
- Prefer key-value arguments over string interpolation
- Let tracing handle correlation
Avoid this:
- Custom log formats per service
- Manual trace ID handling
- Logging entire objects or payloads
- Regex-based parsing downstream
The result is a system where logs are queryable, analyzable, and trustworthy under load.
Conclusion
Spring Boot’s built-in structured logging support removes the historical friction around JSON logs. What used to require custom encoders and conventions is now a configuration toggle and a few disciplined coding practices.
For Spring Framework projects running in production and shipping logs to ELK, Graylog, or similar backends, structured logging is one of the highest-ROI improvements you can make. It directly improves incident response, debugging speed, and long-term operability, without adding architectural complexity.
Good logging is not about verbosity. It is about emitting facts in a form machines can understand. Spring Boot finally makes that the default.