Seven years is a long time in observability. Since Prometheus 2.0 landed in 2017, the ecosystem has been transformed by cloud-native adoption, the rise of distributed tracing, and the emergence of OpenTelemetry as the de facto standard for instrumentation. Prometheus 3.0, released in November 2024, is the project’s answer to that transformation — and its most significant change is the native ability to ingest OpenTelemetry metrics directly, without an intermediary collector standing in the way.
This article goes deep on what Prometheus 3.0 actually changes for platform engineers and cloud architects who are running — or planning to run — OTel-instrumented workloads alongside Prometheus-based monitoring stacks. We will cover the native OTLP ingestion endpoint, UTF-8 metric name support, Remote Write 2.0, migration considerations, and the architectural patterns that still make sense even when native OTLP is available.
What Changed in Prometheus 3.0: The OTel-Relevant Picture
Prometheus 3.0 ships a substantial set of changes. Not all of them are equally relevant to OpenTelemetry integration, so let’s focus on what actually moves the needle for OTel users before diving into each area in detail.
Native OTLP Ingestion
The flagship feature: Prometheus 3.0 ships with a built-in OTLP receiver that exposes an HTTP endpoint accepting metrics in the OpenTelemetry Protocol format. Applications instrumented with any OTel SDK can now push metrics directly to Prometheus without routing through an OpenTelemetry Collector. This is not a sidecar, not a plugin, not an external adapter — it is a first-class endpoint in the Prometheus binary itself.
UTF-8 Metric Names
Prometheus historically restricted metric names to [a-zA-Z_:][a-zA-Z0-9_:]*. OpenTelemetry uses dots and slashes in metric names by convention — http.server.request.duration is a canonical OTel metric name. Prometheus 3.0 lifts this restriction and supports arbitrary UTF-8 characters in metric names and label names, which is the single most important compatibility change for OTel interoperability.
Remote Write 2.0
Remote Write 1.0 was already snappy-compressed protobuf; Remote Write 2.0 replaces it with a redesigned message, io.prometheus.write.v2.Request, that deduplicates repeated strings via interning, carries native histograms, exemplars, and metadata in the wire format, and significantly reduces bandwidth for large-scale deployments. If you are federating metrics to Thanos, Mimir, or Cortex, this matters for operational cost.
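If you federate via remote_write, opting into the 2.0 message is a small sender-side change. A minimal sketch, assuming the receiver already understands the 2.0 message; the Mimir URL is illustrative, and the `protobuf_message` field name reflects the Prometheus 3.0 remote_write configuration as we understand it:

```yaml
# prometheus.yml — remote_write using the Remote Write 2.0 message
remote_write:
  - url: http://mimir:9009/api/v1/push   # illustrative endpoint
    # Opt in to the 2.0 protobuf message; the 1.0 message remains the default
    protobuf_message: io.prometheus.write.v2.Request
```

Roll this out receiver-first: confirm your Thanos/Mimir/Cortex version negotiates the 2.0 message before switching senders.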
New UI
The Prometheus web UI has been completely rewritten. The new UI uses React, supports metric metadata exploration, and provides a significantly improved query-building experience. This is a quality-of-life improvement rather than an architectural change, but it reduces the dependency on external tools like Grafana for ad-hoc investigation.
Breaking Changes Summary
Prometheus 3.0 removes several features that were deprecated during the 2.x series. The most operationally significant changes: long-deprecated command-line flags and legacy storage options are gone, some scrape defaults have changed, and configuration that 2.x silently accepted now fails stricter validation. We cover a migration checklist later in this article.
The OTLP Receiver: How It Works and What It Accepts
The OTLP receiver in Prometheus 3.0 is implemented as an optional feature that must be explicitly enabled. Once enabled, it exposes an HTTP endpoint at /api/v1/otlp/v1/metrics that accepts protobuf-encoded OTLP ExportMetricsServiceRequest payloads — the same wire format used by the OpenTelemetry Collector’s OTLP exporter.
What It Accepts (and What It Does Not)
This is critical to understand before you architect around native OTLP ingestion: Prometheus 3.0 OTLP support is metrics-only. It does not accept traces or logs. OTLP is a unified protocol covering all three signals, but Prometheus is a metrics store — the receiver handles only the metrics portion of the OTLP specification.
Supported metric types in the OTLP receiver:
- Gauge — maps directly to a Prometheus Gauge
- Sum (monotonic) — maps to a Prometheus Counter
- Sum (non-monotonic) — maps to a Prometheus Gauge
- Histogram (explicit bucket) — maps to a Prometheus Histogram
- ExponentialHistogram — maps to Prometheus native histograms (experimental since 2.40, still gated behind the native-histograms feature flag)
- Summary — maps to a Prometheus Summary
Resource attributes from the OTLP payload — things like service.name, k8s.pod.name, cloud.region — can be converted to Prometheus labels, and the conversion is configurable. By default, Prometheus promotes nothing: service.name (and service.namespace, if present) map to the job label, service.instance.id maps to the instance label, and every remaining resource attribute lands on the auxiliary target_info metric rather than on each series, precisely to avoid cardinality explosions. Any attribute you want as a queryable per-series label must be listed explicitly in promote_resource_attributes.
Enabling the OTLP Receiver
Enabling native OTLP ingestion requires two things: a command-line flag and a configuration block in prometheus.yml.

Start the Prometheus binary with the receiver flag (Prometheus 3.0 replaced the 2.x --enable-feature=otlp-write-receiver feature flag with a dedicated flag):

prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --web.enable-otlp-receiver
Then add the OTLP receiver configuration to your prometheus.yml:
# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

otlp:
  # Promote these OTLP resource attributes to Prometheus labels
  promote_resource_attributes:
    - service.name
    - service.namespace
    - service.instance.id
    - k8s.namespace.name
    - k8s.pod.name
    - k8s.node.name
    - cloud.region
    - deployment.environment
With this configuration, Prometheus will listen on port 9090 (default) and accept OTLP metrics at http://<prometheus-host>:9090/api/v1/otlp/v1/metrics.
Resource Attribute Promotion Strategy
The promote_resource_attributes list deserves careful thought. OTLP carries rich resource-level context — every metric payload includes a ResourceMetrics object with attributes describing the source: service name, version, environment, Kubernetes pod, node, cluster, cloud provider details, and more. Prometheus labels are flat key-value pairs on each time series. Promoting too many resource attributes explodes cardinality; promoting too few loses important context.
A pragmatic starting list for Kubernetes deployments:
otlp:
  promote_resource_attributes:
    - service.name            # Critical: identifies the service
    - service.namespace       # Logical grouping
    - deployment.environment  # prod/staging/dev
    - k8s.namespace.name      # Kubernetes namespace
    - k8s.pod.name            # Pod-level cardinality — consider omitting at high scale
    - k8s.node.name           # Useful for infrastructure correlation
Avoid blindly promoting k8s.pod.name at scale — in a cluster with thousands of short-lived pods, this creates significant cardinality pressure. Prefer service.name and service.namespace for most alerting use cases, reserving pod-level labels for debugging dashboards.
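The cardinality pressure is easy to quantify with back-of-envelope arithmetic. A minimal sketch (the workload numbers are hypothetical): one explicit-bucket histogram with 12 buckets, emitted by 50 services, where each unique label set also produces _sum and _count series:

```python
def series_estimate(services: int, pods_per_service: int, buckets: int) -> int:
    """Rough series count for one explicit-bucket histogram metric.

    Each unique label set produces one series per bucket,
    plus the _sum and _count series.
    """
    return services * pods_per_service * (buckets + 2)

# Promoting k8s.pod.name: every pod becomes its own label set.
with_pod_label = series_estimate(services=50, pods_per_service=40, buckets=12)
# Without the pod label, series collapse to one label set per service.
without_pod_label = series_estimate(services=50, pods_per_service=1, buckets=12)

print(with_pod_label, without_pod_label)  # 28000 700
```

A 40x multiplier from a single promoted attribute, for one metric, before pod churn is even considered.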
UTF-8 Metric Names: Why This Is the Real Game-Changer
To appreciate why UTF-8 metric name support matters so much, you need to understand the friction it eliminates. OpenTelemetry semantic conventions define metric names using dots as namespace separators. The canonical HTTP server duration metric is http.server.request.duration. The canonical database query duration is db.client.operation.duration. These names are standardized across languages and frameworks — your Go service and your Java service and your Python service all emit the same metric name when instrumented with OTel.
Prometheus 2.x could not store these names. The dots are illegal characters in Prometheus metric naming. Every OTel-to-Prometheus bridge — the OpenTelemetry Collector’s Prometheus exporter, prom-client compatibility layers, the older prometheusremotewrite exporter — had to translate these names, typically by replacing dots with underscores: http_server_request_duration.
This translation is lossy and creates multiple problems:
- Name collisions: http.server.request_duration and http.server.request.duration both become http_server_request_duration
- Dashboard breakage: Grafana dashboards built against OTel semantic conventions don’t work against translated Prometheus metrics without modification
- Cross-signal correlation: trace attributes use dot notation; when metric names differ, automated correlation tools lose the thread
- Vendor lock-in pressure: teams end up maintaining separate naming conventions for “Prometheus metrics” vs “OTel metrics”
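The collision in the first bullet falls directly out of the substitution rule the legacy bridges applied. A minimal sketch of that rule (the function name is ours; the character set is Prometheus's legacy one):

```python
import re

def legacy_sanitize(name: str) -> str:
    # The classic bridge rule: any character outside the legacy
    # Prometheus name set [a-zA-Z0-9_:] becomes an underscore.
    return re.sub(r"[^a-zA-Z0-9_:]", "_", name)

# Two distinct OTel names collapse to the same Prometheus name:
print(legacy_sanitize("http.server.request_duration"))  # http_server_request_duration
print(legacy_sanitize("http.server.request.duration"))  # http_server_request_duration
```

Once collapsed, there is no way to recover which source metric a series came from.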
Prometheus 3.0 with UTF-8 support stores http.server.request.duration natively. No translation. No collision. The metric name you instrument with is the metric name you query.
Enabling UTF-8 Metric Names
In Prometheus 3.0, UTF-8 metric and label names are enabled by default; no feature flag is required. (Prometheus 2.x gated early support behind --enable-feature=utf8-names.) If you need the old behavior for compatibility with downstream systems that cannot handle UTF-8 names, the global metric_name_validation_scheme setting can be switched from utf8 back to legacy.

PromQL queries must use quoted metric names when the name contains characters outside the legacy character set:
# Legacy metric name — unquoted works fine
http_server_requests_total
# OTel metric name with dots — requires quoting in PromQL
{"__name__"="http.server.request.duration"}
# Or using the new quoted-selector syntax in Prometheus 3.0
{"http.server.request.duration", service_name="api-gateway"}
The PromQL parser in Prometheus 3.0 has been updated to handle quoted metric names as a first-class construct. Grafana’s PromQL engine has also been updated to handle this syntax — verify your Grafana version (10.3+ has full support) before deploying.
OTel SDK to Prometheus 3.0 Directly: No Collector Required
For teams that only need to get application metrics into Prometheus, native OTLP ingestion enables a dramatically simpler architecture. Here’s what it looks like with different OTel SDKs.
Go (OpenTelemetry SDK)
package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
    "go.opentelemetry.io/otel/sdk/metric"
    "go.opentelemetry.io/otel/sdk/resource"
    semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

func initMetrics(ctx context.Context) (*metric.MeterProvider, error) {
    res, err := resource.New(ctx,
        resource.WithAttributes(
            semconv.ServiceName("my-api"),
            semconv.ServiceNamespace("platform"),
            semconv.DeploymentEnvironment("production"),
        ),
    )
    if err != nil {
        return nil, err
    }

    // Point directly at the Prometheus 3.0 OTLP endpoint
    exporter, err := otlpmetrichttp.New(ctx,
        otlpmetrichttp.WithEndpoint("prometheus:9090"),
        otlpmetrichttp.WithURLPath("/api/v1/otlp/v1/metrics"),
        otlpmetrichttp.WithInsecure(), // use WithTLSClientConfig for production
    )
    if err != nil {
        return nil, err
    }

    provider := metric.NewMeterProvider(
        metric.WithResource(res),
        metric.WithReader(
            metric.NewPeriodicReader(exporter,
                metric.WithInterval(30*time.Second),
            ),
        ),
    )
    otel.SetMeterProvider(provider)
    return provider, nil
}
Python (OpenTelemetry SDK)
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_NAMESPACE

resource = Resource.create({
    SERVICE_NAME: "my-api",
    SERVICE_NAMESPACE: "platform",
    "deployment.environment": "production",
})

exporter = OTLPMetricExporter(
    endpoint="http://prometheus:9090/api/v1/otlp/v1/metrics",
)
reader = PeriodicExportingMetricReader(
    exporter,
    export_interval_millis=30_000,
)
provider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(provider)

# Use the meter
meter = metrics.get_meter("my-api")
request_counter = meter.create_counter(
    name="http.server.request.count",
    description="Total HTTP server requests",
    unit="1",
)
request_duration = meter.create_histogram(
    name="http.server.request.duration",
    description="HTTP server request duration",
    unit="s",
)
Java (OpenTelemetry SDK with Spring Boot)
# application.properties (Spring Boot with OTel auto-instrumentation)
otel.service.name=my-api
otel.resource.attributes=service.namespace=platform,deployment.environment=production
# Configure OTLP exporter to push directly to Prometheus
otel.metrics.exporter=otlp
otel.exporter.otlp.metrics.endpoint=http://prometheus:9090/api/v1/otlp/v1/metrics
otel.exporter.otlp.metrics.protocol=http/protobuf
# Export interval
otel.metric.export.interval=30000
With Spring Boot and the OTel Java agent, no code changes are required beyond configuration — the agent instruments your HTTP server, database clients, and messaging systems automatically and pushes metrics using the names defined in OTel semantic conventions.
OTel Collector to Prometheus 3.0: When You Need the Intermediary
Native OTLP ingestion is compelling, but the OpenTelemetry Collector remains relevant for a significant set of use cases. Understanding when each pattern is appropriate is the core architectural decision you will face when adopting Prometheus 3.0 in an OTel environment.
Pattern 1: OTel Collector as Fan-Out Gateway
When you need to send metrics to multiple backends simultaneously — Prometheus for alerting, a long-term store like Thanos for historical analysis, and a commercial observability platform for full-stack correlation — the OTel Collector handles fan-out efficiently. Applications push once to the Collector; the Collector distributes to all backends.
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 10s
    send_batch_size: 1000
  memory_limiter:
    check_interval: 1s
    limit_mib: 512

exporters:
  # Push to Prometheus 3.0 via OTLP
  otlphttp/prometheus:
    endpoint: http://prometheus:9090/api/v1/otlp
    tls:
      insecure: true
  # Fan-out to Thanos via remote_write
  prometheusremotewrite/thanos:
    endpoint: http://thanos-receive:10908/api/v1/receive
    resource_to_telemetry_conversion:
      enabled: true
  # Fan-out to commercial backend
  otlp/datadog:
    endpoint: https://otel-intake.datadoghq.com
    headers:
      DD-API-KEY: "${DD_API_KEY}"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/prometheus, prometheusremotewrite/thanos, otlp/datadog]
Pattern 2: Collector for Metric Transformation
The OTel Collector’s transform processor and metricstransform processor allow you to reshape metrics before they reach Prometheus: rename labels, add static attributes, filter out high-cardinality series, aggregate metrics to reduce storage cost, or apply unit conversions. These operations are not available in Prometheus’s native OTLP receiver.
processors:
  transform/metrics:
    metric_statements:
      # attributes live on data points, so use the datapoint context
      - context: datapoint
        statements:
          # Drop attributes matching internal.*
          - delete_matching_keys(attributes, "internal.*")
          # Normalize environment label values
          - set(attributes["deployment.environment"], "prod") where attributes["deployment.environment"] == "production"
  filter/drop_debug:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - '.*\.debug\..*'
          - 'runtime\.go\.internal\..*'
  metricstransform:
    transforms:
      # Rename a metric to match your existing Prometheus naming convention
      - include: http.server.request.duration
        action: update
        new_name: http_server_request_duration_seconds
Pattern 3: Collector for Traces and Logs (Always Required)
If your architecture includes traces and logs alongside metrics — and in 2025 it almost certainly does — you need an OTel Collector regardless of what you do with metrics. Prometheus does not accept traces or logs. Jaeger, Tempo, and Loki all have their own ingestion protocols. The Collector is the universal routing layer for the three pillars of observability.
In this architecture, it is usually simpler to route all three signals through the Collector and let it push metrics to Prometheus via OTLP or remote_write, rather than sending metrics directly to Prometheus while traces and logs take a separate path through the Collector.
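A sketch of that single-Collector layout; the Tempo and Loki endpoints are illustrative, and the pipeline is deliberately minimal (no processors shown):

```yaml
# otel-collector-config.yaml — all three signals through one Collector
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlphttp/prometheus:   # metrics → Prometheus 3.0 native OTLP
    endpoint: http://prometheus:9090/api/v1/otlp
  otlp/tempo:            # traces → Tempo (illustrative endpoint)
    endpoint: tempo:4317
  otlphttp/loki:         # logs → Loki (illustrative endpoint)
    endpoint: http://loki:3100/otlp

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/prometheus]
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo]
    logs:
      receivers: [otlp]
      exporters: [otlphttp/loki]
```

Applications configure exactly one OTLP endpoint (the Collector); backend choices become a Collector config concern rather than an application concern.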
When to Use Native OTLP vs. OTel Collector: Decision Framework
| Scenario | Native OTLP | OTel Collector |
|---|---|---|
| Single metrics backend (Prometheus only) | Preferred | Overkill |
| Multiple metrics backends | Not suitable | Preferred (fan-out) |