Grafana Tempo 2.8 release: memory improvements, new TraceQL features, and more

2025-06-12 6 min

Grafana Tempo 2.8 is officially here, delivering new TraceQL features, performance improvements, and bug fixes, as well as some breaking changes. 

Watch the video below to learn more about the TraceQL features, or continue reading to get a quick overview of these and other updates. If you’re looking for something more in-depth for all of the changes that happened in this release, head over to the Grafana Tempo 2.8 release notes or the changelog.

TraceQL features

No Tempo release is complete without some new TraceQL features. Here are a few we’re excited to share from the 2.8 release. If you want to run the same queries you see in this post, you can go to our local Docker Compose example and follow the instructions there. Then, go to localhost:3000 and click on Explore, which should take you to Tempo.

Show most recent results

By default, TraceQL searches are non-deterministic — they return the first matching results they find, not necessarily the most recent.

A long-standing community request has been to always return the most recent results. In Tempo 2.8, we've added this behavior as a query hint. If you specifically want the latest traces, add with (most_recent=true) to your TraceQL query. You can see a few more example queries in our docs.

{ resource.service.name="article-service" } with (most_recent=true)

Ultimately we would like to make this the default TraceQL behavior, but due to regressions on certain queries, we decided to make it a query hint for now. We will re-evaluate as we continue to improve Tempo performance!
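Like other query hints, most_recent composes with ordinary filters. For example, a hypothetical query that fetches only the latest erroring traces from the demo service above might look like this:

```
{ resource.service.name="article-service" && status = error } with (most_recent=true)
```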

Search by parent span ID

If you have a parent span ID, you can now look up its child spans using the new span:parentID intrinsic.

For example, let’s look at the span ID of a list-articles span:

A screenshot showing example traces and spans.
Example traces and spans

We can make our query as follows: 

{ span:parentID = "cb3557c2873e9e07" }
A screenshot showing two children of a parent span.
Two children of span cb3557c2873e9e07

And we can see that our parent, list-articles (cb3557c2873e9e07), has two child spans: fetch-articles (1420e572dec5a4d1) and authenticate (8c3c32d2f25bb86b).

To learn more about intrinsics, please visit our technical docs.

Metrics: sum over time

In the 2.7 release, we added three new TraceQL metrics functions: avg_over_time, min_over_time, and max_over_time. This time around we’ve added sum_over_time, which lets you directly compute cumulative sums inside TraceQL, like total bytes transferred, total error counts, or resource consumption over time.

Here’s an example:

{ span:name = "foo" } | sum_over_time(span:duration)
A screenshot of the sum over time metric on span duration.
sum_over_time on span duration

Metrics: topk and bottomk

We’ve also added support for topk and bottomk functions to provide you with more control over cardinality. These behave like the functions with the same names in Prometheus, allowing you to immediately narrow your focus to the top‑ or bottom‑ranked spans in a single, efficient query.  

Imagine a query that returns far too many series to be useful, for example:

{} | rate() by (span:name)
A screenshot of query returning a lot of series.
{} | rate() by (span:name) over the last hour

We can use a function like topk(N) to better focus on the series that are interesting to us:

{} | rate() by (span:name) | topk(2)
A screenshot of query results using the topk function.
{} | rate() by (span:name) with narrowed focus using topk(2)

Similarly, you can focus on the lowest-valued series using bottomk(N):

{} | rate() by (span:name) | bottomk(10)

To see these functions in action, check out the video below, as well as the full Tempo 2.8 release video.

If you want to learn more about TraceQL metrics queries, please check out our technical docs.

And to learn more about the new TraceQL features covered in this section, take a look at our What’s new post.

Performance enhancements: compactor memory usage

One of the performance enhancements we’re most excited about with the Tempo 2.8 release relates to compactor pooling. 

We noticed a spike in compactor memory usage and an increase in the frequency of out-of-memory (OOM) errors over time. If we look at the following Grafana Pyroscope flame graph for the compactor just before an OOM, we can see that the vParquet4 row pool was holding 85% of in-use memory. This was due to overly aggressive pooling in the compactors, exposed by changes in traffic patterns that caused larger and larger traces to be stored and combined.

A screenshot of a flame graph showing compactor memory.
A flame graph for the compactor prior to an OOM

In the 2.8 release, we pool less aggressively in the compactor and rely on the garbage collector to clean up traces. This change has yielded a huge improvement in memory, cutting memory high-water marks by more than half in a high-traffic, multi-tenant install.

That said, if we continue to see issues with compactor memory, we may continue to adjust this pooling value moving forward.

A screenshot of a graph showing compactor memory.
An improvement in compactor memory usage

To explore other performance improvements in the 2.8 release, please reference the release notes.

Breaking changes

We’ve made a few breaking changes in the Tempo 2.8 release. Here, we’ll dig into some of these changes, but you can find the complete list in the release notes and upgrade guide.

Remove Tempo serverless features

In Tempo 2.7, we deprecated serverless features. In the 2.8 release, we fully removed them.

As a result, the following config options under querier.search are no longer valid. Please remove these if you are using them in your Tempo config:

querier:
  search:
    prefer_self: <int>
    external_hedge_requests_at: <duration>
    external_hedge_requests_up_to: <duration>
    external_backend: <string>
    google_cloud_run: <string>
    external_endpoints: <array>

Additionally, the following serverless-related metrics have been removed:

  • tempo_querier_external_endpoint_duration_seconds
  • tempo_querier_external_endpoint_hedged_roundtrips_total
  • tempo_feature_enabled

You can read more about this change on GitHub.

Change default http-listen-port from 80 to 3200

Users pointed out that port 80 is generally considered a privileged port, and that most Tempo configuration (such as our docker-compose examples) and packages already use port 3200. As a result, we've changed the default http-listen-port to 3200 in the code as well.
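If you relied on the old default, you can keep serving on port 80 by setting it explicitly. A minimal fragment (the rest of your config stays unchanged):

```yaml
server:
  http_listen_port: 80  # pre-2.8 default; the new default is 3200
```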

Max attribute bytes

In the Tempo configuration, you could use max_span_attr_byte to set the maximum attribute size (in bytes) for spans to avoid out-of-memory crashes. This parameter has been renamed to max_attribute_bytes.

Previously, this limit was enforced only for span and resource attributes; it now also applies to events, links, and instrumentation.
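As a sketch of the rename (the value shown is illustrative; check the configuration reference for the exact location of the parameter in your config file):

```yaml
# Tempo 2.7 and earlier:
# max_span_attr_byte: 2048

# Tempo 2.8 and later:
max_attribute_bytes: 2048
```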

What’s next in Grafana Tempo?

Looking ahead, we're now rolling out the new RF1 rearchitecture into production! As we quantify the total cost of ownership (TCO) savings, we'll start sharing them.

We’re also adding a Tempo MCP server to the query frontend so that LLM agents can leverage Tempo directly and do cool things like correcting syntax errors in queries. Stay tuned for more! 

If you are interested in hearing more about Grafana Tempo news or search progress, please join us on the Grafana Labs Community Slack channel #tempo, post a question in our community forums, reach out on X (formerly Twitter), or join our monthly Tempo community call. See you there!

And if you want to get even closer to where the magic happens, why not have a look at our open positions at Grafana Labs? 

The easiest way to get started with Grafana Tempo is with Grafana Cloud, and our free forever tier now includes 50GB of traces along with 50GB of logs and 10K series of metrics. Sign up today for free!