Releases: timescale/timescaledb
2.23.0 (2025-10-29)
This release contains performance improvements and bug fixes since the 2.22.1 release. We recommend that you upgrade at the next available opportunity.
Highlighted features in TimescaleDB v2.23.0
- This release introduces full PostgreSQL 18 support for all existing features. TimescaleDB v2.23 is available for PostgreSQL 15, 16, 17, and 18.
- UUIDv7 compression is now enabled by default on the columnstore. This feature was shipped in v2.22.0. It saves you at least 30% of storage and delivers ~2× faster query performance with UUIDv7 columns in the filter conditions.
- Added the ability to set hypertables to unlogged, addressing an open community request #836. This allows the tradeoff between durability and performance, with the latter being favourable for larger imports.
- Set-returning functions are now allowed in continuous aggregates, addressing a long-standing blocker raised by the community in #1717.
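For the unlogged-hypertable feature above, a minimal sketch of the durability-for-speed tradeoff, assuming the standard PostgreSQL `SET UNLOGGED`/`SET LOGGED` syntax applies to hypertables as the feature suggests (table, column, and file names are illustrative):

```sql
-- Illustrative sketch only: names and the exact syntax for hypertables
-- are assumptions based on standard PostgreSQL behaviour.
CREATE TABLE sensor_data (
  ts        timestamptz NOT NULL,
  device_id int,
  reading   double precision
);
SELECT create_hypertable('sensor_data', 'ts');

ALTER TABLE sensor_data SET UNLOGGED;           -- skip WAL during a large import
COPY sensor_data FROM '/tmp/backfill.csv' CSV;  -- bulk load runs faster
ALTER TABLE sensor_data SET LOGGED;             -- restore durability afterwards
```

Note that unlogged tables are truncated after a crash, so the unlogged window should be kept to the import itself.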
PostgreSQL 15 deprecation announcement
We will continue supporting PostgreSQL 15 until June 2026. Closer to that time, we will announce the specific TimescaleDB version in which PostgreSQL 15 support will not be included going forward.
Features
- #8373 More precise estimates of row numbers for columnar storage based on Postgres statistics.
- #8581 Allow mixing Postgres and TimescaleDB options in `ALTER TABLE SET`.
- #8582 Make `partition_column` in `CREATE TABLE WITH` optional.
- #8588 Automatically create a columnstore policy when a hypertable with columnstore enabled is created via a `CREATE TABLE WITH` statement.
- #8606 Add job history config parameters for maximum successes and failures to keep for each job.
- #8632 Remove `ChunkDispatch` custom node.
- #8637 Add `INSERT` support for direct compress.
- #8661 Allow `ALTER TABLE ONLY` to change `reloptions` so that setting changes apply only to future chunks.
- #8703 Allow set-returning functions in continuous aggregates.
- #8734 Support direct compress when inserting into a chunk.
- #8741 Add support for unlogged hypertables.
- #8769 Remove continuous aggregate invalidation trigger.
- #8798 Enable UUIDv7 compression by default.
- #8804 Remove `insert_blocker` trigger.
Bugfixes
- #8561 Show warning when direct compress is skipped due to triggers or unique constraints.
- #8567 Do not require a job to have executed to show status.
- #8654 Fix `approximate_row_count` for compressed chunks.
- #8704 Fix direct `DELETE` on compressed chunk.
- #8728 Don't block dropping hypertables with other objects.
- #8735 Fix `ColumnarScan` for `UNION` queries.
- #8739 Fix cached utility statements.
- #8742 Fix a potential internal program error when grouping by `bool` columns of a compressed hypertable.
- #8743 Modify schedule interval for job history pruning.
- #8746 Support show/drop chunks with UUIDv7 partitioning.
- #8753 Allow sorts over decompressed index scans for `ChunkAppend`.
- #8758 Improve error message on catalog version mismatch.
- #8774 Add GUC for WAL-based invalidation of continuous aggregates.
- #8782 Stop sparse index from allowing multiple options.
- #8799 Set `next_start` for `WITH` clause compression policy.
- #8807 Only warn instead of failing compression if bloom filter indexes are configured but disabled with a GUC.
GUCs
- `cagg_processing_wal_batch_size`: Batch size when processing WAL entries.
- `enable_cagg_wal_based_invalidation`: Enable experimental invalidations for continuous aggregates using WAL.
- `enable_direct_compress_insert`: Enable direct compression during `INSERT`.
- `enable_direct_compress_insert_client_sorted`: Enable direct compress `INSERT` with presorted data.
- `enable_direct_compress_insert_sort_batches`: Enable batch sorting during direct compress `INSERT`.
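The direct compress `INSERT` GUCs are session-settable. A minimal sketch of enabling the tech-preview path for one session, assuming these GUCs carry the usual `timescaledb.` prefix:

```sql
-- Sketch: enable direct compression for INSERTs in this session.
-- The timescaledb. prefix is assumed from TimescaleDB's GUC convention.
SET timescaledb.enable_direct_compress_insert = on;
-- Only if the client already sends rows sorted on the compression order:
SET timescaledb.enable_direct_compress_insert_client_sorted = on;
```

Since this is a tech-preview feature, testing it in a session before changing any system-wide configuration is the safer path.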
Thanks
- @brandonpurcell-dev for highlighting issues with `show_chunks()` and UUIDv7 partitioning
- @moodgorning for reporting an issue with the `timescaledb_information.job_stats` view
- @ruideyllot for reporting set-returning functions not working in continuous aggregates
- @t-aistleitner for reporting an issue with utility statements in plpgsql functions
2.22.1 (2025-09-30)
This release contains performance improvements and bug fixes since the 2.22.0 release. We recommend that you upgrade at the next available opportunity.
This release blocks the creation of new concurrent refresh policies on hierarchical continuous aggregates, because in rare cases a deadlock can occur. Concurrent refresh policies were introduced in 2.21.0 and allow users to define multiple time ranges to refresh on the same continuous aggregate, e.g. data from the last hour in one policy and the last day in a second policy. Existing concurrent refresh policies on hierarchical continuous aggregates will continue to be executed. To avoid any potential deadlock, remove such existing policies and create a single new policy covering the full range of the continuous aggregate you want to refresh, as follows:
-- Find the job IDs of the concurrent refresh policies
SELECT * FROM timescaledb_information.jobs WHERE proc_name = 'policy_refresh_continuous_aggregate';
-- Remove the job
SELECT delete_job(<job_id_of_concurrent_policy>);
-- Create a new policy for the hierarchical continuous aggregate
SELECT add_continuous_aggregate_policy('<name_of_materialized_view>',
  start_offset => INTERVAL '1 month',
  end_offset => INTERVAL '1 day',
  schedule_interval => INTERVAL '1 hour');
Bugfixes
- #7766 Load OSM extension in retention background worker to drop tiered chunks
- #8550 Error in gapfill with expressions over aggregates and groupby columns and out-of-order columns
- #8593 Error on change of invalidation method for continuous aggregate
- #8599 Fix attnum mismatch bug in chunk constraint checks
- #8607 Fix interrupted continuous aggregate refresh materialization phase leaving behind pending materialization ranges
- #8638 `ALTER TABLE RESET` for `orderby` settings
- #8644 Fix migration script for sparse index configuration
- #8657 Fix `CREATE TABLE WITH` when using UUIDv7 partitioning
- #8659 Don't propagate `ALTER TABLE` commands to foreign data wrapper chunks
- #8693 Compressed index not chosen for `varchar` typed `segmentby` columns
- #8707 Block concurrent refresh policies for hierarchical continuous aggregates due to potential deadlocks
Thanks
- @MKrkkl for reporting a bug in Gapfill queries with expressions over aggregates and groupby columns
- @brandonpurcell-dev for creating a test case that showed a bug in `CREATE TABLE WITH` when using UUIDv7 partitioning
- @snyrkill for reporting a bug when interrupting a continuous aggregate refresh
2.21.4 (2025-09-25)
This release contains performance improvements and bug fixes since the 2.21.3 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #8667 Fix wrong selectivity estimates uncovered by the recent Postgres minor releases 15.14, 16.10, and 17.6.
2.22.0 (2025-09-02)
Warning
We recommend holding off on upgrading to 2.22.0 if you have Continuous Aggregates in your service, and waiting for the fix in the next patch release. We apologize for the inconvenience this may have caused. We are working on a fix, which will be shipped with the upcoming 2.22.1 release. Please subscribe to this ticket, as updates will be posted there.
This release contains performance improvements and bug fixes since the 2.21.3 release.
Highlighted features in TimescaleDB v2.22.0
- Sparse indexes on compressed hypertables can now be explicitly configured via `ALTER TABLE` rather than relying only on internal heuristics. Users can define indexes on multiple columns to improve query performance for their specific workloads.
- [Tech Preview] Continuous aggregates now support the `timescaledb.invalidate_using` option, enabling invalidations to be collected either via triggers on the hypertable or directly from WAL using logical decoding. Aggregates inherit the hypertable's method if none is specified.
- UUIDv7 compression and vectorization are now supported. The compression algorithm leverages the timestamp portion for delta-delta compression while storing the random portion separately. The vectorized equality/inequality filters with bulk decompression deliver ~2× faster query performance. The feature is disabled by default (`timescaledb.enable_uuid_compression`) to simplify the downgrading experience, and will be enabled out of the box in the next minor release.
- Hypertables can now be partitioned by UUIDv7 columns, leveraging their embedded timestamps for time-based chunking. We've also added utility functions to simplify working with UUIDv7, such as generating values or extracting timestamps - e.g., `uuid_timestamp()` returns a PostgreSQL timestamp from a UUIDv7.
- SkipScan now supports multi-column indexes in not-null mode, improving performance for distinct and ordered queries across multiple keys.
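A short sketch of the UUIDv7 utilities described above; `uuid_timestamp()` is the helper named in these release notes, while the UUID literal is purely illustrative:

```sql
-- Extract the embedded creation timestamp from a UUIDv7 value
-- (the version nibble of the third group is 7).
SELECT uuid_timestamp('01890a5d-ac96-774b-bcce-b302099a8057'::uuid);

-- In 2.22.0 the UUIDv7 compression codec is off by default and can be
-- enabled per session or cluster-wide via the GUC named above:
SET timescaledb.enable_uuid_compression = on;
```

Because the timestamp lives in the UUID itself, such columns can serve directly as the partitioning dimension of a hypertable.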
Removal of the hypercore table access method
We made the decision to deprecate the hypercore table access method (TAM) with the 2.21.0 release. Hypercore TAM was an experiment, and it did not show the performance improvements we hoped for. It is removed with this release. Upgrades to 2.22.0 and higher are blocked if TAM is still in use. Since TAM's inception in 2.18.0, we learned that btrees were not the right architecture. Recent advancements in the columnstore, such as more performant backfilling, SkipScan, adding check constraints, and faster point queries, put the columnstore close to or on par with TAM without needing to store an additional index. We apologize for the inconvenience this action may cause and are here to assist you during the migration process.
Migration path
do $$
declare
relid regclass;
begin
for relid in
select cl.oid from pg_class cl
join pg_am am on (am.oid = cl.relam)
where am.amname = 'hypercore'
loop
raise notice 'converting % to heap', relid::regclass;
execute format('alter table %s set access method heap', relid);
end loop;
end
$$;
Features
- #8247 Add configurable alter settings for sparse indexes
- #8306 Add option for invalidation collection using WAL for continuous aggregates
- #8340 Improve selectivity estimates for sparse minmax indexes, so that an index scan on a table in the columnstore is chosen more often when it's beneficial.
- #8360 Continuous aggregate multi-hypertable invalidation processing
- #8364 Remove hypercore table access method
- #8371 Show available timescaledb `ALTER` options when encountering unsupported options
- #8376 Change `DecompressChunk` custom node name to `ColumnarScan`
- #8385 UUID v7 functions for testing pre PG18
- #8393 Add specialized compression for UUIDs. Best suited for UUID v7, but still works with other UUID versions. This is experimental at the moment and backward compatibility is not guaranteed.
- #8398 Set default compression settings at compress time
- #8401 Support `ALTER TABLE RESET` for compression settings
- #8414 Vectorised filtering of UUID Eq and Ne filters, plus bulk decompression of UUIDs
- #8424 Block downgrade when `orderby` setting is `NULL`
- #8454 Remove internal unused index helper functions
- #8494 Improve job stat history retention policy
- #8496 Fix dropping chunks with foreign keys
- #8505 Add support for partitioning on UUIDv7
- #8513 Support multikey SkipScan when all keys are guaranteed to be non-null
- #8514 Concurrent continuous aggregates improvements
- #8528 Add the `_timescaledb_functions.chunk_status_text` helper function
- #8529 Optimize direct compress status handling
Bugfixes
- #8422 Don't require `columnstore=false` when using the TimescaleDB Apache 2 Edition
- #8493 Change log level of `not null` constraint message
- #8500 Fix uniqueness check with generated columns and hypercore
- #8545 Fix error in LOCF/Interpolate with out-of-order and repeated columns
- #8558 Error out on bad args when processing invalidation
- #8559 Fix `timestamp out of range` using `end_offset=NULL` on CAgg refresh policy
GUCs
- `enable_multikey_skipscan`: Enable SkipScan for multiple distinct keys, default: on
- `enable_uuid_compression`: Enable UUID compression functionality, default: off
- `cagg_processing_wal_batch_size`: Batch size when processing WAL entries, default: 10000
- `cagg_processing_low_work_mem`: Low working memory limit for continuous aggregate invalidation processing, default: 38.4MB
- `cagg_processing_high_work_mem`: High working memory limit for continuous aggregate invalidation processing, default: 51.2MB
Thanks
- @CodeTherapist for reporting an issue where foreign key checks did not work after several insert statements
- @moodgorning for reporting a bug in queries with LOCF/Interpolate using out-of-order columns
- @nofalx for reporting an error when using `end_offset=NULL` on a CAgg refresh policy
- @pierreforstmann for fixing a bug that happened when dropping chunks with foreign keys
- @Zaczero for reporting a bug with `CREATE TABLE WITH` when using the TimescaleDB Apache 2 Edition
2.21.3 (2025-08-12)
This release contains performance improvements and bug fixes since the 2.21.2 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #8471 Fix MERGE behaviour with updated values
2.21.2 (2025-08-05)
This release contains performance improvements and bug fixes since the 2.21.1 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
2.21.1 (2025-07-22)
This release contains a bug fix since the 2.21.0 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #8336 Fix generic plans for foreign key checks and prepared statements
Thanks
- @CodeTherapist for reporting the issue with foreign key checks not working after several `INSERT` statements
2.21.0 (2025-07-08)
This release contains performance improvements and bug fixes since the 2.20.3 release. We recommend that you upgrade at the next available opportunity.
Highlighted features in TimescaleDB v2.21.0
- The attach & detach chunks feature allows manually adding or removing chunks from a hypertable with uncompressed chunks, similar to PostgreSQL’s partition management.
- Continued improvement of backfilling into the columnstore, achieving up to 2.5x speedup for constrained tables, by introducing caching logic that boosts throughput for writes to compressed chunks, bringing `INSERT` performance close to that of uncompressed chunks.
- Optimized `DELETE` operations on the columnstore through batch-level deletions of non-segmentby keys in the filter condition, improving performance by up to 42x in some cases, as well as reducing bloat and lowering resource usage.
- The heavy lock taken during Continuous Aggregate refresh was relaxed, enabling concurrent refreshes for non-overlapping ranges and eliminating the need for complex customer workarounds.
- [tech preview] Direct Compress is an innovative TimescaleDB feature that improves high-volume data ingestion by compressing data in memory and writing it directly to disk, reducing I/O overhead, eliminating dependency on background compression jobs, and significantly boosting insert performance.
Sunsetting of the hypercore access method
We made the decision to deprecate the hypercore table access method (TAM) with the 2.21.0 release. It was an experiment that did not show the signals we hoped for, and it will be sunsetted in TimescaleDB 2.22.0, scheduled for September 2025. Upgrading to 2.22.0 and higher will be blocked if TAM is still in use. Since TAM's inception in 2.18.0, we learned that btrees were not the right architecture. The recent advancements in the columnstore, such as more performant backfilling, SkipScan, adding check constraints, and faster point queries, put the columnstore close to or on par with TAM without the storage cost of the additional index. We apologize for the inconvenience this action may cause and are here to assist you during the migration process.
Migration path
do $$
declare
relid regclass;
begin
for relid in
select cl.oid from pg_class cl
join pg_am am on (am.oid = cl.relam)
where am.amname = 'hypercore'
loop
raise notice 'converting % to heap', relid::regclass;
execute format('alter table %s set access method heap', relid);
end loop;
end
$$;
Features
- #8081 Use JSON error code for job configuration parsing
- #8100 Support splitting compressed chunks
- #8131 Add policy to process hypertable invalidations
- #8141 Add function to process hypertable invalidations
- #8165 Reindex recompressed chunks in compression policy
- #8178 Add columnstore option to `CREATE TABLE WITH`
- #8179 Implement direct `DELETE` on non-segmentby columns
- #8182 Cache information for repeated upserts into the same compressed chunk
- #8187 Allow concurrent Continuous Aggregate refreshes
- #8191 Add option to not process hypertable invalidations
- #8196 Show deprecation warning for TAM
- #8208 Use `NULL` compression for bool batches with all null values, like the other compression algorithms
- #8223 Support for attach/detach chunk
- #8265 Set incremental Continuous Aggregate refresh policy on by default
- #8274 Allow creating concurrent continuous aggregate refresh policies
- #8314 Add support for timescaledb_lake in loader
- #8209 Add experimental support for Direct Compress of `COPY`
- #8341 Allow quick migration from hypercore TAM to (columnstore) heap
Bugfixes
- #8153 Restoring a database having NULL compressed data
- #8164 Check columns when creating new chunk from table
- #8294 The "vectorized predicate called for a null value" error for `WHERE` conditions like `x = any(null::int[])`.
- #8307 Fix missing catalog entries for bool and null compression in fresh installations
- #8323 Fix DML issue with expression indexes and BHS
GUCs
- `enable_direct_compress_copy`: Enable experimental support for direct compression during `COPY`, default: off
- `enable_direct_compress_copy_sort_batches`: Enable batch sorting during direct compress `COPY`, default: on
- `enable_direct_compress_copy_client_sorted`: Correct handling of data sorting by the user is required for this option, default: off
2.20.3 (2025-06-11)
This release contains bug fixes since the 2.20.2 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #8107 Adjust SkipScan cost for quals needing full scan of indexed data.
- #8211 Fixed dump and restore when chunk skipping is enabled.
- #8216 Fix for dropped quals bug in SkipScan.
- #8230 Fix for inserting into compressed data when vectorised check is not available.
- #8236 Fixed the snapshot handling in background workers.
Thanks
- @ikaakkola for reporting that SkipScan is slow when non-index quals do not match any tuples.
2.20.2 (2025-06-02)
This release contains bug fixes since the 2.20.1 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #8202 Fix NULL compression handling for vectorized constraint checking