8587 #8708
natalya-aksman (Member) commented on Sep 29, 2025
- release artefacts from 2.21.3 to 2.22.0
- Use active and registered snapshots only
- Fix LOCF with out-of-order columns error (#8545)
- Fix SQLSmith15 bug in distinct window func target uncovered by multikey skipscan (#8548)
- Serialize WAL-based invalidation processing
- Fix timestamp out of range in CAgg refresh policy
- Fix homebrew ci check
- Error out on bad args when processing invalidation
- Rename UUIDv7 functions
- Release 2.22.0 (#8536)
- Bump to next release version (#8572)
- Add update files according to version bump
- Fix attnum mismatch bug in chunk constraint checks (#8599)
- Error on change of invalidation method for continuous aggregate
- Fix gapfill bug when aggregates are in expressions with groupby columns (#8550)
- Improve uuid support in the vectorized pipeline (#8585)
- Fix interrupted CAgg refresh materialization phase
- Revert "Use stateful detoaster in bloom1_contains (#8300)" (#8625)
- Remove an incorrect UUID unit test (#8554)
- Enforce the OSM library load in retention policy
- Fix ALTER TABLE RESET for orderby settings
- Improve pending materializations search (#8647)
- Relax lock when processing CAgg invalidation logs
- Fix CREATE TABLE WITH when using UUID partitioning
- Skip OSM chunk when propagating ALTER TABLE commands
- Fix migration issue with sparse index conf
- Adjust test for 2.22.x backport
- TimescaleDB 2.21.4 forwardport
- Fix downgrade script for 2.21.3
- Run merge_append_partially_compressed only for PG17GE as MergeAppend was broken in PG16LE
The snapshot being used for scans needs to be active or registered so that it is pinned and cannot be invalidated from under us. postgres/postgres@8076c005
Fix second issue reported in timescale#4894.
Fix a bug in the SkipScan PG15 code that checks distinct window function targets for the possibility of using SkipScan. (timescale#8548)
If two sessions using WAL-based invalidation processing start processing the WAL at the same time, the second one will get an error rather than waiting. This is solved by adding a database object lock on the materialization table. This will not conflict with relation locks on the same table, but will allow concurrent refreshes to serialize on the invalidation processing. (cherry picked from commit bbbebbf)
When setting a refresh policy with `end_offset=>NULL` for a CAgg with a variable-sized bucket, it was erroring out when there was no data to be refreshed. This was happening because we were using the wrong util function to get the maximum value for the given type. Fixed by using the proper function to handle both fixed and variable bucket sizes. (cherry picked from commit 16d8b98)
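For illustration, a minimal sketch of the kind of policy call that triggered the error (the CAgg name and offsets here are hypothetical):

```
-- Hypothetical sketch: a refresh policy with an open-ended end_offset on a
-- CAgg with a variable-sized (e.g. monthly) bucket, which previously could
-- fail with "timestamp out of range".
SELECT add_continuous_aggregate_policy('daily_sales_cagg',
       start_offset      => INTERVAL '3 months',
       end_offset        => NULL,   -- refresh up to the type's maximum value
       schedule_interval => INTERVAL '1 hour');
```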
When calling `_timescaledb_functions.process_hypertable_invalidations` either directly or indirectly, it is necessary to check that the array contains only hypertables and to error out on any elements that are not hypertables. (cherry picked from commit 1e2a8a5)
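A minimal usage sketch of such a direct call; the argument form (`regclass[]`) and table name are assumptions, not the confirmed signature:

```
-- Hypothetical sketch: passing a relation that is not a hypertable should
-- now raise an error instead of proceeding silently.
SELECT _timescaledb_functions.process_hypertable_invalidations(
           ARRAY['my_plain_table']::regclass[]);
```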
Rename functions that work on UUIDv7 so that they have more user-friendly names and are located in the extension schema (i.e., "public"). This makes it much more convenient to use the functions in queries. The new function names are:
- generate_uuidv7()
- to_uuidv7()
- to_uuidv7_boundary()
- uuid_timestamp()
- uuid_timestamp_micros()
- uuid_version()
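For example, a quick check of the renamed functions could look like this (a sketch; generated values will differ per call):

```
-- Generate a UUIDv7, then extract its embedded timestamp and its version.
SELECT id,
       uuid_timestamp(id) AS ts,
       uuid_version(id)   AS version
FROM (SELECT generate_uuidv7() AS id) AS t;
```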
## 2.22.0 (2025-09-02)

This release contains performance improvements and bug fixes since the 2.21.3 release. We recommend that you upgrade at the next available opportunity.

**Highlighted features in TimescaleDB v2.22.0**
* Sparse indexes on compressed hypertables can now be explicitly configured via `ALTER TABLE` rather than relying only on internal heuristics. Users can define indexes on multiple columns to improve query performance for their specific workloads.
* [Tech Preview] Continuous aggregates now support the `timescaledb.invalidate_using` option, enabling invalidations to be collected either via triggers on the hypertable or directly from WAL using logical decoding. Aggregates inherit the hypertable’s method if none is specified.
* UUIDv7 compression and vectorization are now supported. The compression algorithm leverages the timestamp portion for delta-delta compression while storing the random portion separately. The vectorized equality/inequality filters with bulk decompression deliver ~2× faster query performance. The feature is disabled by default (`timescaledb.enable_uuid_compression`) to simplify the downgrading experience, and will be enabled out of the box in the next minor release.
* Hypertables can now be partitioned by UUIDv7 columns, leveraging their embedded timestamps for time-based chunking. We’ve also added utility functions to simplify working with UUIDv7, such as generating values or extracting timestamps - e.g., `uuid_timestamp()` returns a PostgreSQL timestamp from a UUIDv7.
* SkipScan now supports multi-column indexes in not-null mode, improving performance for distinct and ordered queries across multiple keys.

**Removal of the hypercore table access method**

We made the decision to deprecate the hypercore table access method (TAM) with the 2.21.0 release. Hypercore TAM was an experiment and it did not show the performance improvements we hoped for. It is removed with this release. Upgrades to 2.22.0 and higher are blocked if TAM is still in use.

Since TAM’s inception in [2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0), we learned that btrees were not the right architecture. Recent advancements in the columnstore, such as more performant backfilling, SkipScan, adding check constraints, and faster point queries, put the [columnstore](https://www.timescale.com/blog/hypercore-a-hybrid-row-storage-engine-for-real-time-analytics) close to or on par with TAM without needing to store an additional index.

We apologize for the inconvenience this action potentially causes and are here to assist you during the migration process.

Migration path

```
do $$
declare
  relid regclass;
begin
  for relid in
    select cl.oid
    from pg_class cl
    join pg_am am on (am.oid = cl.relam)
    where am.amname = 'hypercore'
  loop
    raise notice 'converting % to heap', relid::regclass;
    execute format('alter table %s set access method heap', relid);
  end loop;
end
$$;
```

**Features**
* [timescale#8247](timescale#8247) Add configurable alter settings for sparse indexes
* [timescale#8306](timescale#8306) Add option for invalidation collection using WAL for continuous aggregates
* [timescale#8340](timescale#8340) Improve selectivity estimates for sparse minmax indexes, so that an index scan on a table in the columnstore is chosen more often when it's beneficial
* [timescale#8360](timescale#8360) Continuous aggregate multi-hypertable invalidation processing
* [timescale#8364](timescale#8364) Remove hypercore table access method
* [timescale#8371](timescale#8371) Show available timescaledb `ALTER` options when encountering unsupported options
* [timescale#8376](timescale#8376) Change `DecompressChunk` custom node name to `ColumnarScan`
* [timescale#8385](timescale#8385) UUID v7 functions for testing pre PG18
* [timescale#8393](timescale#8393) Add specialized compression for UUIDs. Best suited for UUID v7, but still works with other UUID versions. This is experimental at the moment and backward compatibility is not guaranteed.
* [timescale#8398](timescale#8398) Set default compression settings at compress time
* [timescale#8401](timescale#8401) Support `ALTER TABLE RESET` for compression settings
* [timescale#8414](timescale#8414) Vectorised filtering of UUID Eq and Ne filters, plus bulk decompression of UUIDs
* [timescale#8424](timescale#8424) Block downgrade when orderby setting is `NULL`
* [timescale#8454](timescale#8454) Remove internal unused index helper functions
* [timescale#8494](timescale#8494) Improve job stat history retention policy
* [timescale#8496](timescale#8496) Fix dropping chunks with foreign keys
* [timescale#8505](timescale#8505) Add support for partitioning on UUIDv7
* [timescale#8513](timescale#8513) Support multikey SkipScan when all keys are guaranteed to be non-null
* [timescale#8514](timescale#8514) Concurrent continuous aggregates improvements
* [timescale#8528](timescale#8528) Add the `_timescaledb_functions.chunk_status_text` helper function
* [timescale#8529](timescale#8529) Optimize direct compress status handling

**Bugfixes**
* [timescale#8422](timescale#8422) Don't require `columnstore=false` when using the TimescaleDB Apache 2 Edition
* [timescale#8493](timescale#8493) Change log level of `not null` constraint message
* [timescale#8500](timescale#8500) Fix uniqueness check with generated columns and hypercore
* [timescale#8545](timescale#8545) Fix error in LOCF/Interpolate with out-of-order and repeated columns
* [timescale#8558](timescale#8558) Error out on bad args when processing invalidation
* [timescale#8559](timescale#8559) Fix `timestamp out of range` using `end_offset=NULL` on CAgg refresh policy

**GUCs**
* `enable_multikey_skipscan`: Enable SkipScan for multiple distinct keys, default: on
* `enable_uuid_compression`: Enable UUID compression functionality, default: off
* `cagg_processing_wal_batch_size`: Batch size when processing WAL entries, default: 10000
* `cagg_processing_low_work_mem`: Low working memory limit for continuous aggregate invalidation processing, default: 38.4MB
* `cagg_processing_high_work_mem`: High working memory limit for continuous aggregate invalidation processing, default: 51.2MB

**Thanks**
* @CodeTherapist for reporting an issue where foreign key checks did not work after several insert statements
* @moodgorning for reporting a bug in queries with LOCF/Interpolate using out-of-order columns
* @nofalx for reporting an error when using `end_offset=NULL` on CAgg refresh policy
* @pierreforstmann for fixing a bug that happened when dropping chunks with foreign keys
* @Zaczero for reporting a bug with CREATE TABLE WITH when using the TimescaleDB Apache 2 Edition
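As a hedged illustration of the tech-preview options above (the table and aggregate names are hypothetical, and the `'wal'` value is an assumption based on the description):

```
-- Hypothetical sketch: collect invalidations from WAL via logical decoding
-- instead of triggers on the hypertable.
CREATE MATERIALIZED VIEW daily_sales_cagg
WITH (timescaledb.continuous, timescaledb.invalidate_using = 'wal') AS
SELECT time_bucket('1 day', time) AS bucket, sum(amount) AS total
FROM sales
GROUP BY bucket;

-- UUID compression is off by default in 2.22.0; enabling it is opt-in.
SET timescaledb.enable_uuid_compression = on;
```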
After the 2.22 tag, we need to bump the release version on the release branch
(cherry picked from commit ede3c23)
Using `ALTER MATERIALIZED VIEW` to change the invalidation method for a continuous aggregate does not work, since it is necessary to remove or add the trigger on the hypertable, add or remove a slot, and sync the hypertable invalidations. (cherry picked from commit f78d327)
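A sketch of the statement that is now rejected (the view name is hypothetical):

```
-- Hypothetical sketch: changing the invalidation method after creation now
-- errors out instead of leaving triggers/slots in an inconsistent state.
ALTER MATERIALIZED VIEW daily_sales_cagg
    SET (timescaledb.invalidate_using = 'trigger');
```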
Fixes timescale#4894. Gapfill used to error out when aggregates were used in expressions with group-by columns, like `(agg + gby_var)`, and when gapfill target entries did not match the table column order. Fixed by making the group-by vars match the execution group-by vars. (timescale#8550) (cherry picked from commit 6e0e00d)
Some places did not handle UUIDs, which led to internal program errors. (cherry picked from commit 345a0f0) Backport of timescale#8585.
In timescale#8514 we improved concurrent CAgg refreshes by splitting the second transaction (invalidation processing and data materialization) into two separate transactions. But when the third transaction (data materialization) was interrupted, we would leave behind pending materialization ranges in the new metadata table `continuous_aggs_materialization_ranges`. Fixed by properly checking for the existence of pending materialization ranges and, if any exist, executing the materialization. (cherry picked from commit a0a109d)
This reverts commit a592416. Apparently the memory context reset callback is called too late, when the resource owner no longer exists. This happens in parallel workers. Reverting this pending investigation, to speed up the patch release. (timescale#8625) (cherry picked from commit da609e0)
A 4-byte varlena header cannot be unaligned in Postgres. This fixes a sanitizer failure on main, though the failure itself couldn't be reproduced locally. (cherry picked from commit 7d10797)
The OSM library doesn't get loaded when a retention policy runs from a background worker. This patch ensures that the library gets loaded by calling `GetFdwRoutineByRelId()` on the OSM chunk. (cherry picked from commit c3559d4)
Previously only the orderby column was being reset, without changing the index setting. (cherry picked from commit 099ced3)
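A hedged sketch of the now-working reset (the table name is hypothetical, and the exact option name is an assumption):

```
-- Hypothetical sketch: reset the orderby compression setting back to its
-- default; previously this did not also update the index setting.
ALTER TABLE metrics RESET (timescaledb.compress_orderby);
```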
PR timescale#8607 addressed the issue where a failed refresh left behind pending materializations (i.e., rows in `_timescaledb_catalog.continuous_aggs_materialization_ranges`). The patch searched for those pending materializations that had some overlap with the current refresh's refresh window. However, that PR used the refresh window that was passed by reference to `process_cagg_invalidations_and_refresh` and, as such, was modified by that function to match the invalidated buckets. This PR changes it to use the original refresh window (from the policy), so that it is more likely to overlap with the pending materializations (and is more deterministic, since it doesn't depend on the data being materialized). Disable-check: force-changelog-file (cherry picked from commit 9a47b63)
In timescale#8515 we made some improvements to concurrent CAgg refreshes, but an oversight kept the `ExclusiveLock` on the materialization hypertable during CAgg invalidation log processing, ending up with concurrent refreshes on non-overlapping ranges waiting for each other. Fixed by relaxing the `ExclusiveLock` to `ShareUpdateExclusiveLock`, since we only need to guarantee that the CAgg invalidation log processing transaction executes serially; the next transaction (materialization phase) must not be blocked by this or block other concurrent refreshes. (cherry picked from commit 26a2c50)
The CREATE TABLE WITH option to create hypertables didn't treat UUID as a time type accepting a chunk interval of type "Interval", leading to the wrong interval being configured. Fix this by adding the missing checks for UUID partitioning.
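A minimal sketch of the affected statement (table and column names are hypothetical; note that `partition_column` is still mandatory on the 2.22.x branch, as mentioned below):

```
-- Hypothetical sketch: a hypertable partitioned on a UUIDv7 column; the
-- chunk interval of type Interval is now accepted correctly.
CREATE TABLE events (
    id    uuid NOT NULL,
    value double precision
) WITH (
    timescaledb.hypertable,
    timescaledb.partition_column = 'id',
    timescaledb.chunk_interval   = '1 day'
);
```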
When a hypertable has tiered chunks, setting reloptions would stop working, because the ALTER TABLE would be propagated to the FDW chunk, which would error out as it doesn't support the command.
The migration script for configuration settings was materializing orderby sparse indexes into the configuration. This is wrong, since we assume all hypertables use default sparse indexing before version 2.22; the correct default value is NULL. Also adjusted the settings equality function to account for this. (cherry picked from commit 7833bf8)
In the 2.22.x branch, `partition_column` is still mandatory and can't be left out of CREATE TABLE WITH.
This release contains performance improvements and bug fixes since the 2.21.3 release. We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* [timescale#8667](timescale#8667) Fix wrong selectivity estimates uncovered by the recent Postgres minor releases 15.14, 16.10, 17.6
(cherry picked from commit 3188e46)
@dbeck, @kpan2034: please review this pull request.