Load OSM extension in retention background worker to drop tiered chunks #7766
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main    #7766      +/-   ##
==========================================
+ Coverage   82.47%   82.53%   +0.06%
==========================================
  Files         247      247
  Lines       47520    47494      -26
  Branches    12081    12078       -3
==========================================
+ Hits        39190    39201      +11
- Misses       3474     3491      +17
+ Partials     4856     4802      -54
```
Force-pushed from b24b4ec to c2d3700
@svenklemm, @antekresic: please review this pull request.
Force-pushed from c2d3700 to 176fa37
src/osm_callbacks.c (outdated)

```c
elog(LOG, "failed to load OSM extension: %s", error->message);

/* Finally, free error data */
FreeErrorData(error);
```
The transaction is borked after an error, so you need to rethrow the error here. If you want to continue processing, you need to execute the load in a sub-transaction that can fail while the main transaction continues.
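A minimal sketch of the sub-transaction approach described above, following the standard PostgreSQL `BeginInternalSubTransaction()` / `PG_TRY` pattern; `load_osm_library()` and `try_load_osm_library()` are placeholder names for illustration, not the code in this PR:

```c
#include "postgres.h"

#include "access/xact.h"
#include "utils/memutils.h"
#include "utils/resowner.h"

/* Placeholder for whatever code actually loads the OSM library and may ERROR */
extern void load_osm_library(void);

/*
 * Attempt the load inside a sub-transaction so that a failure does not leave
 * the surrounding transaction in an aborted state. Returns true on success.
 */
static bool
try_load_osm_library(void)
{
	MemoryContext oldcontext = CurrentMemoryContext;
	ResourceOwner oldowner = CurrentResourceOwner;
	bool		success = true;

	BeginInternalSubTransaction("load OSM library");

	PG_TRY();
	{
		load_osm_library();
		ReleaseCurrentSubTransaction();
	}
	PG_CATCH();
	{
		ErrorData  *error;

		/* Copy the error out of the failing context, then clear it */
		MemoryContextSwitchTo(oldcontext);
		error = CopyErrorData();
		FlushErrorState();

		/* Abort only the sub-transaction; the main transaction continues */
		RollbackAndReleaseCurrentSubTransaction();

		elog(LOG, "failed to load OSM extension: %s", error->message);
		FreeErrorData(error);
		success = false;
	}
	PG_END_TRY();

	MemoryContextSwitchTo(oldcontext);
	CurrentResourceOwner = oldowner;

	return success;
}
```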
You should probably throw the error here, though: you only try to load this library if an OSM chunk exists (i.e., OSM is being used), so if you cannot load the extension lib at that point, that is an error worth raising.
I replaced the explicit library-loading code with the `GetFdwRoutineByRelId()` call as you suggested in Slack; it works perfectly.
Force-pushed from 1f56ec5 to 01565d9
Please remove the previous commit and update the commit message/PR comment.
Force-pushed from 01565d9 to 831f7d1
The OSM library doesn't get loaded when a retention policy runs from a background worker. This patch ensures that the library gets loaded by calling `GetFdwRoutineByRelId()` on the OSM chunk.
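A hedged sketch of the approach; `ensure_osm_loaded()` and `osm_chunk_relid` are illustrative names rather than the literal patch, but the mechanism is that looking up the FDW handler loads the shared library that implements it as a side effect:

```c
#include "postgres.h"

#include "foreign/fdwapi.h"

/*
 * Illustrative helper (not the literal patch): looking up the FDW routine of
 * the OSM chunk forces PostgreSQL to load the shared library that provides
 * the FDW handler, which is what makes the OSM callbacks available to the
 * retention background worker. `osm_chunk_relid` is a placeholder name.
 */
static void
ensure_osm_loaded(Oid osm_chunk_relid)
{
	/* The returned FdwRoutine is not needed; the side effect is the load */
	(void) GetFdwRoutineByRelId(osm_chunk_relid);
}
```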
Force-pushed from 831f7d1 to e637e4c
## 2.22.1 (2025-09-30)

This release contains performance improvements and bug fixes since the [2.22.0](https://github.com/timescale/timescaledb/releases/tag/2.20.0) release. We recommend that you upgrade at the next available opportunity.

This release blocks the ability to leverage **concurrent refresh policies** in **hierarchical continuous aggregates**, as potential deadlocks can occur. If you have [concurrent refresh policies](https://docs.tigerdata.com/use-timescale/latest/continuous-aggregates/refresh-policies/#add-concurrent-refresh-policies) in **hierarchical** continuous aggregates, [please disable the jobs](https://docs.tigerdata.com/api/latest/jobs-automation/alter_job/#samples) as follows:

```
SELECT alter_job(<job_id_of_concurrent_policy>, scheduled => false);
```

**Bugfixes**
* [#7766](#7766) Load OSM extension in retention background worker to drop tiered chunks
* [#8550](#8550) Error in gapfill with expressions over aggregates and groupby columns and out-of-order columns
* [#8593](#8593) Error on change of invalidation method for continuous aggregate
* [#8599](#8599) Fix attnum mismatch bug in chunk constraint checks
* [#8607](#8607) Fix interrupted continuous aggregate refresh materialization phase leaving behind pending materialization ranges
* [#8638](#8638) `ALTER TABLE RESET` for `orderby` settings
* [#8644](#8644) Fix migration script for sparse index configuration
* [#8657](#8657) Fix `CREATE TABLE WITH` when using UUIDv7 partitioning
* [#8659](#8659) Don't propagate `ALTER TABLE` commands to foreign data wrapper chunks
* [#8693](#8693) Compressed index not chosen for `varchar` typed `segmentby` columns
* [#8707](#8707) Block concurrent refresh policies for hierarchical continuous aggregates due to potential deadlocks

**Thanks**
* @MKrkkl for reporting a bug in gapfill queries with expressions over aggregates and groupby columns
* @brandonpurcell-dev for creating a test case that showed a bug in `CREATE TABLE WITH` when using UUIDv7 partitioning
* @snyrkill for reporting a bug when interrupting a continuous aggregate refresh

---------

Signed-off-by: Philip Krauss <[email protected]>
Co-authored-by: timescale-automation <123763385+github-actions[bot]@users.noreply.github.com>
Co-authored-by: philkra <[email protected]>
Co-authored-by: Philip Krauss <[email protected]>
Co-authored-by: Iain Cox <[email protected]>
OSM sets a special callback for dropping tiered chunks that is used by timescaledb in `ts_chunk_do_drop_chunks`. But in a background worker the OSM library doesn't get loaded, so the retention policy ends up not removing tiered chunks.
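To make the failure mode concrete, here is an illustrative sketch assuming the callbacks are exposed through a PostgreSQL rendezvous variable; the name `osm_callbacks` and the struct layout are assumptions for illustration, not the actual TimescaleDB definitions:

```c
#include "postgres.h"

#include "fmgr.h"

/*
 * Hypothetical shape of the OSM callback lookup. The slot returned by
 * find_rendezvous_variable() stays NULL until the OSM library has been
 * loaded into the backend, so a background worker that never loads the
 * library gets no callback and silently skips tiered chunks.
 */
typedef struct OsmCallbacks
{
	/* hypothetical callback invoked when dropping tiered chunks */
	int			(*drop_chunks) (Oid hypertable_relid, const char *schema_name,
								const char *table_name);
} OsmCallbacks;

static OsmCallbacks *
get_osm_callbacks(void)
{
	OsmCallbacks **slot =
		(OsmCallbacks **) find_rendezvous_variable("osm_callbacks");

	/* NULL unless the OSM library has loaded and filled in the slot */
	return *slot;
}
```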
Disable-check: approval-count
(for some reason Gayathri's approval did not count)