
Conversation

@dbeck (Member) commented May 19, 2025

As reported in bug report 2912, dumping and restoring a column that uses NULL compression failed. This change fixes the issue.
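The failure can be reproduced along these lines (a sketch only; the table, column, and data are illustrative, not taken from the bug report, and assume a TimescaleDB version where all-null batches are stored with the NULL compression algorithm):

```sql
-- Hypothetical repro: a compressed hypertable where one column's batch
-- is entirely NULL, so it is stored using NULL compression.
create table metrics (ts timestamptz not null, val int);
select create_hypertable('metrics', 'ts');
alter table metrics set (timescaledb.compress);
insert into metrics values (now(), null);   -- all-NULL value batch
select compress_chunk(c) from show_chunks('metrics') c;
-- Before this fix, running pg_dump on this database and then restoring
-- it failed when the NULL-compressed batch was read back.
```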

@dbeck requested a review from antekresic May 19, 2025 16:44
@dbeck requested a review from akuzm May 19, 2025 16:44
@dbeck force-pushed the SDC-2912-squashed branch 2 times, most recently from a079629 to f2c0e98 on May 19, 2025 16:47

codecov bot commented May 19, 2025

Codecov Report

Attention: Patch coverage is 50.00000% with 4 lines in your changes missing coverage. Please review.

Project coverage is 82.33%. Comparing base (4800693) to head (a5718cb).
Report is 30 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| tsl/src/compression/algorithms/null.c | 40.00% | 1 Missing and 2 partials ⚠️ |
| tsl/test/src/compression_unit_test.c | 50.00% | 0 Missing and 1 partial ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8153      +/-   ##
==========================================
+ Coverage   82.24%   82.33%   +0.09%     
==========================================
  Files         255      255              
  Lines       47718    47689      -29     
  Branches    12026    12024       -2     
==========================================
+ Hits        39247    39267      +20     
- Misses       3655     3666      +11     
+ Partials     4816     4756      -60     

@dbeck force-pushed the SDC-2912-squashed branch from f2c0e98 to d95b7e8 on May 19, 2025 17:06
@dbeck force-pushed the SDC-2912-squashed branch from d95b7e8 to a5718cb on May 19, 2025 17:12
@dbeck added the force-auto-backport label May 19, 2025
@dbeck merged commit cf307e4 into timescale:main May 19, 2025
42 of 44 checks passed
@svenklemm added this to the v2.20.1 milestone May 21, 2025
@philkra mentioned this pull request May 27, 2025
@timescale-automation added the released-2.20.1 label Jun 2, 2025
@philkra mentioned this pull request Jul 2, 2025
philkra added a commit that referenced this pull request Jul 8, 2025
## 2.21.0 (2025-07-08)

This release contains performance improvements and bug fixes since the
2.20.3 release. We recommend that you upgrade at the next available
opportunity.

**Highlighted features in TimescaleDB v2.21.0**
* The attach & detach chunks feature allows manually adding or removing
chunks from a hypertable with uncompressed chunks, similar to
PostgreSQL’s partition management.
* Continued improvement of backfilling into the columnstore, achieving
up to 2.5x speedup for constrained tables, by introducing caching logic
that boosts throughput for writes to compressed chunks, bringing
`INSERT` performance close to that of uncompressed chunks.
* Optimized `DELETE` operations on the columnstore through batch-level
deletions of non-segmentby keys in the filter condition, improving
performance by up to 42x in some cases while also reducing bloat and
lowering resource usage.
* The heavy lock taken in Continuous Aggregate refresh was relaxed,
enabling concurrent refreshes for non-overlapping ranges and eliminating
the need for complex customer workarounds.
* [tech preview] Direct Compress is an innovative TimescaleDB feature
that improves high-volume data ingestion by compressing data in memory
and writing it directly to disk, reducing I/O overhead, eliminating
dependency on background compression jobs, and significantly boosting
insert performance.
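The relaxed Continuous Aggregate locking can be illustrated roughly as follows (a sketch; the aggregate name and time windows are made up, and `refresh_continuous_aggregate` is the standard refresh procedure):

```sql
-- With 2.21.0, refreshes over non-overlapping ranges no longer
-- serialize on a heavy lock, so these two calls can run
-- concurrently, e.g. from two separate sessions:
call refresh_continuous_aggregate('daily_summary', '2025-06-01', '2025-06-15');
call refresh_continuous_aggregate('daily_summary', '2025-06-15', '2025-07-01');
```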

**Sunsetting of the hypercore access method**
We made the decision to deprecate the hypercore table access method (TAM) with the
2.21.0 release. It was an experiment that did not show the signals we
hoped for, and it will be sunset in TimescaleDB 2.22.0, scheduled for
September 2025. Upgrading to 2.22.0 and higher will be blocked if TAM is
still in use. Since TAM's inception in
[2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0),
we learned that btrees were not the right architecture. The recent
advancements in the columnstore (more performant backfilling,
SkipScan, check constraint support, and faster point queries) put the
[columnstore](https://www.timescale.com/blog/hypercore-a-hybrid-row-storage-engine-for-real-time-analytics)
close to or on par with TAM without the storage cost of the additional
index. We apologize for any inconvenience this causes
and are here to assist you during the migration process.

**Migration path**

```sql
do $$
declare
    relid regclass;
begin
    for relid in
        select cl.oid
        from pg_class cl
        join pg_am am on am.oid = cl.relam
        where am.amname = 'hypercore'
    loop
        raise notice 'converting % to heap', relid::regclass;
        execute format('alter table %s set access method heap', relid);
    end loop;
end
$$;
```

**Features**
* [#8081](#8081) Use JSON error code for job configuration parsing
* [#8100](#8100) Support splitting compressed chunks
* [#8131](#8131) Add policy to process hypertable invalidations
* [#8141](#8141) Add function to process hypertable invalidations
* [#8165](#8165) Reindex recompressed chunks in compression policy
* [#8178](#8178) Add columnstore option to `CREATE TABLE WITH`
* [#8179](#8179) Implement direct `DELETE` on non-segmentby columns
* [#8182](#8182) Cache information for repeated upserts into the same compressed chunk
* [#8187](#8187) Allow concurrent Continuous Aggregate refreshes
* [#8191](#8191) Add option to not process hypertable invalidations
* [#8196](#8196) Show deprecation warning for TAM
* [#8208](#8208) Use `NULL` compression for bool batches with all null values, like the other compression algorithms
* [#8223](#8223) Support for attach/detach chunk
* [#8265](#8265) Set incremental Continuous Aggregate refresh policy on by default
* [#8274](#8274) Allow creating concurrent continuous aggregate refresh policies
* [#8314](#8314) Add support for timescaledb_lake in loader
* [#8209](#8209) Add experimental support for Direct Compress of `COPY`
* [#8341](#8341) Allow quick migration from hypercore TAM to (columnstore) heap

**Bugfixes**
* [#8153](#8153) Fix restoring a database containing NULL compressed data
* [#8164](#8164) Check columns when creating new chunk from table
* [#8294](#8294) Fix the "vectorized predicate called for a null value" error for `WHERE` conditions like `x = any(null::int[])`
* [#8307](#8307) Fix missing catalog entries for bool and null compression in fresh installations
* [#8323](#8323) Fix DML issue with expression indexes and BHS

**GUCs**
* `enable_direct_compress_copy`: Enable experimental support for direct
compression during `COPY`, default: off
* `enable_direct_compress_copy_sort_batches`: Enable batch sorting
during direct compress `COPY`, default: on
* `enable_direct_compress_copy_client_sorted`: Assume the client supplies
data already in the correct sort order; the user is responsible for sorting correctly, default: off
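A rough sketch of how the experimental feature might be enabled for a session (an assumption-laden example: it presumes the usual `timescaledb.` GUC prefix, and the table and file names are illustrative):

```sql
-- Enable experimental direct compression for COPY in this session.
set timescaledb.enable_direct_compress_copy = on;
-- If the client guarantees pre-sorted data, batch sorting could be
-- skipped; the user is responsible for the ordering being correct:
-- set timescaledb.enable_direct_compress_copy_client_sorted = on;
copy metrics from '/tmp/metrics.csv' with (format csv);
```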

---------

Signed-off-by: Philip Krauss <[email protected]>
Co-authored-by: philkra <[email protected]>
Co-authored-by: philkra <[email protected]>
Co-authored-by: Fabrízio de Royes Mello <[email protected]>
Co-authored-by: Anastasiia Tovpeko <[email protected]>
@timescale-automation timescale-automation added the released-2.21.0 Released in 2.21.0 label Jul 8, 2025
Labels: backported-2.20.x, force-auto-backport, released-2.20.1, released-2.21.0