[Bug] Postgres 17 query planner much slower with hypertable joins #8706

@t-aistleitner

Description

What type of bug is this?

Performance issue

What subsystems and features are affected?

Query planner

What happened?

We recently upgraded our development environment from Postgres 14 → Postgres 17 and TimescaleDB 2.16 → 2.21.

Since then, we’ve noticed that certain queries involving joins now take 2.5× longer in the planning phase compared to Postgres 14. These queries are performance-critical in our application.

I’m aware that Postgres 17 introduced major changes in the query planner, but I don’t fully understand why planning times got significantly worse for these specific cases.

In particular, we have a query that joins an array of datetimes with data from a hypertable.

  • Querying the hypertable directly performs equally well on Postgres 14 and 17.
  • However, the planning time of the join query grows with the number of chunks in the hypertable (see the chunk-count sketch below).
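For reference, the chunk count that planning time seems to scale with can be read from the standard timescaledb_information.chunks view; the query below is only an illustrative sketch, not something from our application:

-- Rough sketch: count the chunks backing the hypertable. Planning time of the
-- join query appears to grow with this number; the direct query does not.
SELECT count(*) AS chunk_count
FROM timescaledb_information.chunks
WHERE hypertable_name = 'test_performance';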

Execution plans for reference:

I executed the queries several times to see if planning time decreases.

Question: Why does Postgres 17’s new planner cause such a large increase in planning time for this type of TimescaleDB query, and is there a recommended way to mitigate this regression?

TimescaleDB version affected

2.21

PostgreSQL version used

17.15

What operating system did you use?

Rocky Linux 9

What installation method did you use?

RPM

What platform did you run on?

On prem/Self-hosted

Relevant log output and stack trace

How can we reproduce the bug?

DROP TABLE IF EXISTS test_performance CASCADE;

CREATE TABLE IF NOT EXISTS test_performance (
    id    INT       NOT NULL,
    date  TIMESTAMP NOT NULL,
    value INT       NOT NULL
);

CREATE UNIQUE INDEX test_performance_idx ON test_performance (id, date);

SELECT create_hypertable('test_performance', by_range('date', INTERVAL '1 day'), create_default_indexes => False);
ALTER TABLE test_performance SET (timescaledb.compress, timescaledb.compress_orderby = 'date DESC', timescaledb.compress_segmentby = 'id');

INSERT INTO test_performance (id, date, value)
SELECT
    id_val,
    time_val,
    (random()*100)::INT
FROM
    generate_series('2024-01-01 00:00:00'::timestamp, '2024-10-30 23:59:59'::timestamp, '10 minutes') AS time_val
CROSS JOIN
    generate_series(1, 100) AS id_val;

EXPLAIN ANALYSE
SELECT id, date
FROM unnest(array['2024-10-15 00:10'::timestamp, '2024-10-15 00:20'::timestamp]) AS test_dates
LEFT JOIN (SELECT id, date FROM test_performance WHERE id = 1 AND date >= '2024-10-15 00:10' AND date <= '2024-10-15 00:20') AS tp ON (tp.date = test_dates);
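
For comparison, the "direct" hypertable query mentioned above looks roughly like this (an illustrative sketch that reuses the same predicates as the join's subquery):

-- Sketch: the hypertable queried directly, without the unnest()/LEFT JOIN wrapper.
-- For us this plans equally fast on Postgres 14 and 17, regardless of chunk count.
EXPLAIN ANALYSE
SELECT id, date
FROM test_performance
WHERE id = 1
  AND date >= '2024-10-15 00:10'
  AND date <= '2024-10-15 00:20';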
