
Conversation


@bkchr bkchr commented Nov 6, 2025

This changes kvdb-rocksdb to force compact the DB on startup and when we are writing a lot of data. This significantly improves the read performance after doing a warp sync for example.

bkchr added 2 commits November 6, 2025 15:49
```rust
// Otherwise, rocksdb read performance goes down, e.g. after a warp sync.
if stats_total_bytes > self.config.compaction.initial_file_size as usize &&
	self.last_compaction.lock().elapsed() > Duration::from_secs(60)
```
Contributor


Any idea how long a compaction takes? 60s sounds too frequent, but that is a pure guess on my side.

Contributor


It's only triggered when we do a large write and at least 60 seconds have passed. If not much happens, we will not compact every 60 seconds. It also means that if changes to rocksdb only trickle in, this branch will never be triggered. But maybe that's okay and rocksdb can handle that case by itself; at least we saw this problem mostly with large writes.
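The throttle described above can be sketched as a small std-only example. This is not the actual kvdb-rocksdb code; `CompactionThrottle`, `should_compact`, and the 64 MiB threshold are made-up names and values illustrating the trigger condition quoted in the diff (the real code keeps the timestamp behind a `Mutex` and compares against `initial_file_size`):

```rust
use std::time::{Duration, Instant};

/// Hypothetical sketch: request a manual compaction only when a large batch
/// was written AND at least `min_interval` has passed since the previous
/// manual compaction.
struct CompactionThrottle {
    last_compaction: Option<Instant>,
    min_interval: Duration,
    size_threshold: usize,
}

impl CompactionThrottle {
    fn new(size_threshold: usize) -> Self {
        Self {
            last_compaction: None,
            min_interval: Duration::from_secs(60),
            size_threshold,
        }
    }

    /// Returns true when this write should trigger a manual compaction.
    fn should_compact(&mut self, bytes_written: usize) -> bool {
        // No previous compaction counts as "interval elapsed".
        let interval_elapsed = self
            .last_compaction
            .map_or(true, |t| t.elapsed() > self.min_interval);
        if bytes_written > self.size_threshold && interval_elapsed {
            self.last_compaction = Some(Instant::now());
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut throttle = CompactionThrottle::new(64 * 1024 * 1024);
    assert!(!throttle.should_compact(1024)); // small write: never compacts
    assert!(throttle.should_compact(128 * 1024 * 1024)); // first large write: compacts
    assert!(!throttle.should_compact(128 * 1024 * 1024)); // throttled for the next 60s
    println!("ok");
}
```

This also makes the trickle case visible: many small writes below the threshold never flip `should_compact` to true, no matter how much time passes.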

Member Author


I did not measure it, but it took less than 60s. Also, we are not really writing that much. Even if it takes longer, rocksdb hopefully handles that internally.

Contributor


> hopefully

[x] doubts

Member Author


For whoever lands here in the future, sorry :)

@michalkucharczyk
Contributor

Leaving this link here for future readers: tikv/rfcs#110
It was mentioned in an offline discussion.

Maybe worth adding to the PR description?

@bkchr
Member Author

bkchr commented Nov 7, 2025

> Leaving this link here for the future readers: tikv/rfcs#110

Not only this. RocksDB's own docs also mention that you sometimes need to run manual compaction.

@bkchr bkchr merged commit 9ecd9bd into master Nov 7, 2025
6 checks passed
@bkchr bkchr deleted the bkchr-rocksdb-force-compact branch November 7, 2025 10:47