
Master memory usage improvements (Raft LogCache GC and Global Memstore Limit) #8037

@jmeehan16

Description


While initializing the block cache on the master (#7873), @bmatican uncovered two other improvement opportunities. Both involve differentiating the master's memory limits from the tservers'. Quoting from D11171:

  • raft LogCache GC -- if a master follower falls behind, we start caching WAL entries in memory so we can quickly reply with several entries from memory. I believe this cache can grow as large as 1 GB, which is clearly too much for the master, given that we give it at most 10% of the machine's memory by default.

  • global memstore limit -- on the tserver side, we can have several RocksDB instances, each with its own memstore that independently flushes at a default size of 128 MB, but we also want a global limit across memstores, so we flush when we hit a certain total across all RocksDB instances -- by default, 10% of the process memory limit. On the master, we really only have one tablet, so the worst case is a single full 128 MB memstore vs. 10% of the process memory. Since the master's process limit is 10% of machine memory, the two break even on ~12 GB machines.
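The back-of-envelope arithmetic behind both quoted observations can be checked directly. This is a sketch using only the defaults stated in the issue text (master process limit = 10% of machine memory, global memstore limit = 10% of the process limit, 128 MB per-memstore flush size, ~1 GB LogCache):

```python
# Back-of-envelope math behind the two observations quoted above.
# All percentages and sizes are taken from the issue text, not from
# the YugabyteDB source.

def master_process_limit_mb(machine_mb: float) -> float:
    """Default master memory budget: 10% of the machine."""
    return machine_mb / 10

def global_memstore_limit_mb(machine_mb: float) -> float:
    """Default global memstore limit: 10% of the process limit."""
    return master_process_limit_mb(machine_mb) / 10

# LogCache: on a 10 GB machine, a 1 GB cache equals the master's
# entire memory budget by itself.
assert master_process_limit_mb(10 * 1024) == 1024

# Memstore break-even: the machine size where the global limit equals
# one full 128 MB memstore is 128 / (0.10 * 0.10) = 12800 MB, i.e. the
# "~12 GB machines" figure in the quote above.
break_even_mb = 128 * 10 * 10
assert global_memstore_limit_mb(break_even_mb) == 128
```

Below the break-even point, the global memstore limit is the binding constraint on the master; above it, the per-memstore 128 MB flush threshold is, which is why the two limits only need to be differentiated on smaller machines.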

These are suggested as improvements to master memory usage:

  • under failure conditions
  • in steady state for lower memory machines
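To make the LogCache point concrete, here is a minimal toy sketch (not YugabyteDB's actual implementation) of a size-bounded log cache that evicts its oldest entries once a byte budget is exceeded. This is the kind of GC behavior the first bullet asks to tighten on masters: evicted entries are not lost, since a lagging follower can always be served from the on-disk WAL.

```python
from collections import OrderedDict

class BoundedLogCache:
    """Toy size-bounded cache of Raft log entries, keyed by op index.

    Once total cached bytes exceed max_bytes, the oldest entries are
    evicted; callers fall back to reading the WAL on a miss.
    """

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.total_bytes = 0
        self.entries: "OrderedDict[int, bytes]" = OrderedDict()

    def append(self, op_index: int, payload: bytes) -> None:
        self.entries[op_index] = payload
        self.total_bytes += len(payload)
        # GC: drop the oldest entries until we are back under budget.
        while self.total_bytes > self.max_bytes and self.entries:
            _, evicted = self.entries.popitem(last=False)
            self.total_bytes -= len(evicted)

    def get(self, op_index: int):
        # None signals a cache miss (re-read the entry from the WAL).
        return self.entries.get(op_index)

# With a master-sized budget, a lagging follower can only pin a
# bounded amount of memory, no matter how far behind it falls:
cache = BoundedLogCache(max_bytes=256)
for i in range(10):
    cache.append(i, b"x" * 64)  # 640 bytes appended in total
assert cache.total_bytes <= 256
assert cache.get(0) is None      # oldest entries were GC'd
assert cache.get(9) is not None  # newest entries remain cached
```

The fix suggested in the issue amounts to choosing a much smaller budget for this kind of cache on masters than the tserver-oriented ~1 GB default.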

Labels: area/docdb (YugabyteDB core features)
