Feature description
Currently, the CPU and memory resource settings for odh-dashboard are in no way representative of real usage.
Let's start with CPU:
There is no reason for the pod to be scheduled only when 0.5 CPU is free on a worker node. A request of 200m would be more than enough. Keep in mind that we run two odh-dashboard pods, so lowering the request from 500m to 200m frees 0.3 * 2 = 0.6 CPU cores on the node for scheduling other containers and pods.
Average usage in my experience never even exceeded 200m, so the limit can be set to, let's say, 250m, just in case.
Regarding memory:
We have a relatively simple web app here; average memory usage in my monitoring never exceeded 350Mi.
So I'd say requests.memory should be changed from 1Gi to 350Mi and
limits.memory should be changed from 2Gi to 400Mi.
With these settings, in terms of QoS, we still allow for some bursting while not over-reserving (requests) too much up front.
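For reference, this is roughly what the current resources block looks like, reconstructed only from the values quoted above (0.5 CPU request, 1Gi/2Gi memory); the current CPU limit isn't quoted in this issue, so it is left out:

```yaml
# Approximate current odh-dashboard settings, based on the values quoted above.
# The existing CPU limit is not mentioned in this issue, so it is omitted here.
resources:
  limits:
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 1Gi
```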
What do you think?
Thank you, @tkolo, for pointing me to this: #1730 (comment)
To summarize, I'd propose changing the resources section of deployment.yaml for odh-dashboard to the following:
```yaml
resources:
  limits:
    cpu: 300m
    memory: 350Mi
  requests:
    cpu: 200m
    memory: 250Mi
```
and for oauth-proxy:
```yaml
resources:
  limits:
    cpu: 100m
    memory: 120Mi
  requests:
    cpu: 50m
    memory: 100Mi
```
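If anyone wants to try these values out before they land in the manifests, a strategic-merge-style patch along these lines should work (e.g. via kubectl patch or a Kustomize patch). The Deployment name and container names used below are assumptions on my part and may differ in your install:

```yaml
# Hypothetical patch for testing the proposed values on an existing install.
# Deployment and container names (odh-dashboard, oauth-proxy) are assumed and
# may not match your environment; adjust as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odh-dashboard
spec:
  template:
    spec:
      containers:
        - name: odh-dashboard
          resources:
            limits:
              cpu: 300m
              memory: 350Mi
            requests:
              cpu: 200m
              memory: 250Mi
        - name: oauth-proxy
          resources:
            limits:
              cpu: 100m
              memory: 120Mi
            requests:
              cpu: 50m
              memory: 100Mi
```

Note that this would only be a temporary workaround for testing; the operator may reconcile the values back, so the real fix belongs in the manifests.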
Describe alternatives you've considered
No response
Anything else?
No response