Summary
We're using microk8s (managed via snap) as a way for our developers to boot our application stack locally during development. We would like a way to restrict its available CPU so it can't max out all the cores on the machine.
We are essentially looking for an equivalent of minikube's --cpus / --memory startup settings: https://minikube.sigs.k8s.io/docs/faq/
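For reference, this is roughly the minikube behaviour we'd like an equivalent of (the values here are just illustrative):

```sh
# minikube can cap the whole local cluster at start time, e.g. leaving
# 4 cores of a 16-core machine free for the host:
minikube start --cpus=12 --memory=8g
```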
Why is this important?
While our application stack boots, various apps will temporarily demand lots of CPU before ultimately settling down once they've initialised. Our devs typically have machines with 16 CPU cores available - being able to restrict microk8s to a maximum of e.g. 12 of these would prevent our machines occasionally locking up while a local stack boots.
Developers will be running other applications alongside microk8s, such as IDE(s), browsers, Slack and so on - we want the machine to remain responsive/available while a local stack is booting.
Things I've already tried
I have already attempted to achieve this in a few ways, to no avail (rough versions of what I tried are sketched after this list):
- Using snap quota groups. I created a group with e.g. `cpu=10x80%`, assigned microk8s to it and restarted it. The node still showed all 16 of my cores available, and booting services still maxed out my machine's CPU. I guess this is because the quota group only restricts the microk8s processes themselves, not the containers scheduled there?
- Using the system-reserved kubelet argument. I added e.g. `--system-reserved=cpu=6` to `/var/snap/microk8s/current/args/kubelet` and restarted microk8s. Describing the node now shows a CPU capacity of 16 but an allocatable of 10. This got my hopes up but had no effect in practice; I guess "allocatable" only affects the scheduler? So I think all this does is prevent CPU requests totalling more than 10 - it doesn't stop running containers from actually using more than 10 cores.
- Using a LimitRange with a default CPU limit of e.g. 1. This isn't really workable; CPU can still max out if we have more than 16 containers spiking, and it means apps get unnecessarily bottlenecked to 1 core even when there's plenty available overall on the machine.
- Using a compute ResourceQuota. Again, this would involve setting CPU limits on all our deployments, which isn't what we want to do - we want microk8s overall to be limited, but within that limit pods should be free to burst and use CPU when they need it.
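For concreteness, this is roughly what the above attempts looked like. The group name, object names and numbers are just illustrative, and the exact `snap set-quota` syntax may differ depending on your snapd version:

```sh
# 1. snap quota group: caps the microk8s snap's own processes,
#    but apparently not the workload containers it schedules.
sudo snap set-quota microk8s-cpu --cpu=10x80% microk8s

# 2. system-reserved kubelet argument: only changes the node's
#    "allocatable" figure seen by the scheduler.
echo '--system-reserved=cpu=6' | sudo tee -a /var/snap/microk8s/current/args/kubelet
sudo snap restart microk8s

# 3. LimitRange defaulting every container to a 1-CPU limit, and
# 4. ResourceQuota capping the total of CPU limits in a namespace -
#    both rely on per-container limits rather than a whole-cluster cap.
microk8s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-limit
spec:
  limits:
    - type: Container
      default:
        cpu: "1"
      defaultRequest:
        cpu: "500m"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-quota
spec:
  hard:
    limits.cpu: "12"
EOF
```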
Are you interested in contributing to this feature?
I wouldn't know where to start, I'm afraid!