Restart console container when config changes #18411
Conversation
Use a livenessProbe to detect when the console config has changed.
@derekwaynecarr @sjenning If there was a way to directly restart a pod, we could write a controller that restarted pods that opt-in via an annotation. Restarting them based on whether the content of a secret or configmap they mounted changed. The logic is pretty simple. The same API endpoint could be used to drive restarts for static pods. What are my chances of getting such an API? |
You probably want some kind of a jitter or randomness to avoid killing all your pods at once, right? |
Just a quick note that this is really meant to be a stop-gap solution until we have something better. Also it looks like the md5sum command is not expensive at all. |
Yeah. I thought there might be some jitter built-in since the liveness probes and config map updates won't happen at the same time for all pods, but maybe that's not the case. |
Don't liveness probes already have a jitter? |
Why is a rollout not desirable? That is the official way to restart the pods in a deploymentconfig and respects all the policies we have in place about how many pods can be down at once. I agree that it is overkill, but if the application is not going to inotify watch the configmap and reload on its own, then this is other official way to pick up the change. There really is no API to the kubelet to request a pod restart. The kubelet itself has internal mechanism for doing this, such as when liveness/readiness probes fail, but there is no way for external controllers to request this. Said another way, there is no property of the pod spec that would indicate that intention, like setting the deletionTimestamp indicates the kubelet should kill the pod. Failing the liveness probe when the configmap changes is a clever (ab)use of the mechanism, I must say :P |
I wish we could roll out. The problem is this is a k8s Deployment, not a DeploymentConfig. There's no command to roll out a Deployment again if the pod spec hasn't changed. So it's either:
It doesn't feel great to ask users to do either of those things after editing console config.
Credit to @aweiteka :) |
Any reason not to use a DeploymentConfig instead of a Deployment? |
To avoid needing to migrate to a Deployment later on. |
@smarterclayton Any concerns with this change? We'd like to go ahead with it unless anyone objects. |
Do it |
Similar pattern here: #18391 |
/hold cancel @jwforres PTAL |
/lgtm but I don't think I have approver rights here |
@deads2k or @smarterclayton would you mind approving |
/approve |
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: deads2k, jwforres, spadgett The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing |
/retest |
/retest |
/test gcp |
/retest |
flake #18136 /test extended_conformance_install |
/test all [submit-queue is verifying that this PR is safe to merge] |
Automatic merge from submit-queue. |
Opening this to get feedback. We have a problem where there is no good way to rollout the console after editing the console config in its config map. Right now we have to tell users to delete the console pods, which is error prone and not friendly. The console only reads config at startup and doesn't watch for changes.
This adds a liveness probe that detects whether the config has changed on the filesystem using an md5 hash. If the config changes, the liveness probe fails and the container restarts. It's similar to what @aweiteka has done for Prometheus config changes. It's a bit of a hack, but it works.
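The mechanism can be sketched as a pod spec fragment along these lines (paths, image entrypoint, and probe timings here are illustrative assumptions, not the exact manifest from this PR): the container records a checksum of the mounted config at startup, and the exec liveness probe fails once the configmap content under the mount no longer matches, so the kubelet restarts the container.

```yaml
# Hypothetical sketch only; /etc/console/console-config.yaml, /tmp/config.md5,
# and the timing values are assumptions for illustration.
containers:
- name: console
  command:
  - /bin/sh
  - -c
  # Record the config checksum at startup, then exec the real process.
  - md5sum /etc/console/console-config.yaml > /tmp/config.md5 && exec /usr/bin/console
  livenessProbe:
    exec:
      # md5sum -c exits non-zero when the file no longer matches the
      # recorded checksum, which fails the probe and triggers a restart.
      command:
      - /bin/sh
      - -c
      - md5sum -c --status /tmp/config.md5
    initialDelaySeconds: 30
    periodSeconds: 60
```

Note that configmap volume updates propagate to the mount with some delay, so restarts across replicas are naturally staggered rather than simultaneous.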
@sdodson This would simplify the install because we'd no longer need to force a console rollout on config changes from the metrics and logging playbooks.
Any objections to this approach?
/assign @jwforres
/cc @smarterclayton @derekwaynecarr @deads2k
/hold
Holding for feedback :)
@jupierce fyi