Deployment controller w/ caches #9327
Conversation
cc: @ironcladlou @mfojtik
I still need to update tests, but this is ready for review @deads2k @smarterclayton @mfojtik
func (c *DeploymentController) worker() {
	for {
Why not follow the upstream pattern for worker here?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Because it is broken: workers never exit when the queue shuts down.
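For reference, a shutdown-aware worker loop might look like the sketch below. It assumes a workqueue-style queue whose Get reports shutdown, and c.work is a hypothetical per-key sync function; this is an illustration, not the PR's code.

func (c *DeploymentController) worker() {
	for {
		key, quit := c.queue.Get()
		if quit {
			// The queue was shut down; exit instead of looping forever.
			return
		}
		// c.work is a hypothetical per-key sync; retry policy elided.
		if err := c.work(key); err != nil {
			glog.Errorf("error syncing %v: %v", key, err)
		}
		c.queue.Done(key)
	}
}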
@Kargakis I like that workers stay at work even when the assembly line shuts down... reminds me of us ;-)
But it's not good, health- or mental-wise :)
[test]
Updated unit tests (1137a5d)! Not a nice experience.
LGTM, but govet does not like you ;)
// Handle processes deployment and either creates a deployer pod or responds
// to a terminal deployment status.
func (c *DeploymentController) Handle(deployment *kapi.ReplicationController) error {
	currentStatus := deployutil.DeploymentStatusFor(deployment)
	nextStatus := currentStatus
	deploymentScaled := false

	deployerPodName := deployutil.DeployerPodNameForDeployment(deployment.Name)
	deployer, deployerErr := c.pn.Pods(deployment.Namespace).Get(deployerPodName)
Why is this get live?
Once this is non-live, make sure to fend off mutations.
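A sketch of what fending off mutations could look like, assuming the scheme-based deep copy helper of this era (kapi.Scheme.Copy); treat the names as illustrative, not the PR's code:

// Objects returned from a shared cache are shared with other readers,
// so deep-copy before making any change.
objCopy, err := kapi.Scheme.Copy(deployer)
if err != nil {
	return err
}
deployer = objCopy.(*kapi.Pod)
// deployer is now safe to mutate.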
This is the deployer pod and I am not sure I want to make it susceptible to a stale cache. Especially when we collapse the deployer pod controller on this controller, we will need its latest status in order to make calls for the status of the deployment. Isn't this a valid concern for having it live?
> This is the deployer pod and I am not sure I want to make it susceptible to a stale cache. Especially when we collapse the deployer pod controller on this controller, we will need its latest status in order to make calls for the status of the deployment. Isn't this a valid concern for having it live?
Live checks with multiple etcds are surprisingly unreliable. Didn't we get here from either a pod update (you really want to use a matching cache, otherwise you get an update, do a get, hit a stale etcd, and get a 404) or an RC (we'll get a pod update later)?
> Didn't we get here from either a pod update

Hm, I see. I think I agree. I will rework to use the cache.
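For illustration only, a cache-backed lookup could replace the live Get along these lines; c.podStore and the key shape are assumptions, not the PR's code:

key := deployment.Namespace + "/" + deployerPodName
obj, exists, err := c.podStore.GetByKey(key)
if err != nil {
	return err
}
if !exists {
	// The cache has not observed the pod yet; a later pod event will
	// trigger another sync, so this is not treated as a hard 404.
	return nil
}
deployer := obj.(*kapi.Pod)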
lgtm
For this pull, I still need to fix unit tests :/
Tests updated, finally. @deads2k do we need the queue wrapper for our rate-limiting queue?
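For context, upstream's workqueue package provides a rate-limiting queue that already bundles retry backoff, which may make a separate wrapper unnecessary; a sketch assuming that package:

queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
queue.AddRateLimited(key) // re-enqueue key with exponential backoff after a failure
queue.Forget(key)         // clear backoff state once the key syncs cleanly
queue.Done(key)           // mark processing of key as finished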
#9638, [test]
Evaluated for origin test up to a62140f
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5581/)
[merge]
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5581/) (Image: devenv-rhel7_4495)
Evaluated for origin merge up to a62140f
@smarterclayton @deads2k this is the followup to #9291 that will ensure we sync the deployment on UPDATE and DELETE events for deployer pods. It also paves the way for #9296.
Closes #9575
Closes #7190
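As a closing illustration of syncing the deployment on pod UPDATE and DELETE events, a sketch using the informer framework of this era; enqueueDeploymentForPod is a hypothetical helper that maps a deployer pod to its owning RC's queue key:

podInformer.AddEventHandler(framework.ResourceEventHandlerFuncs{
	UpdateFunc: func(old, cur interface{}) {
		// A deployer pod changed; resync the deployment that owns it.
		c.enqueueDeploymentForPod(cur.(*kapi.Pod))
	},
	DeleteFunc: func(obj interface{}) {
		// Tombstone handling simplified for the sketch.
		if pod, ok := obj.(*kapi.Pod); ok {
			c.enqueueDeploymentForPod(pod)
		}
	},
})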