Deployment controller w/ caches #9327


Merged

merged 2 commits into openshift:master from deployment-controller-w-caches on Jun 30, 2016

Conversation

0xmichalis
Contributor

@0xmichalis 0xmichalis commented Jun 14, 2016

@smarterclayton @deads2k this is the follow-up to #9291 that ensures we sync the deployment on UPDATE and DELETE events for deployer pods. It also paves the way for #9296.

Closes #9575
Closes #7190
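
As a rough illustration of the idea (not the PR's exact wiring; the handler type, the c.queue field, and the helper names are assumptions for the sketch), deployer pod UPDATE and DELETE events get mapped back to the owning deployment's cache key and enqueued for a re-sync:

podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	UpdateFunc: func(old, cur interface{}) {
		if pod, ok := cur.(*kapi.Pod); ok {
			// namespace/name of the deployment (RC) that owns this deployer pod
			c.queue.Add(pod.Namespace + "/" + deployutil.DeploymentNameFor(pod))
		}
	},
	DeleteFunc: func(obj interface{}) {
		// Deletes may arrive as tombstones; a real handler would unwrap
		// cache.DeletedFinalStateUnknown before the type assertion.
		if pod, ok := obj.(*kapi.Pod); ok {
			c.queue.Add(pod.Namespace + "/" + deployutil.DeploymentNameFor(pod))
		}
	},
})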

@0xmichalis
Contributor Author

cc: @ironcladlou @mfojtik

@0xmichalis 0xmichalis changed the title from "[WIP] Deployment controller w/ caches" to "Deployment controller w/ caches" on Jun 27, 2016
@0xmichalis
Contributor Author

I still need to update tests but this is ready for review @deads2k @smarterclayton @mfojtik

}

func (c *DeploymentController) worker() {
	for {
Contributor

Why not follow the upstream pattern for worker here?

Contributor Author

Because it is broken: workers never exit when the queue shuts down.
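
For contrast, a minimal sketch of a worker that does exit on queue shutdown (c.queue is assumed to be a workqueue-style queue whose Get returns a shutdown flag, and c.work is a hypothetical per-key sync func):

func (c *DeploymentController) worker() {
	for {
		item, quit := c.queue.Get()
		if quit {
			// The queue was shut down; return instead of spinning forever.
			return
		}
		key := item.(string)
		if err := c.work(key); err != nil {
			glog.Errorf("error syncing deployment %q: %v", key, err)
		}
		// Always mark the item Done so the queue's bookkeeping stays correct.
		c.queue.Done(item)
	}
}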

Contributor

@mfojtik mfojtik Jun 27, 2016

@Kargakis I like that the workers stay at work even when the assembly line shuts down... reminds me of us ;-)

Contributor Author

But it's not good, both health- and mental-wise :)

@0xmichalis
Contributor Author

[test]

@0xmichalis
Contributor Author

Updated unit tests (1137a5d)! Not a nice experience.

@soltysh
Contributor

soltysh commented Jun 27, 2016

LGTM, but govet does not like you ;)

// Handle processes deployment and either creates a deployer pod or responds
// to a terminal deployment status.
func (c *DeploymentController) Handle(deployment *kapi.ReplicationController) error {
	currentStatus := deployutil.DeploymentStatusFor(deployment)
	nextStatus := currentStatus
	deploymentScaled := false

	deployerPodName := deployutil.DeployerPodNameForDeployment(deployment.Name)
	deployer, deployerErr := c.pn.Pods(deployment.Namespace).Get(deployerPodName)
Contributor

Why is this Get live?

Contributor

Once this is non-live, make sure to fend off mutations.

Contributor Author

This is the deployer pod and I am not sure I want to make it susceptible to a stale cache. Especially when we collapse the deployer pod controller on this controller, we will need its latest status in order to make calls for the status of the deployment. Isn't this a valid concern for having it live?

Contributor

This is the deployer pod and I am not sure I want to make it susceptible to a stale cache. Especially when we collapse the deployer pod controller on this controller, we will need its latest status in order to make calls for the status of the deployment. Isn't this a valid concern for having it live?

Live checks with multiple etcds are surprisingly unreliable. Didn't we get here from either a pod update (you really want to use a matching cache, otherwise you get an update, do a get, hit a stale etcd, and get a 404) or an RC (we'll get a pod update later)?

Contributor Author

Didn't we get here from either a pod update

Hm, I see. I think I agree. I will rework this to use the cache.
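
A rough sketch of what the cache-based lookup could look like (c.podStore and the surrounding error handling are assumptions, not the final code in this PR):

deployerPodName := deployutil.DeployerPodNameForDeployment(deployment.Name)
obj, exists, err := c.podStore.GetByKey(deployment.Namespace + "/" + deployerPodName)
if err != nil {
	return err
}
var deployer *kapi.Pod
if exists {
	// Cached objects are shared; treat them as read-only and copy before
	// mutating, per the "fend off mutations" note above.
	deployer = obj.(*kapi.Pod)
}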

@deads2k
Contributor

deads2k commented Jun 28, 2016

lgtm

@soltysh
Contributor

soltysh commented Jun 28, 2016

@bparees out of curiosity, do you plan to do something similar for the build controller, i.e. switch to using caches like @Kargakis did for deployments?

@0xmichalis
Contributor Author

@soltysh there is #9551

@0xmichalis
Contributor Author

For this pull, I still need to fix the unit tests :/

@bparees
Contributor

bparees commented Jun 28, 2016

@soltysh yeah, I think @Kargakis opened an issue against us to do it, so yes: #9551

@0xmichalis
Contributor Author

Tests finally updated. @deads2k, do we need the queue wrapper for our rate-limiting queue?

@0xmichalis
Contributor Author

#9638, [test]

@openshift-bot
Contributor

Evaluated for origin test up to a62140f

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5581/)

@0xmichalis
Contributor Author

[merge]

@openshift-bot
Contributor

openshift-bot commented Jun 29, 2016

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5581/) (Image: devenv-rhel7_4495)

@openshift-bot
Contributor

Evaluated for origin merge up to a62140f

@openshift-bot openshift-bot merged commit 9f0a587 into openshift:master Jun 30, 2016
@0xmichalis 0xmichalis deleted the deployment-controller-w-caches branch June 30, 2016 23:38