Fix HPA scaling of deployment configs #19437

Merged

liggitt merged 2 commits into openshift:release-3.7 on May 8, 2018

Conversation

liggitt (Contributor) commented Apr 19, 2018

Fixes #19045

Verified it fixes HPA autoscaling of deploymentconfigs:

$ openshift version
openshift v3.7.2+b9fecd2-7
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

$ oc create -f examples/deployment/recreate-example.yaml 
service "recreate-example" created
deploymentconfig "recreate-example" created
imagestream "recreate-example" created
route "recreate-example" created

$ oc autoscale dc/recreate-example --max=5
deploymentconfig "recreate-example" autoscaled

$ oc get hpa -o yaml
apiVersion: v1
items:
- apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    annotations:
      autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2018-04-19T17:18:00Z","reason":"SucceededRescale","message":"the HPA controller was able to update the target scale to 1"}]'
    creationTimestamp: 2018-04-19T17:17:30Z
    name: recreate-example
    namespace: default
    resourceVersion: "1085"
    selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/recreate-example
    uid: 8a7ae056-43f5-11e8-a737-6a000155e300
  spec:
    maxReplicas: 5
    minReplicas: 1
    scaleTargetRef:
      apiVersion: v1
      kind: DeploymentConfig
      name: recreate-example
    targetCPUUtilizationPercentage: 80
  status:
    currentReplicas: 0
    desiredReplicas: 1
    lastScaleTime: 2018-04-19T17:22:00Z
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
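
For reference, the autoscaler created by oc autoscale above corresponds roughly to this declarative manifest (a minimal sketch reconstructed from the fields in the oc get output; note that scaleTargetRef uses apiVersion: v1 with kind: DeploymentConfig, the combination the fixed controller resolves):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: recreate-example
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: v1
    kind: DeploymentConfig
    name: recreate-example
  targetCPUUtilizationPercentage: 80  # default CPU target applied by oc autoscale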

Verified scale integration test still passes as well:

$ hack/test-integration.sh Scale
++ Building go targets for darwin/amd64: test/integration/integration.test
[INFO] [13:26:06-0400] hack/build-go.sh exited with code 0 after 00h 00m 48s
[INFO] [13:26:06-0400] [CLEANUP] Cleaning up temporary directories
=== RUN   TestIntegration
=== RUN   TestIntegration/TestDeployScale
--- PASS: TestIntegration (0.05s)
	runner_test.go:79: using existing binary
    --- PASS: TestIntegration/TestDeployScale (11.48s)

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: liggitt

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 19, 2018
openshift-ci-robot commented Apr 19, 2018

@liggitt: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
ci/openshift-jenkins/gcp b9fecd2 link /test gcp
ci/openshift-jenkins/extended_conformance_install b9fecd2 link /test extended_conformance_install
ci/openshift-jenkins/end_to_end b9fecd2 link /test end_to_end
ci/openshift-jenkins/verify b9fecd2 link /test verify
ci/openshift-jenkins/cmd b9fecd2 link /test cmd

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@alikhajeh1

Any update on this being merged? It'll help users who are willing to build packages and update their deployments...

liggitt (Contributor, Author) commented May 7, 2018

@soltysh I don't object to merging this...

soltysh (Contributor) left a comment

/lgtm

But it looks like you're gonna have to greenbutton it.

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 8, 2018
@liggitt liggitt merged commit 5195979 into openshift:release-3.7 May 8, 2018
aramalipoor commented:

Any idea when the next 3.7.x version will be released? This fix is much needed.

liggitt (Contributor, Author) commented May 15, 2018

I don't think additional 3.7 releases are planned. 3.9 is already released and resolves this issue.

kkorada commented Jun 8, 2018

I am using 3.9 but facing the same issue. Could you please help me?

$ oc version
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://master.openshift.capg-cloud.com:8443
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657

HPA config:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: >-
      [{"type":"AbleToScale","status":"False","lastTransitionTime":"2018-06-08T16:26:46Z","reason":"FailedGetScale","message":"the
      HPA controller was unable to get the target's current scale: no matches
      for extensions/, Kind=DeploymentConfig"}]
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: '2018-06-08T16:26:16Z'
  labels:
    app: lb
  name: lb
  namespace: lb
  resourceVersion: '10212'
  selfLink: /apis/autoscaling/v1/namespaces/lb/horizontalpodautoscalers/lb
  uid: ab06350c-6b38-11e8-9f4d-06b110d63564
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: DeploymentConfig
    name: lb
  targetCPUUtilizationPercentage: 50
status:
  currentReplicas: 0
  desiredReplicas: 0

Issue:

$ oc get hpa -n lb
NAME      REFERENCE             TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
lb        DeploymentConfig/lb   <unknown> / 50%   1         5         0          49m
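
The <unknown> target matches the FailedGetScale condition embedded in the annotation above: the controller could not resolve the scale subresource for extensions/, Kind=DeploymentConfig, so it never got as far as reading CPU metrics. Assuming the same name and namespace, the same condition also shows up under Conditions in:

$ oc describe hpa lb -n lb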

kkorada commented Jun 8, 2018

Hey, I got this fixed by changing the scaleTargetRef's apiVersion in the HPA config from

apiVersion: extensions/v1beta1

to

apiVersion: v1

But when I create the HPA through the web console, it still emits the old apiVersion (apiVersion: extensions/v1beta1), so I have to go in and change it every time.
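
For anyone hitting the same error, a minimal sketch of the corrected spec (identical to the config above except for the scaleTargetRef apiVersion):

spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: v1  # was extensions/v1beta1, which the controller cannot map to DeploymentConfig
    kind: DeploymentConfig
    name: lb
  targetCPUUtilizationPercentage: 50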
