extended broken: ResourceQuota should create a ResourceQuota and capture the life of a secret #9414


Closed
smarterclayton opened this issue Jun 17, 2016 · 13 comments
Labels: component/kubernetes, kind/test-flake, lifecycle/rotten, priority/P2

Comments

@smarterclayton (Contributor)

Appears to have regressed during the rebase; the test is currently disabled so that we can re-enable e2e.

[k8s.io] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret.
  /data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133

STEP: Creating a kubernetes client
Jun 17 19:00:12.979: INFO: >>> TestContext.KubeConfig: /tmp/openshift/openshift/test-extended/core/openshift.local.config/master/admin.kubeconfig

STEP: Building a namespace api object
Jun 17 19:00:13.046: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Discovering how many secrets are in namespace by default
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
Jun 17 19:00:15.303: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:17.303: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:19.304: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:21.302: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:23.305: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:25.302: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:27.316: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:29.302: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:31.303: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:33.306: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:35.303: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:37.302: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:39.306: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:41.304: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:43.305: INFO: resource secrets, expected 8, actual 9
Jun 17 19:00:43.309: INFO: resource secrets, expected 8, actual 9
STEP: Collecting events from namespace "e2e-tests-resourcequota-ib11v".
Jun 17 19:00:43.319: INFO: POD                                                  NODE          PHASE    GRACE  CONDITIONS
Jun 17 19:00:43.319: INFO: docker-registry-1-8xwym                              172.18.5.118  Running         [{Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-06-17 18:59:43 -0400 EDT}  }]
Jun 17 19:00:43.319: INFO: router-2-ohpwr                                       172.18.5.118  Running         [{Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-06-17 19:00:20 -0400 EDT}  }]
Jun 17 19:00:43.319: INFO: pod-configmaps-4e6f2107-34df-11e6-a104-0eb968085377  172.18.5.118  Pending         [{Ready False {0001-01-01 00:00:00 +0000 UTC} {2016-06-17 19:00:37 -0400 EDT} ContainersNotReady containers with unready status: [configmap-volume-test]}]
Jun 17 19:00:43.319: INFO: dns-test-4b233b05-34df-11e6-b635-0eb968085377        172.18.5.118  Pending         [{Ready False {0001-01-01 00:00:00 +0000 UTC} {2016-06-17 19:00:32 -0400 EDT} ContainersNotReady containers with unready status: [webserver querier jessie-querier]}]
Jun 17 19:00:43.319: INFO: pod-update-4ee8626c-34df-11e6-8b08-0eb968085377      172.18.5.118  Pending         [{Ready False {0001-01-01 00:00:00 +0000 UTC} {2016-06-17 19:00:38 -0400 EDT} ContainersNotReady containers with unready status: [nginx]}]
Jun 17 19:00:43.319: INFO: test-docker-1-build                                  172.18.5.118  Running         [{Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-06-17 19:00:05 -0400 EDT}  }]
Jun 17 19:00:43.319: INFO: 
Jun 17 19:00:43.322: INFO: 
Logging node info for node 172.18.5.118
Jun 17 19:00:43.325: INFO: Node Info: kind:"" apiVersion:"" 
Jun 17 19:00:43.325: INFO: 
Logging kubelet events for node 172.18.5.118
Jun 17 19:00:43.329: INFO: 
Logging pods the kubelet thinks is on node 172.18.5.118
Jun 17 19:00:43.341: INFO: test-docker-1-build started at 2016-06-17T18:59:56-04:00 (1 container statuses recorded)
Jun 17 19:00:43.341: INFO:  Container docker-build ready: true, restart count 0
Jun 17 19:00:43.341: INFO: pod-update-4ee8626c-34df-11e6-8b08-0eb968085377 started at 2016-06-17T19:00:38-04:00 (1 container statuses recorded)
Jun 17 19:00:43.342: INFO:  Container nginx ready: false, restart count 0
Jun 17 19:00:43.342: INFO: router-2-ohpwr started at 2016-06-17T19:00:00-04:00 (1 container statuses recorded)
Jun 17 19:00:43.342: INFO:  Container router ready: true, restart count 0
Jun 17 19:00:43.342: INFO: pod-configmaps-4e6f2107-34df-11e6-a104-0eb968085377 started at 2016-06-17T19:00:37-04:00 (1 container statuses recorded)
Jun 17 19:00:43.342: INFO:  Container configmap-volume-test ready: false, restart count 0
Jun 17 19:00:43.342: INFO: docker-registry-1-8xwym started at 2016-06-17T18:59:36-04:00 (1 container statuses recorded)
Jun 17 19:00:43.342: INFO:  Container registry ready: true, restart count 0
Jun 17 19:00:43.342: INFO: dns-test-4b233b05-34df-11e6-b635-0eb968085377 started at 2016-06-17T19:00:32-04:00 (3 container statuses recorded)
Jun 17 19:00:43.342: INFO:  Container jessie-querier ready: false, restart count 0
Jun 17 19:00:43.342: INFO:  Container querier ready: false, restart count 0
Jun 17 19:00:43.342: INFO:  Container webserver ready: false, restart count 0
W0617 19:00:43.348268   26794 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jun 17 19:00:44.113: INFO: 
Latency metrics for node 172.18.5.118
Jun 17 19:00:44.113: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:12.733535s}
Jun 17 19:00:44.113: INFO: Unknown output type: . Skipping.
Jun 17 19:00:44.113: INFO: Waiting up to 1m0s for all nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-ib11v" for this suite.


• Failure [36.164 seconds]
[k8s.io] ResourceQuota
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/framework/framework.go:505
  should create a ResourceQuota and capture the life of a secret. [It]
  /data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/resource_quota.go:133

  Expected error:
      <*errors.errorString | 0xc8200ce0d0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/resource_quota.go:110
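
For context on the failure above: the test first counts the secrets already present in the namespace, creates a ResourceQuota, and then polls the quota until status.used reports that same count. When the count never converges (expected 8, actual 9 in the log), the poll exhausts its timeout and surfaces the generic "timed out waiting for the condition" error. A minimal sketch of that style of wait loop, written against current client-go names rather than the 2016 vendored API (the helper name, intervals, and exact client calls are illustrative, not the upstream code):

    // Sketch only: poll a ResourceQuota's status.used until it matches the
    // expected usage, or fail with wait's "timed out waiting for the condition".
    package e2esketch

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForQuotaUsage is a hypothetical helper mirroring the test's wait step.
    func waitForQuotaUsage(c kubernetes.Interface, ns, name string, expected v1.ResourceList) error {
        return wait.Poll(2*time.Second, 30*time.Second, func() (bool, error) {
            rq, err := c.CoreV1().ResourceQuotas(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            // Keep polling until every expected resource (e.g. "secrets") shows up
            // in status.used with the expected quantity; the test logs the mismatch
            // ("expected 8, actual 9") on each failed attempt.
            for res, want := range expected {
                got, ok := rq.Status.Used[res]
                if !ok || got.Cmp(want) != 0 {
                    return false, nil
                }
            }
            return true, nil
        })
    }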
@smarterclayton (Contributor, author)

Any update on this?

@derekwaynecarr (Member)

I was out all last week, and have not had a chance to debug further.


@smarterclayton (Contributor, author)

Looks fixed now, re-enabling.

@marun (Contributor) commented Jan 17, 2017

@bparees (Contributor) commented Jan 19, 2017

@bparees (Contributor) commented Jan 20, 2017

@smarterclayton (Contributor, author) commented Jan 20, 2017 via email

@smarterclayton (Contributor, author)

We capture the secret count at the beginning of the test, but more secrets are still being created in the namespace afterwards. Add a loop to wait for the count to stabilize (will move this upstream if this nails the problem).
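
A rough sketch of that stabilization loop, assuming the fix simply re-lists the namespace's secrets until two consecutive counts agree before taking the baseline (the helper name, intervals, and client calls here are illustrative, not the actual change):

    // Sketch only: wait until the number of secrets in the namespace stops
    // changing before using it as the test's baseline. Another controller
    // (for example, the service-account token controller) can still be adding
    // secrets shortly after the namespace is created, which is what produced
    // the "expected 8, actual 9" mismatch above.
    package e2esketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForStableSecretCount(c kubernetes.Interface, ns string) (int, error) {
        prev := -1
        count := 0
        err := wait.Poll(2*time.Second, 1*time.Minute, func() (bool, error) {
            secrets, err := c.CoreV1().Secrets(ns).List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return false, err
            }
            count = len(secrets.Items)
            stable := count == prev // stable once two consecutive polls agree
            prev = count
            return stable, nil
        })
        return count, err
    }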

@smarterclayton (Contributor, author)

See #12605

@bparees (Contributor) commented Apr 11, 2017

@openshift-bot (Contributor)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label on Feb 13, 2018
@openshift-bot (Contributor)

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 16, 2018
@openshift-bot (Contributor)

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
