flake: DownwardAPI volume "open /etc/cpu_request: permission denied" #14569


Closed
smarterclayton opened this issue Jun 11, 2017 · 4 comments
Labels
component/storage kind/test-flake Categorizes issue or PR as related to test flakes. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/P1

@smarterclayton

https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/944/testReport/junit/(root)/Extended/_k8s_io__Downward_API_volume_should_provide_container_s_cpu_request__Conformance___Volume_/

Extended.[k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume] (from (empty))

Failing for the past 1 build (since Failed #944). Took 32 sec.
Stacktrace

/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originECgCBB/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
Expected error:
    <*errors.errorString | 0xc42059bd00>: {
        s: "expected pod \"downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874\" success: <nil>",
    }
    expected pod "downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874" success: <nil>
not to have occurred
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originECgCBB/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Standard Output

[BeforeEach] [Top Level]
  /openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originECgCBB/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Downward API volume
  /openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originECgCBB/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 10 00:50:00.381: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Jun 10 00:50:03.330: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Downward API volume
  /openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originECgCBB/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [Conformance] [Volume]
  /openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originECgCBB/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
STEP: Creating a pod to test downward API volume plugin
Jun 10 00:50:04.394: INFO: Waiting up to 5m0s for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 status to be success or failure
Jun 10 00:50:04.434: INFO: Waiting for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 in namespace 'e2e-tests-downward-api-hbf0z' status to be 'success or failure'(found phase: "Pending", readiness: false) (40.766635ms elapsed)
Jun 10 00:50:06.567: INFO: Waiting for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 in namespace 'e2e-tests-downward-api-hbf0z' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.173542024s elapsed)
Jun 10 00:50:08.653: INFO: Waiting for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 in namespace 'e2e-tests-downward-api-hbf0z' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.258915179s elapsed)
Jun 10 00:50:10.698: INFO: Waiting for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 in namespace 'e2e-tests-downward-api-hbf0z' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.303869203s elapsed)
Jun 10 00:50:12.739: INFO: Waiting for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 in namespace 'e2e-tests-downward-api-hbf0z' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.345715058s elapsed)
Jun 10 00:50:14.770: INFO: Waiting for pod downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874 in namespace 'e2e-tests-downward-api-hbf0z' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.376330838s elapsed)
Jun 10 00:50:16.940: INFO: Output of node "ci-prtest-5a37c28-2942-ig-n-3dpz" pod "downwardapi-volume-44d3cf5a-4d98-11e7-b509-0eab3fb1a874" container "client-container": error reading file content for "/etc/cpu_request": open /etc/cpu_request: permission denied
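For context, the failing test creates a pod that mounts a Downward API volume exposing the container's CPU request as a file at `/etc/cpu_request`, then has the `mounttest` image read that file back. A minimal manifest sketch of that setup (reconstructed from the log above, not copied from the test source; names and the `command` flag are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: gcr.io/google_containers/mounttest:0.8
    # mounttest reads the file and prints its content (flag assumed from the log's "error reading file content")
    command: ["/mt", "--file_content=/etc/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```

The "permission denied" here means the kubelet-written projected file was not readable by the container process, which points at file-mode/ownership handling in the downwardAPI volume plugin rather than at the test itself.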
@0xmichalis commented Aug 8, 2017

/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
Expected error:
    <*errors.errorString | 0xc420d63b90>: {
        s: "expected pod \"downwardapi-volume-fbad1ed4-7bf5-11e7-9731-0e568e383e42\" success: pod \"downwardapi-volume-fbad1ed4-7bf5-11e7-9731-0e568e383e42\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-08 00:56:47 -0400 EDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-08 00:56:47 -0400 EDT Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-08 00:56:47 -0400 EDT Reason: Message:}] Message: Reason: HostIP:10.128.0.3 PodIP:172.16.2.88 StartTime:2017-08-08 00:56:47 -0400 EDT InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"open /sys/fs/cgroup/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb019e6_7bf5_11e7_97a1_42010a800004.slice/docker-c47beb3e3d80d8627f34b7cfd038aa0de56e26efb91c902c03808eaa2193cf23.scope/cgroup.procs: no such file or directory\\\\\\\"\\\"\\n\",StartedAt:2017-08-08 00:56:49 -0400 EDT,FinishedAt:2017-08-08 00:56:49 -0400 EDT,ContainerID:docker://c47beb3e3d80d8627f34b7cfd038aa0de56e26efb91c902c03808eaa2193cf23,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest:0.8 ImageID:docker-pullable://gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 ContainerID:docker://c47beb3e3d80d8627f34b7cfd038aa0de56e26efb91c902c03808eaa2193cf23}] QOSClass:Burstable}",
    }
    expected pod "downwardapi-volume-fbad1ed4-7bf5-11e7-9731-0e568e383e42" success: pod "downwardapi-volume-fbad1ed4-7bf5-11e7-9731-0e568e383e42" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-08 00:56:47 -0400 EDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-08 00:56:47 -0400 EDT Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-08 00:56:47 -0400 EDT Reason: Message:}] Message: Reason: HostIP:10.128.0.3 PodIP:172.16.2.88 StartTime:2017-08-08 00:56:47 -0400 EDT InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"open /sys/fs/cgroup/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb019e6_7bf5_11e7_97a1_42010a800004.slice/docker-c47beb3e3d80d8627f34b7cfd038aa0de56e26efb91c902c03808eaa2193cf23.scope/cgroup.procs: no such file or directory\\\"\"\n",StartedAt:2017-08-08 00:56:49 -0400 EDT,FinishedAt:2017-08-08 00:56:49 -0400 EDT,ContainerID:docker://c47beb3e3d80d8627f34b7cfd038aa0de56e26efb91c902c03808eaa2193cf23,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest:0.8 ImageID:docker-pullable://gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 ContainerID:docker://c47beb3e3d80d8627f34b7cfd038aa0de56e26efb91c902c03808eaa2193cf23}] QOSClass:Burstable}
not to have occurred
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2224

https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_conformance_gce/5511/

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 17, 2018
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 19, 2018
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

6 participants