
[k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected #9397


Closed
mfojtik opened this issue Jun 17, 2016 · 8 comments
Assignees
Labels
area/tests component/kubernetes kind/test-flake Categorizes issue or PR as related to test flakes. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. priority/P2

Comments

@mfojtik
Contributor

mfojtik commented Jun 17, 2016

Seen: https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/2224/consoleText

(It seems like a bug in error parsing: the test expects an "invalid" validation error but instead gets a JSON syntax error.)

[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/framework/framework.go:505
  validates that a pod with an invalid NodeAffinity is rejected [It]
  /data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:437

  Jun 17 00:50:24.043: Expect error of invalid, got : invalid character '}' looking for beginning of object key string

  /data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:428
@mfojtik mfojtik added priority/P2 area/tests kind/test-flake Categorizes issue or PR as related to test flakes. labels Jun 17, 2016
@mfojtik
Contributor Author

mfojtik commented Jun 17, 2016

@smarterclayton I wonder if we need this test when it is not marked as [Conformance] in upstream.

@smarterclayton
Contributor

Is NodeAffinity new in 1.3?

@mfojtik
Contributor Author

mfojtik commented Jun 30, 2016

@smarterclayton I think it is (kubernetes/kubernetes#22985).

@ncdc ncdc assigned ingvagabund and unassigned ncdc Aug 25, 2016
@ingvagabund
Member

@mfojtik is it still reproducible?

@mfojtik
Contributor Author

mfojtik commented Aug 25, 2016

@ingvagabund I don't think so

@enj
Contributor

enj commented Mar 21, 2017

Possibly seen in https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/158/#showFailuresLink for #13466

Regression

Extended.[k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected (from (empty))
Failing for the past 1 build (Since Failed#158 )
Took 10 sec.
Stacktrace

/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b549f0>: {
        s: "Namespace e2e-tests-emptydir-qjphb is active",
    }
    Namespace e2e-tests-emptydir-qjphb is active
not to have occurred
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Standard Output

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar 21 00:41:29.750: INFO: >>> kubeConfig: /etc/origin/master/admin.kubeconfig

STEP: Building a namespace api object
Mar 21 00:41:29.856: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar 21 00:41:30.002: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 21 00:41:30.005: INFO: Waiting for terminating namespaces to be deleted...
Mar 21 00:41:30.007: INFO: Unexpected error occurred: Namespace e2e-tests-emptydir-qjphb is active
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sched-pred-xxpnl".
STEP: Found 0 events.
Mar 21 00:41:30.014: INFO: POD                       NODE                         PHASE    GRACE  CONDITIONS
Mar 21 00:41:30.014: INFO: docker-registry-1-wrj69   ip-172-18-1-91.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:27 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:33 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:27 -0400 EDT  }]
Mar 21 00:41:30.014: INFO: registry-console-1-clwv9  ip-172-18-1-91.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:32 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:52 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:32 -0400 EDT  }]
Mar 21 00:41:30.014: INFO: router-1-qm5v1            ip-172-18-1-91.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:05:44 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:06:04 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-21 00:05:44 -0400 EDT  }]
Mar 21 00:41:30.014: INFO: 
Mar 21 00:41:30.016: INFO: 
Logging node info for node ip-172-18-1-91.ec2.internal
Mar 21 00:41:30.018: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Mar 21 00:41:30.018: INFO: 
Logging kubelet events for node ip-172-18-1-91.ec2.internal
Mar 21 00:41:30.019: INFO: 
Logging pods the kubelet thinks is on node ip-172-18-1-91.ec2.internal
Mar 21 00:41:30.024: INFO: registry-console-1-clwv9 started at 2017-03-21 00:06:32 -0400 EDT (0+1 container statuses recorded)
Mar 21 00:41:30.024: INFO: 	Container registry-console ready: true, restart count 0
Mar 21 00:41:30.024: INFO: router-1-qm5v1 started at 2017-03-21 00:05:44 -0400 EDT (0+1 container statuses recorded)
Mar 21 00:41:30.024: INFO: 	Container router ready: true, restart count 0
Mar 21 00:41:30.024: INFO: docker-registry-1-wrj69 started at 2017-03-21 00:06:27 -0400 EDT (0+1 container statuses recorded)
Mar 21 00:41:30.024: INFO: 	Container registry ready: true, restart count 0
Mar 21 00:41:30.119: INFO: 
Latency metrics for node ip-172-18-1-91.ec2.internal
Mar 21 00:41:30.119: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:38.841105s}
Mar 21 00:41:30.119: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.158729s}
Mar 21 00:41:30.119: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:25.916558s}
Mar 21 00:41:30.119: INFO: {Operation:SyncPod Method:container_manager_latency_microseconds Quantile:0.99 Latency:11.764217s}
STEP: Dumping a list of prepulled images on each node
Mar 21 00:41:30.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-xxpnl" for this suite.
Mar 21 00:41:40.202: INFO: namespace: e2e-tests-sched-pred-xxpnl, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 9, 2018
@0xmichalis
Contributor

Haven't seen this for some time.
