
Make bootstrapping opt out and remove the legacy master install path #7486


Merged

Conversation

smarterclayton
Contributor

@smarterclayton smarterclayton commented Mar 11, 2018

Change the default for node bootstrapping to true: all nodes will bootstrap unless they opt out. During setup, we pre-configure all nodes that elect for bootstrapping before the master is configured, then install the control plane, then configure any nodes that opted out of bootstrapping. I'd like to completely remove the old node path, or perhaps move it to an "add new nodes to the cluster" sort of config, until we know whether users are ready for it to be removed.
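
As a rough illustration of the new default, a minimal inventory sketch of what opting a single node out might look like; the openshift_node_bootstrap host variable here is an assumption for illustration, not a name confirmed by this PR:

# hosts - hypothetical sketch; bootstrapping is now the default, so only
# the node taking the legacy install path needs an explicit opt-out flag
[nodes]
node1.example.com                                  # bootstraps (new default)
node2.example.com openshift_node_bootstrap=false   # opts out; legacy node path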

Remove the openshift_master role - it is dead. Copied in a few changes that happened in master before the role was killed. Copied the upgrades over as well, although nothing has been done there yet.

Follow-on to #6916

@openshift-ci-robot openshift-ci-robot added the size/XXL label (denotes a PR that changes 1000+ lines, ignoring generated files) on Mar 11, 2018
@smarterclayton
Contributor Author

/retest

@smarterclayton smarterclayton changed the title from "Switch the master to always run with bootstrapping on" to "Make bootstrapping opt out and remove the legacy master install path" on Mar 11, 2018
@smarterclayton smarterclayton force-pushed the remove_non_static branch 5 times, most recently from 2aede15 to 18aede1, on March 11, 2018 at 22:30
@smarterclayton
Contributor Author

/retest

@derekwaynecarr
Member

not an expert in this code space yet, but this looks good to me.

@smarterclayton
Contributor Author

Interesting - looks like when the kubelet creates the api mirror pod, seeing the new pod come back from the api server causes the containers to get restarted:

Mar 12 05:28:19 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:19.511178   27452 eviction_manager.go:325] eviction manager: no resources are starved
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.141070   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.146134   27452 generic.go:147] GenericPLEG: 1d09845b8ef487606708b4edca6f4bf5/6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829: running -> exited
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.146161   27452 generic.go:147] GenericPLEG: 72f8e7515178a553fbac43fd4098194e/42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1: running -> exited
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.148708   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829"] for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.155833   27452 generic.go:380] PLEG: Write status for master-controllers-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"1d09845b8ef487606708b4edca6f4bf5", Name:"master-controllers-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42209a540)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421be5f40)}} (err: <nil>)
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.155933   27452 kubelet.go:1882] SyncLoop (PLEG): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", event: &pleg.PodLifecycleEvent{ID:"1d09845b8ef487606708b4edca6f4bf5", Type:"ContainerDied", Data:"6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829"}
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.155973   27452 kubelet_pods.go:1363] Generating status for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:20.156034   27452 pod_container_deletor.go:77] Container "6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829" not found in pod's containers
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.156080   27452 kubelet_pods.go:1363] Generating status for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.156233   27452 status_manager.go:353] Ignoring same status for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:27:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.18.15.75 PodIP:172.18.15.75 StartTime:2018-03-12 05:26:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:controllers State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-03-12 05:27:35 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:docker.io/openshift/origin:v3.10.0 ImageID:docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 ContainerID:docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47}] QOSClass:BestEffort}
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.156459   27452 kubelet.go:1606] Creating a mirror pod for static pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.163901   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1"] for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.165389   27452 volume_manager.go:343] Waiting for volumes to attach and mount for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.165527   27452 config.go:297] Setting pods for source api
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.165573   27452 config.go:405] Receiving a new pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(2cdabc48-25b6-11e8-884d-0ed4495c5ab4)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.165638   27452 kubelet.go:1837] SyncLoop (ADD, "api"): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(2cdabc48-25b6-11e8-884d-0ed4495c5ab4)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.174887   27452 generic.go:380] PLEG: Write status for master-api-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"72f8e7515178a553fbac43fd4098194e", Name:"master-api-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42209a9a0)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4215ad900)}} (err: <nil>)
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.174952   27452 kubelet.go:1882] SyncLoop (PLEG): "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", event: &pleg.PodLifecycleEvent{ID:"72f8e7515178a553fbac43fd4098194e", Type:"ContainerDied", Data:"42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1"}
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.174988   27452 kubelet_pods.go:1363] Generating status for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:20.175048   27452 pod_container_deletor.go:77] Container "42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1" not found in pod's containers
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.175096   27452 kubelet_pods.go:1363] Generating status for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.175436   27452 status_manager.go:353] Ignoring same status for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:27:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.18.15.75 PodIP:172.18.15.75 StartTime:2018-03-12 05:26:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:api State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-03-12 05:27:36 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:docker.io/openshift/origin:v3.10.0 ImageID:docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 ContainerID:docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71}] QOSClass:BestEffort}
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.175681   27452 kubelet.go:1606] Creating a mirror pod for static pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.180682   27452 config.go:297] Setting pods for source api
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.180720   27452 config.go:405] Receiving a new pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(2cdd3f47-25b6-11e8-884d-0ed4495c5ab4)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.180794   27452 kubelet.go:1837] SyncLoop (ADD, "api"): "master-api-ip-172-18-15-75.ec2.internal_kube-system(2cdd3f47-25b6-11e8-884d-0ed4495c5ab4)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.180805   27452 volume_manager.go:343] Waiting for volumes to attach and mount for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.186931   27452 desired_state_of_world_populator.go:299] Added volume "master-config" (volSpec="master-config") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.186963   27452 desired_state_of_world_populator.go:299] Added volume "master-cloud-provider" (volSpec="master-cloud-provider") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.186985   27452 desired_state_of_world_populator.go:299] Added volume "master-data" (volSpec="master-data") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.187040   27452 desired_state_of_world_populator.go:299] Added volume "master-config" (volSpec="master-config") for pod "1d09845b8ef487606708b4edca6f4bf5" to desired state.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.187062   27452 desired_state_of_world_populator.go:299] Added volume "master-cloud-provider" (volSpec="master-cloud-provider") for pod "1d09845b8ef487606708b4edca6f4bf5" to desired state.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.465723   27452 volume_manager.go:371] All volumes are attached and mounted for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.465767   27452 kuberuntime_manager.go:442] Syncing Pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:master-controllers-ip-172-18-15-75.ec2.internal,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-18-15-75.ec2.internal,UID:1d09845b8ef487606708b4edca6f4bf5,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{openshift.io/component: controllers,openshift.io/control-plane: true,},Annotations:map[string]string{kubernetes.io/config.hash: 1d09845b8ef487606708b4edca6f4bf5,kubernetes.io/config.seen: 2018-03-12T05:26:07.044046856Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{controllers openshift/origin:v3.10.0 [/bin/bash -c] [#!/bin/bash
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: ]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8444,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:ip-172-18-15-75.ec2.internal,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12.209688919 +0000 UTC m=+5.401157891  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.466097   27452 kuberuntime_manager.go:403] No ready sandbox for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)" can be found. Need to start a new one
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.466117   27452 kuberuntime_manager.go:571] computePodActions got {KillPod:true CreateSandbox:true SandboxID:6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829 Attempt:1 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.466172   27452 kuberuntime_manager.go:589] Stopping PodSandbox for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", will start new one
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.466210   27452 kuberuntime_container.go:578] Killing container "docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47" with 30 second grace period
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.466711   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-controllers-ip-172-18-15-75.ec2.internal", UID:"1d09845b8ef487606708b4edca6f4bf5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SandboxChanged' Pod sandbox changed, it will be killed and re-created.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481071   27452 volume_manager.go:371] All volumes are attached and mounted for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481102   27452 kuberuntime_manager.go:442] Syncing Pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:master-api-ip-172-18-15-75.ec2.internal,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/master-api-ip-172-18-15-75.ec2.internal,UID:72f8e7515178a553fbac43fd4098194e,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{openshift.io/component: api,openshift.io/control-plane: true,},Annotations:map[string]string{kubernetes.io/config.hash: 72f8e7515178a553fbac43fd4098194e,kubernetes.io/config.seen: 2018-03-12T05:26:07.044025775Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-data {&HostPathVolumeSource{Path:/var/lib/origin,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{api openshift/origin:v3.10.0 [/bin/bash -c] [#!/bin/bash
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master api --config=/etc/origin/master/master-config.yaml
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: ]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>} {master-data false /var/lib/origin/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:ip-172-18-15-75.ec2.internal,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12.201401127 +0000 UTC m=+5.392870154  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481363   27452 kuberuntime_manager.go:403] No ready sandbox for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)" can be found. Need to start a new one
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481382   27452 kuberuntime_manager.go:571] computePodActions got {KillPod:true CreateSandbox:true SandboxID:42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1 Attempt:1 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481419   27452 kuberuntime_manager.go:589] Stopping PodSandbox for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", will start new one
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481440   27452 kuberuntime_container.go:578] Killing container "docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71" with 30 second grace period
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.481851   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SandboxChanged' Pod sandbox changed, it will be killed and re-created.
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.530829   27452 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.530876   27452 reflector.go:428] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Watch close - *v1.Service total 0 items received
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:20.531464   27452 event.go:209] Unable to write event: 'Post https://ip-172-18-15-75.ec2.internal:8443/api/v1/namespaces/kube-system/events: unexpected EOF; some request body already written' (may retry after sleeping)
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.531492   27452 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.531512   27452 reflector.go:428] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Watch close - *v1.Pod total 2 items received
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.531884   27452 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:20.531911   27452 reflector.go:428] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Watch close - *v1.Node total 2 items received
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:20.534961   27452 reflector.go:322] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to watch *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&resourceVersion=849&timeoutSeconds=383&watch=true: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:20.535017   27452 reflector.go:322] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to watch *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?resourceVersion=46&timeoutSeconds=560&watch=true: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:20 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:20.535083   27452 reflector.go:322] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&resourceVersion=856&timeoutSeconds=569&watch=true: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.169876   27452 kuberuntime_container.go:602] Container "docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47" exited normally
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.170034   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-controllers-ip-172-18-15-75.ec2.internal", UID:"1d09845b8ef487606708b4edca6f4bf5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{controllers}"}): type: 'Normal' reason: 'Killing' Killing container with id docker://controllers:Need to kill Pod
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.175097   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.196404   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.196439   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.525019   27452 kuberuntime_manager.go:641] Creating sandbox for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.526961   27452 docker_service.go:441] Setting cgroup parent to: "kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.528334   27452 kuberuntime_container.go:602] Container "docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71" exited normally
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.528455   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}): type: 'Normal' reason: 'Killing' Killing container with id docker://api:Need to kill Pod
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.532442   27452 generic.go:147] GenericPLEG: 72f8e7515178a553fbac43fd4098194e/423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71: running -> exited
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.533280   27452 generic.go:147] GenericPLEG: 1d09845b8ef487606708b4edca6f4bf5/94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47: running -> exited
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.535318   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.536791   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.538040   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.538771   27452 kuberuntime_manager.go:641] Creating sandbox for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:21.539160   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:21.541695   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:21.541704   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.554075   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1"] for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.855017   27452 docker_service.go:441] Setting cgroup parent to: "kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.862017   27452 generic.go:380] PLEG: Write status for master-api-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"72f8e7515178a553fbac43fd4098194e", Name:"master-api-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4210be540)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc42212a1e0)}} (err: <nil>)
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.862114   27452 kubelet.go:1882] SyncLoop (PLEG): "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", event: &pleg.PodLifecycleEvent{ID:"72f8e7515178a553fbac43fd4098194e", Type:"ContainerDied", Data:"423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71"}
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.862143   27452 kubelet_pods.go:1363] Generating status for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.864805   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829"] for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.869505   27452 helpers.go:125] Unable to get network stats from pid 27848: couldn't read network stats: failure opening /proc/27848/net/dev: open /proc/27848/net/dev: no such file or directory
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.871852   27452 generic.go:380] PLEG: Write status for master-controllers-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"1d09845b8ef487606708b4edca6f4bf5", Name:"master-controllers-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4210be700)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4209281e0)}} (err: <nil>)
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.871914   27452 kubelet.go:1882] SyncLoop (PLEG): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", event: &pleg.PodLifecycleEvent{ID:"1d09845b8ef487606708b4edca6f4bf5", Type:"ContainerDied", Data:"94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47"}
Mar 12 05:28:21 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:21.871942   27452 kubelet_pods.go:1363] Generating status for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:22.539377   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:22.540792   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:22.541893   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:22.542793   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:22.544927   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:22.545771   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:22 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:22.872093   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.196414   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.196472   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.239746   27452 generic.go:147] GenericPLEG: 1d09845b8ef487606708b4edca6f4bf5/6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e: non-existent -> exited
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.409550   27452 factory.go:112] Using factory "docker" for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e.scope"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.501639   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e" "6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829"] for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.502594   27452 manager.go:970] Added container: "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e.scope" (aliases: [k8s_POD_master-controllers-ip-172-18-15-75.ec2.internal_kube-system_1d09845b8ef487606708b4edca6f4bf5_1 6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e], namespace: "docker")
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.502848   27452 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e.scope 2018-03-12 05:28:23.1496998 +0000 UTC containerCreation {<nil>}}
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.502894   27452 container.go:448] Start housekeeping for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e.scope"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.503996   27452 docker_sandbox.go:658] Will attempt to re-write config file /var/lib/docker/containers/6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e/resolv.conf with:
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: [nameserver 172.18.15.75 search cluster.local ec2.internal]
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.508776   27452 kuberuntime_manager.go:655] Created PodSandbox "6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e" for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.513833   27452 kuberuntime_manager.go:725] Creating container &Container{Name:controllers,Image:openshift/origin:v3.10.0,Command:[/bin/bash -c],Args:[#!/bin/bash
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: ],WorkingDir:,Ports:[],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8444,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,VolumeDevices:[],} in pod master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.521506   27452 kuberuntime_container.go:101] Generating ref for container controllers: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-controllers-ip-172-18-15-75.ec2.internal", UID:"1d09845b8ef487606708b4edca6f4bf5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{controllers}"}
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.521572   27452 kubelet_pods.go:174] container: kube-system/master-controllers-ip-172-18-15-75.ec2.internal/controllers podIP: "172.18.15.75" creating hosts mount: true
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.521592   27452 kubelet_pods.go:250] Pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)" container "controllers" mount "master-config" has propagation "PROPAGATION_PRIVATE"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.521629   27452 kubelet_pods.go:250] Pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)" container "controllers" mount "master-cloud-provider" has propagation "PROPAGATION_PRIVATE"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.521905   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-controllers-ip-172-18-15-75.ec2.internal", UID:"1d09845b8ef487606708b4edca6f4bf5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{controllers}"}): type: 'Normal' reason: 'Pulled' Container image "openshift/origin:v3.10.0" already present on machine
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.525622   27452 generic.go:380] PLEG: Write status for master-controllers-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"1d09845b8ef487606708b4edca6f4bf5", Name:"master-controllers-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4210bed20)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc42013a8c0), (*runtime.PodSandboxStatus)(0xc42013b590)}} (err: <nil>)
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.525683   27452 kubelet.go:1882] SyncLoop (PLEG): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", event: &pleg.PodLifecycleEvent{ID:"1d09845b8ef487606708b4edca6f4bf5", Type:"ContainerDied", Data:"6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e"}
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.525720   27452 kubelet_pods.go:1363] Generating status for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:23.525779   27452 pod_container_deletor.go:77] Container "6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e" not found in pod's containers
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.527106   27452 docker_service.go:441] Setting cgroup parent to: "kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice"
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.540993   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:23.542305   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.542993   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:23.545717   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:23.545995   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:23 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:23.546977   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:24.439059   27452 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.439221   27452 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:24.439242   27452 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.485496   27452 docker_sandbox.go:658] Will attempt to re-write config file /var/lib/docker/containers/e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17/resolv.conf with:
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: [nameserver 172.18.15.75 search cluster.local ec2.internal]
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.485799   27452 kuberuntime_manager.go:655] Created PodSandbox "e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17" for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.485285   27452 factory.go:112] Using factory "docker" for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17.scope"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.489398   27452 kuberuntime_manager.go:725] Creating container &Container{Name:api,Image:openshift/origin:v3.10.0,Command:[/bin/bash -c],Args:[#!/bin/bash
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master api --config=/etc/origin/master/master-config.yaml
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: ],WorkingDir:,Ports:[],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>} {master-data false /var/lib/origin/  <nil>}],LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,VolumeDevices:[],} in pod master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.489951   27452 manager.go:970] Added container: "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17.scope" (aliases: [k8s_POD_master-api-ip-172-18-15-75.ec2.internal_kube-system_72f8e7515178a553fbac43fd4098194e_1 e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17], namespace: "docker")
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.490262   27452 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17.scope 2018-03-12 05:28:24.308689341 +0000 UTC containerCreation {<nil>}}
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.490305   27452 container.go:448] Start housekeeping for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17.scope"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.525789   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.530880   27452 generic.go:147] GenericPLEG: 72f8e7515178a553fbac43fd4098194e/e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17: non-existent -> running
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.530911   27452 generic.go:147] GenericPLEG: 1d09845b8ef487606708b4edca6f4bf5/6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e: exited -> running
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.533091   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17" "42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1"] for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.542513   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:24.543931   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.545944   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:24.546784   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.547471   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:24.548254   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.774600   27452 kuberuntime_container.go:101] Generating ref for container api: &v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.774687   27452 kubelet_pods.go:174] container: kube-system/master-api-ip-172-18-15-75.ec2.internal/api podIP: "172.18.15.75" creating hosts mount: true
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.774707   27452 kubelet_pods.go:250] Pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)" container "api" mount "master-config" has propagation "PROPAGATION_PRIVATE"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.774721   27452 kubelet_pods.go:250] Pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)" container "api" mount "master-cloud-provider" has propagation "PROPAGATION_PRIVATE"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.774731   27452 kubelet_pods.go:250] Pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)" container "api" mount "master-data" has propagation "PROPAGATION_PRIVATE"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.774984   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}): type: 'Normal' reason: 'Pulled' Container image "openshift/origin:v3.10.0" already present on machine
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.905795   27452 docker_service.go:441] Setting cgroup parent to: "kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice"
Mar 12 05:28:24 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:24.911189   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-controllers-ip-172-18-15-75.ec2.internal", UID:"1d09845b8ef487606708b4edca6f4bf5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{controllers}"}): type: 'Normal' reason: 'Created' Created container
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.196398   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.196448   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.364249   27452 generic.go:380] PLEG: Write status for master-api-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"72f8e7515178a553fbac43fd4098194e", Name:"master-api-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc420695420)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421b24140), (*runtime.PodSandboxStatus)(0xc4219212c0)}} (err: <nil>)
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.544122   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.545553   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.546975   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.547843   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.548682   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.549588   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.565149   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e" "6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829"] for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.609475   27452 factory.go:112] Using factory "docker" for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611.scope"
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.610416   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-controllers-ip-172-18-15-75.ec2.internal", UID:"1d09845b8ef487606708b4edca6f4bf5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{controllers}"}): type: 'Normal' reason: 'Started' Started container
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.611792   27452 kubelet.go:1882] SyncLoop (PLEG): "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", event: &pleg.PodLifecycleEvent{ID:"72f8e7515178a553fbac43fd4098194e", Type:"ContainerStarted", Data:"e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17"}
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.615928   27452 manager.go:970] Added container: "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611.scope" (aliases: [k8s_controllers_master-controllers-ip-172-18-15-75.ec2.internal_kube-system_1d09845b8ef487606708b4edca6f4bf5_1 9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611], namespace: "docker")
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.616177   27452 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611.scope 2018-03-12 05:28:25.401679478 +0000 UTC containerCreation {<nil>}}
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:25.616221   27452 container.go:448] Start housekeeping for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d09845b8ef487606708b4edca6f4bf5.slice/docker-9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611.scope"
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.710100   27452 kubelet_node_status.go:406] Error updating node status, will retry: error getting node "ip-172-18-15-75.ec2.internal": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes/ip-172-18-15-75.ec2.internal?resourceVersion=0&timeout=10s: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.711269   27452 kubelet_node_status.go:406] Error updating node status, will retry: error getting node "ip-172-18-15-75.ec2.internal": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes/ip-172-18-15-75.ec2.internal?timeout=10s: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.711936   27452 kubelet_node_status.go:406] Error updating node status, will retry: error getting node "ip-172-18-15-75.ec2.internal": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes/ip-172-18-15-75.ec2.internal?timeout=10s: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.713624   27452 kubelet_node_status.go:406] Error updating node status, will retry: error getting node "ip-172-18-15-75.ec2.internal": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes/ip-172-18-15-75.ec2.internal?timeout=10s: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.720290   27452 kubelet_node_status.go:406] Error updating node status, will retry: error getting node "ip-172-18-15-75.ec2.internal": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes/ip-172-18-15-75.ec2.internal?timeout=10s: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:25 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:25.720303   27452 kubelet_node_status.go:398] Unable to update node status: update node status exceeds retry count
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.328183   27452 generic.go:380] PLEG: Write status for master-controllers-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"1d09845b8ef487606708b4edca6f4bf5", Name:"master-controllers-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc4206957a0), (*container.ContainerStatus)(0xc420695880)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421c860a0), (*runtime.PodSandboxStatus)(0xc421d6e3c0)}} (err: <nil>)
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.328246   27452 kubelet.go:1882] SyncLoop (PLEG): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", event: &pleg.PodLifecycleEvent{ID:"1d09845b8ef487606708b4edca6f4bf5", Type:"ContainerStarted", Data:"6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e"}
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.329511   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}): type: 'Normal' reason: 'Created' Created container
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.545671   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:26.546926   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.548052   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:26.548926   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.549720   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:26.550654   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.616287   27452 factory.go:112] Using factory "docker" for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807.scope"
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.620452   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}): type: 'Normal' reason: 'Started' Started container
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.621220   27452 manager.go:970] Added container: "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807.scope" (aliases: [k8s_api_master-api-ip-172-18-15-75.ec2.internal_kube-system_72f8e7515178a553fbac43fd4098194e_1 45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807], namespace: "docker")
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.621452   27452 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807.scope 2018-03-12 05:28:26.542669181 +0000 UTC containerCreation {<nil>}}
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.621497   27452 container.go:448] Start housekeeping for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72f8e7515178a553fbac43fd4098194e.slice/docker-45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807.scope"
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.768095   27452 prober.go:184] TCP-Probe Host: 172.18.15.75, Port: 2379, Timeout: 1s
Mar 12 05:28:26 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:26.768606   27452 prober.go:118] Liveness probe for "master-etcd-ip-172-18-15-75.ec2.internal_kube-system(39915aa1d5d31a51c82e89d70a4fb919):etcd" succeeded
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.196022   27452 status_manager.go:419] Static pod "39915aa1d5d31a51c82e89d70a4fb919" (master-etcd-ip-172-18-15-75.ec2.internal/kube-system) does not have a corresponding mirror pod; skipping
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.196056   27452 status_manager.go:438] Status Manager: syncPod in syncbatch. pod UID: "72f8e7515178a553fbac43fd4098194e"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.196403   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.196442   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:27.200201   27452 status_manager.go:459] Failed to get status for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-18-15-75.ec2.internal: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.200226   27452 status_manager.go:438] Status Manager: syncPod in syncbatch. pod UID: "1d09845b8ef487606708b4edca6f4bf5"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:27.201451   27452 status_manager.go:459] Failed to get status for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-18-15-75.ec2.internal: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.328446   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.333792   27452 generic.go:147] GenericPLEG: 72f8e7515178a553fbac43fd4098194e/45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807: non-existent -> running
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.333835   27452 generic.go:147] GenericPLEG: 1d09845b8ef487606708b4edca6f4bf5/9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611: non-existent -> running
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.336028   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17" "42d966e51525475a5cc97f5df8f23761ab7d2f848fe26adf5e0ed3abc6500be1"] for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.347508   27452 generic.go:380] PLEG: Write status for master-api-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"72f8e7515178a553fbac43fd4098194e", Name:"master-api-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42142afc0), (*container.ContainerStatus)(0xc42142b0a0)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc42133b220), (*runtime.PodSandboxStatus)(0xc421c9a460)}} (err: <nil>)
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.347574   27452 kubelet_pods.go:1363] Generating status for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.347617   27452 kubelet.go:1882] SyncLoop (PLEG): "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", event: &pleg.PodLifecycleEvent{ID:"72f8e7515178a553fbac43fd4098194e", Type:"ContainerStarted", Data:"45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807"}
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.347766   27452 status_manager.go:367] Status Manager: adding pod: "72f8e7515178a553fbac43fd4098194e", with status: ('\x03', {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:27:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  }]   172.18.15.75 172.18.15.75 2018-03-12 05:26:12 +0000 UTC [] [{api {nil &ContainerStateRunning{StartedAt:2018-03-12 05:28:26 +0000 UTC,} nil} {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:36 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71,}} true 1 docker.io/openshift/origin:v3.10.0 docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 docker://45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807}] BestEffort}) to podStatusChannel
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.347980   27452 status_manager.go:146] Status Manager: syncing pod: "72f8e7515178a553fbac43fd4098194e", with status: (3, {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:27:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  }]   172.18.15.75 172.18.15.75 2018-03-12 05:26:12 +0000 UTC [] [{api {nil &ContainerStateRunning{StartedAt:2018-03-12 05:28:26 +0000 UTC,} nil} {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:36 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71,}} true 1 docker.io/openshift/origin:v3.10.0 docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 docker://45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807}] BestEffort}) from podStatusChannel
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.348054   27452 volume_manager.go:343] Waiting for volumes to attach and mount for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:27.349048   27452 status_manager.go:459] Failed to get status for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-18-15-75.ec2.internal: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.352049   27452 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e" "6bde4d4ac5b19e0ff27f23ddbf840d1599fda87f2c279e3c21033558691fa829"] for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.363278   27452 generic.go:380] PLEG: Write status for master-controllers-ip-172-18-15-75.ec2.internal/kube-system: &container.PodStatus{ID:"1d09845b8ef487606708b4edca6f4bf5", Name:"master-controllers-ip-172-18-15-75.ec2.internal", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42142b340), (*container.ContainerStatus)(0xc42142b420)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421c9b040), (*runtime.PodSandboxStatus)(0xc42133b810)}} (err: <nil>)
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.363333   27452 kubelet.go:1882] SyncLoop (PLEG): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", event: &pleg.PodLifecycleEvent{ID:"1d09845b8ef487606708b4edca6f4bf5", Type:"ContainerStarted", Data:"9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611"}
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.363341   27452 kubelet_pods.go:1363] Generating status for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.363483   27452 status_manager.go:367] Status Manager: adding pod: "1d09845b8ef487606708b4edca6f4bf5", with status: ('\x03', {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:27:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  }]   172.18.15.75 172.18.15.75 2018-03-12 05:26:12 +0000 UTC [] [{controllers {nil &ContainerStateRunning{StartedAt:2018-03-12 05:28:25 +0000 UTC,} nil} {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:35 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47,}} true 1 docker.io/openshift/origin:v3.10.0 docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 docker://9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611}] BestEffort}) to podStatusChannel
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.363676   27452 volume_manager.go:343] Waiting for volumes to attach and mount for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.363697   27452 status_manager.go:146] Status Manager: syncing pod: "1d09845b8ef487606708b4edca6f4bf5", with status: (3, {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:27:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12 +0000 UTC  }]   172.18.15.75 172.18.15.75 2018-03-12 05:26:12 +0000 UTC [] [{controllers {nil &ContainerStateRunning{StartedAt:2018-03-12 05:28:25 +0000 UTC,} nil} {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:35 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47,}} true 1 docker.io/openshift/origin:v3.10.0 docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 docker://9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611}] BestEffort}) from podStatusChannel
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:27.364519   27452 status_manager.go:459] Failed to get status for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)": Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-18-15-75.ec2.internal: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.407501   27452 desired_state_of_world_populator.go:299] Added volume "master-config" (volSpec="master-config") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.407540   27452 desired_state_of_world_populator.go:299] Added volume "master-cloud-provider" (volSpec="master-cloud-provider") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.407554   27452 desired_state_of_world_populator.go:299] Added volume "master-data" (volSpec="master-data") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.407590   27452 desired_state_of_world_populator.go:299] Added volume "master-config" (volSpec="master-config") for pod "1d09845b8ef487606708b4edca6f4bf5" to desired state.
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.407623   27452 desired_state_of_world_populator.go:299] Added volume "master-cloud-provider" (volSpec="master-cloud-provider") for pod "1d09845b8ef487606708b4edca6f4bf5" to desired state.
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.547123   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:27.548396   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475: Failed to list *v1.Node: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.549119   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:27.549840   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-18-15-75.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.550827   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:27.551543   27452 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466: Failed to list *v1.Service: Get https://ip-172-18-15-75.ec2.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.18.15.75:8443: getsockopt: connection refused
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.648275   27452 volume_manager.go:371] All volumes are attached and mounted for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.648315   27452 kuberuntime_manager.go:442] Syncing Pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:master-api-ip-172-18-15-75.ec2.internal,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/master-api-ip-172-18-15-75.ec2.internal,UID:72f8e7515178a553fbac43fd4098194e,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{openshift.io/component: api,openshift.io/control-plane: true,},Annotations:map[string]string{kubernetes.io/config.hash: 72f8e7515178a553fbac43fd4098194e,kubernetes.io/config.seen: 2018-03-12T05:26:07.044025775Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-data {&HostPathVolumeSource{Path:/var/lib/origin,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{api openshift/origin:v3.10.0 [/bin/bash -c] [#!/bin/bash
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master api --config=/etc/origin/master/master-config.yaml
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: ]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>} {master-data false /var/lib/origin/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:ip-172-18-15-75.ec2.internal,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12.201401127 +0000 UTC m=+5.392870154  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.648735   27452 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17 Attempt:1 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.663898   27452 volume_manager.go:371] All volumes are attached and mounted for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.663932   27452 kuberuntime_manager.go:442] Syncing Pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:master-controllers-ip-172-18-15-75.ec2.internal,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-18-15-75.ec2.internal,UID:1d09845b8ef487606708b4edca6f4bf5,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{openshift.io/component: controllers,openshift.io/control-plane: true,},Annotations:map[string]string{kubernetes.io/config.hash: 1d09845b8ef487606708b4edca6f4bf5,kubernetes.io/config.seen: 2018-03-12T05:26:07.044046856Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{controllers openshift/origin:v3.10.0 [/bin/bash -c] [#!/bin/bash
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: ]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8444,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:ip-172-18-15-75.ec2.internal,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12.209688919 +0000 UTC m=+5.401157891  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
Mar 12 05:28:27 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:27.664297   27452 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e Attempt:1 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.363532   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.376910   27452 kubelet_pods.go:1363] Generating status for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.378072   27452 status_manager.go:353] Ignoring same status for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:27:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.18.15.75 PodIP:172.18.15.75 StartTime:2018-03-12 05:26:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:controllers State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-03-12 05:28:25 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:35 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47,}} Ready:true RestartCount:1 Image:docker.io/openshift/origin:v3.10.0 ImageID:docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 ContainerID:docker://9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611}] QOSClass:BestEffort}
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.378356   27452 volume_manager.go:343] Waiting for volumes to attach and mount for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.378393   27452 kubelet_pods.go:1363] Generating status for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.378515   27452 status_manager.go:353] Ignoring same status for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:27:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.18.15.75 PodIP:172.18.15.75 StartTime:2018-03-12 05:26:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:api State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-03-12 05:28:26 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:36 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71,}} Ready:true RestartCount:1 Image:docker.io/openshift/origin:v3.10.0 ImageID:docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 ContainerID:docker://45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807}] QOSClass:BestEffort}
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.378751   27452 volume_manager.go:343] Waiting for volumes to attach and mount for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.409721   27452 desired_state_of_world_populator.go:299] Added volume "master-config" (volSpec="master-config") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.409768   27452 desired_state_of_world_populator.go:299] Added volume "master-cloud-provider" (volSpec="master-cloud-provider") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.409793   27452 desired_state_of_world_populator.go:299] Added volume "master-data" (volSpec="master-data") for pod "72f8e7515178a553fbac43fd4098194e" to desired state.
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.409834   27452 desired_state_of_world_populator.go:299] Added volume "master-config" (volSpec="master-config") for pod "1d09845b8ef487606708b4edca6f4bf5" to desired state.
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.409855   27452 desired_state_of_world_populator.go:299] Added volume "master-cloud-provider" (volSpec="master-cloud-provider") for pod "1d09845b8ef487606708b4edca6f4bf5" to desired state.
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.516248   27452 prober.go:165] HTTP-Probe Host: https://172.18.15.75, Port: 8444, Path: healthz
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.516294   27452 prober.go:168] HTTP-Probe Headers: map[]
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.524641   27452 http.go:96] Probe succeeded for https://172.18.15.75:8444/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Mon, 12 Mar 2018 05:28:28 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4216f43e0 2 [] true false map[] 0xc4224bfc00 0xc4213fea50}
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.524697   27452 prober.go:118] Liveness probe for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5):controllers" succeeded
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.548577   27452 reflector.go:240] Listing and watching *v1.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:475
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.550028   27452 reflector.go:240] Listing and watching *v1.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.551647   27452 reflector.go:240] Listing and watching *v1.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:466
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.678612   27452 volume_manager.go:371] All volumes are attached and mounted for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.679133   27452 kuberuntime_manager.go:442] Syncing Pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:master-controllers-ip-172-18-15-75.ec2.internal,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-18-15-75.ec2.internal,UID:1d09845b8ef487606708b4edca6f4bf5,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{openshift.io/component: controllers,openshift.io/control-plane: true,},Annotations:map[string]string{kubernetes.io/config.hash: 1d09845b8ef487606708b4edca6f4bf5,kubernetes.io/config.seen: 2018-03-12T05:26:07.044046856Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{controllers openshift/origin:v3.10.0 [/bin/bash -c] [#!/bin/bash
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: ]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8444,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:ip-172-18-15-75.ec2.internal,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12.209688919 +0000 UTC m=+5.401157891  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.679517   27452 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:6386bb2d98afae529f76d87e9fa946796a32db912d7304526ccaf1f59d55254e Attempt:1 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.679729   27452 volume_manager.go:371] All volumes are attached and mounted for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.679755   27452 kuberuntime_manager.go:442] Syncing Pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:master-api-ip-172-18-15-75.ec2.internal,GenerateName:,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/master-api-ip-172-18-15-75.ec2.internal,UID:72f8e7515178a553fbac43fd4098194e,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{openshift.io/component: api,openshift.io/control-plane: true,},Annotations:map[string]string{kubernetes.io/config.hash: 72f8e7515178a553fbac43fd4098194e,kubernetes.io/config.seen: 2018-03-12T05:26:07.044025775Z,kubernetes.io/config.source: file,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{master-config {HostPathVolumeSource{Path:/etc/origin/master/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-cloud-provider {&HostPathVolumeSource{Path:/etc/origin/cloudprovider,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {master-data {&HostPathVolumeSource{Path:/var/lib/origin,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{api openshift/origin:v3.10.0 [/bin/bash -c] [#!/bin/bash
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: set -euo pipefail
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: if [[ -f /etc/origin/master/master.env ]]; then
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: set -o allexport
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: source /etc/origin/master/master.env
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: fi
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: exec openshift start master api --config=/etc/origin/master/master-config.yaml
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: ]  [] [] [] {map[] map[]} [{master-config false /etc/origin/master/  <nil>} {master-cloud-provider false /etc/origin/cloudprovider/  <nil>} {master-data false /var/lib/origin/  <nil>}] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:ip-172-18-15-75.ec2.internal,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoExecute <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-12 05:26:12.201401127 +0000 UTC m=+5.392870154  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:,InitContainerStatuses:[],},}
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.680132   27452 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:e4c269e68b5a43182f03756bc995a651242b8865a5ded25a19cfdce4c4b52b17 Attempt:1 NextInitContainerToStart:nil ContainersToStart:[] ContainersToKill:map[]} for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e)"
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.713050   27452 helpers.go:125] Unable to get network stats from pid 28924: couldn't read network stats: failure opening /proc/28924/net/dev: open /proc/28924/net/dev: no such file or directory
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.796360   27452 prober.go:165] HTTP-Probe Host: https://172.18.15.75, Port: 8443, Path: healthz
Mar 12 05:28:28 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:28.796392   27452 prober.go:168] HTTP-Probe Headers: map[]
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.196382   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.196498   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.377066   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:29.441079   27452 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.441207   27452 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:29.441224   27452 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.511362   27452 eviction_manager.go:221] eviction manager: synchronize housekeeping
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:29.542896   27452 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:29.544189   27452 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.544939   27452 helpers.go:827] eviction manager: observations: signal=memory.available, available: 12407760Ki, capacity: 16266564Ki, time: 2018-03-12 05:28:18.823515636 +0000 UTC m=+132.014984685
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.544978   27452 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 57865076Ki, capacity: 78630892Ki, time: 2018-03-12 05:28:18.823515636 +0000 UTC m=+132.014984685
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.544994   27452 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 38838212, capacity: 39320512, time: 2018-03-12 05:28:18.823515636 +0000 UTC m=+132.014984685
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.545011   27452 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 8490496Ki, capacity: 17080Mi, time: 2018-03-12 05:28:18.823515636 +0000 UTC m=+132.014984685
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.545025   27452 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 16125200Ki, capacity: 16266564Ki
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.545052   27452 eviction_manager.go:325] eviction manager: no resources are starved
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.796626   27452 prober.go:111] Liveness probe for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e):api" failed (failure): Get https://172.18.15.75:8443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 12 05:28:29 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:29.797258   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}): type: 'Warning' reason: 'Unhealthy' Liveness probe failed: Get https://172.18.15.75:8443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 12 05:28:30 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:30.389706   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:31 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:31.196390   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:31 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:31.196441   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:31 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:31.399053   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:32 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:32.404673   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:32 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:32.797762   27452 helpers.go:125] Unable to get network stats from pid 28808: couldn't read network stats: failure opening /proc/28808/net/dev: open /proc/28808/net/dev: no such file or directory
Mar 12 05:28:33 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:33.026790   27452 helpers.go:125] Unable to get network stats from pid 27772: couldn't read network stats: failure opening /proc/27772/net/dev: open /proc/27772/net/dev: no such file or directory
Mar 12 05:28:33 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:33.196362   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:33 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:33.196405   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:33 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:33.411791   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:34 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:34.417902   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:34 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:34.442853   27452 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 05:28:34 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:34.443453   27452 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:34 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:34.443484   27452 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:35 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:35.196370   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:35 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:35.196414   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:35 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:35.424070   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:36 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:36.212892   27452 config.go:297] Setting pods for source api
Mar 12 05:28:36 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:36.430543   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:36 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:36.768090   27452 prober.go:184] TCP-Probe Host: 172.18.15.75, Port: 2379, Timeout: 1s
Mar 12 05:28:36 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:36.768361   27452 prober.go:118] Liveness probe for "master-etcd-ip-172-18-15-75.ec2.internal_kube-system(39915aa1d5d31a51c82e89d70a4fb919):etcd" succeeded
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.196015   27452 status_manager.go:419] Static pod "39915aa1d5d31a51c82e89d70a4fb919" (master-etcd-ip-172-18-15-75.ec2.internal/kube-system) does not have a corresponding mirror pod; skipping
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.196048   27452 status_manager.go:438] Status Manager: syncPod in syncbatch. pod UID: "72f8e7515178a553fbac43fd4098194e"
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.197084   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.197120   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.432933   27452 config.go:297] Setting pods for source api
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.433193   27452 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "master-api-ip-172-18-15-75.ec2.internal_kube-system(2cdd3f47-25b6-11e8-884d-0ed4495c5ab4)"
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.435477   27452 status_manager.go:479] Status for pod "master-api-ip-172-18-15-75.ec2.internal_kube-system(2cdd3f47-25b6-11e8-884d-0ed4495c5ab4)" updated successfully: (3, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:27:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.18.15.75 PodIP:172.18.15.75 StartTime:2018-03-12 05:26:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:api State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-03-12 05:28:26 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:36 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://423cd20aa4a1c837cf3672fddc055734657a3c13abfb7e18b93437332c8d1f71,}} Ready:true RestartCount:1 Image:docker.io/openshift/origin:v3.10.0 ImageID:docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 ContainerID:docker://45d523ce603b362c11755b9b31604b15ef3643bc5ee3753bcb738cfc44ba6807}] QOSClass:BestEffort})
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.435576   27452 status_manager.go:438] Status Manager: syncPod in syncbatch. pod UID: "1d09845b8ef487606708b4edca6f4bf5"
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.436537   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.448091   27452 status_manager.go:479] Status for pod "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(2cdabc48-25b6-11e8-884d-0ed4495c5ab4)" updated successfully: (3, {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:27:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-12 05:26:12 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.18.15.75 PodIP:172.18.15.75 StartTime:2018-03-12 05:26:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:controllers State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2018-03-12 05:28:25 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2018-03-12 05:27:35 +0000 UTC,FinishedAt:2018-03-12 05:28:20 +0000 UTC,ContainerID:docker://94418638d998da3239264c653bed87140dae171978b1971a576e8fc1daa58f47,}} Ready:true RestartCount:1 Image:docker.io/openshift/origin:v3.10.0 ImageID:docker-pullable://docker.io/openshift/origin@sha256:a0d0b22425acdb4601fcf4586abb042415e7c5d741535fcfabfa844d788ba2b3 ContainerID:docker://9420285b8c8b7801b74900782818f93587a768d8623d2249a2e6f2747719e611}] QOSClass:BestEffort})
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.448241   27452 config.go:297] Setting pods for source api
Mar 12 05:28:37 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:37.448555   27452 kubelet.go:1850] SyncLoop (RECONCILE, "api"): "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(2cdabc48-25b6-11e8-884d-0ed4495c5ab4)"
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.446879   27452 generic.go:183] GenericPLEG: Relisting
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.516204   27452 prober.go:165] HTTP-Probe Host: https://172.18.15.75, Port: 8444, Path: healthz
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.516237   27452 prober.go:168] HTTP-Probe Headers: map[]
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.522062   27452 http.go:96] Probe succeeded for https://172.18.15.75:8444/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Date:[Mon, 12 Mar 2018 05:28:38 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] 0xc4221e2c20 2 [] true false map[] 0xc4213a8d00 0xc420ebf080}
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.522101   27452 prober.go:118] Liveness probe for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5):controllers" succeeded
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.725474   27452 fs.go:406] got devicemapper fs capacity stats: capacity: 17909678080 free: 8600420352 available: 8600420352:
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.796350   27452 prober.go:165] HTTP-Probe Host: https://172.18.15.75, Port: 8443, Path: healthz
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.796386   27452 prober.go:168] HTTP-Probe Headers: map[]
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.798760   27452 http.go:99] Probe failed for https://172.18.15.75:8443/healthz with request headers map[User-Agent:[kube-probe/.]], response body: [+]ping ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]etcd ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/start-apiextensions-informers ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/start-apiextensions-controllers ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/security.openshift.io-bootstrapscc ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/bootstrap-controller ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/ca-registration ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/start-kube-aggregator-informers ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/apiservice-registration-controller ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/apiservice-status-available-controller ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/apiservice-openapi-controller ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/kube-apiserver-autoregistration ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]autoregister-completion ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/template.openshift.io-sharednamespace ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/authorization.openshift.io-ensureSARolesDefault ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/authorization.openshift.io-ensureopenshift-infra ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/openshift.io-AdmissionInit ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/openshift.io-StartInformers ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: [+]poststarthook/oauth.openshift.io-StartOAuthClientsBootstrapping ok
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: healthz check failed
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.798794   27452 prober.go:111] Liveness probe for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e):api" failed (failure): HTTP probe failed with statuscode: 500
Mar 12 05:28:38 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:38.798842   27452 server.go:286] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"master-api-ip-172-18-15-75.ec2.internal", UID:"72f8e7515178a553fbac43fd4098194e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{api}"}): type: 'Warning' reason: 'Unhealthy' Liveness probe failed: HTTP probe failed with statuscode: 500
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.196375   27452 config.go:99] Looking for [api file], have seen map[file:{} api:{}]
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.196423   27452 kubelet.go:1924] SyncLoop (housekeeping)
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:39.445440   27452 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.445633   27452 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:39.445661   27452 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.545258   27452 eviction_manager.go:221] eviction manager: synchronize housekeeping
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:39.567355   27452 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.574840   27452 helpers.go:827] eviction manager: observations: signal=memory.available, available: 12590232Ki, capacity: 16266564Ki, time: 2018-03-12 05:28:38.720769075 +0000 UTC m=+151.912238128
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.574879   27452 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 57858000Ki, capacity: 78630892Ki, time: 2018-03-12 05:28:38.720769075 +0000 UTC m=+151.912238128
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.574896   27452 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 38838072, capacity: 39320512, time: 2018-03-12 05:28:38.720769075 +0000 UTC m=+151.912238128
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.574911   27452 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 8202Mi, capacity: 17080Mi, time: 2018-03-12 05:28:38.720769075 +0000 UTC m=+151.912238128
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.574924   27452 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 16125200Ki, capacity: 16266564Ki
Mar 12 05:28:39 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:39.574950   27452 eviction_manager.go:325] eviction manager: no resources are starved
Mar 12 05:28:40 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:40.385950   27452 helpers.go:125] Unable to get network stats from pid 27848: couldn't read network stats: failure opening /proc/27848/net/dev: open /proc/27848/net/dev: no such file or directory
Mar 12 05:28:40 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:40.408849   27452 helpers.go:125] Unable to get network stats from pid 28924: couldn't read network stats: failure opening /proc/28924/net/dev: open /proc/28924/net/dev: no such file or directory
Mar 12 05:28:44 ip-172-18-15-75.ec2.internal origin-node[27452]: W0312 05:28:44.447317   27452 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 05:28:44 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:44.447488   27452 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:44 ip-172-18-15-75.ec2.internal origin-node[27452]: E0312 05:28:44.447518   27452 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 05:28:46 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:46.768096   27452 prober.go:184] TCP-Probe Host: 172.18.15.75, Port: 2379, Timeout: 1s
Mar 12 05:28:46 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:46.768397   27452 prober.go:118] Liveness probe for "master-etcd-ip-172-18-15-75.ec2.internal_kube-system(39915aa1d5d31a51c82e89d70a4fb919):etcd" succeeded
Mar 12 05:28:47 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:47.196167   27452 status_manager.go:419] Static pod "39915aa1d5d31a51c82e89d70a4fb919" (master-etcd-ip-172-18-15-75.ec2.internal/kube-system) does not have a corresponding mirror pod; skipping
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.281747   27452 helpers.go:125] Unable to get network stats from pid 27772: couldn't read network stats: failure opening /proc/27772/net/dev: open /proc/27772/net/dev: no such file or directory
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.516258   27452 prober.go:165] HTTP-Probe Host: https://172.18.15.75, Port: 8444, Path: healthz
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.516300   27452 prober.go:168] HTTP-Probe Headers: map[]
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.525064   27452 http.go:96] Probe succeeded for https://172.18.15.75:8444/healthz, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/plain; charset=utf-8] Date:[Mon, 12 Mar 2018 05:28:48 GMT] Content-Length:[2]] 0xc421a97240 2 [] true false map[] 0xc420b2e800 0xc421109ef0}
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.525102   27452 prober.go:118] Liveness probe for "master-controllers-ip-172-18-15-75.ec2.internal_kube-system(1d09845b8ef487606708b4edca6f4bf5):controllers" succeeded
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.796351   27452 prober.go:165] HTTP-Probe Host: https://172.18.15.75, Port: 8443, Path: healthz
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.796388   27452 prober.go:168] HTTP-Probe Headers: map[]
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.804358   27452 http.go:96] Probe succeeded for https://172.18.15.75:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Audit-Id:[159a7ac9-ff0c-4baf-8f4c-c46206990df1] Cache-Control:[no-store] Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Mon, 12 Mar 2018 05:28:48 GMT]] 0xc421a97780 2 [] false false map[] 0xc421075200 0xc42085bc30}
Mar 12 05:28:48 ip-172-18-15-75.ec2.internal origin-node[27452]: I0312 05:28:48.804401   27452 prober.go:118] Liveness probe for "master-api-ip-172-18-15-75.ec2.internal_kube-system(72f8e7515178a553fbac43fd4098194e):api" succeeded

@smarterclayton
Contributor Author

This is what is causing the install job to choke - we poll waiting for the API to come up, then we continue on initializing things. Once the kubelet is able to create the mirror pod, the kubelet gets a new sync event, and it looks like that triggers a restart of the api container, which is almost exactly when the first CLI call is made and fails.
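
A minimal sketch of a more conservative wait (hypothetical, not the playbook's actual task; the endpoint and the "ok" response body are taken from the logs above) would require several consecutive healthz successes, so that a single 200 just before the mirror-pod restart can't let the install proceed:

# Hedged sketch: require 5 consecutive healthz successes before the
# first CLI call; any failure resets the counter.
ok=0
while [ "$ok" -lt 5 ]; do
  if curl -ks https://172.18.15.75:8443/healthz | grep -q '^ok$'; then
    ok=$((ok + 1))
  else
    ok=0
  fi
  sleep 2
done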

@smarterclayton
Contributor Author

The job fails right around 05:28:23, which is while the API is stopped.

@michaelgugino
Contributor

What happened to the atomic bot? We're going to run into lots of trouble without that bot.

@michaelgugino michaelgugino left a comment
Contributor

This PR is missing the logic we implemented for osm_default_node_selector; it should match what we implemented in 3.9.
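
For reference, in 3.9 that inventory variable ends up as the cluster-wide default node selector in the master config, so once the logic is reimplemented a quick check (the key name is the usual 3.x location, stated here as an assumption) would be:

# Did osm_default_node_selector land in the rendered master config?
grep -n defaultNodeSelector /etc/origin/master/master-config.yaml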

@michaelgugino
Contributor

    "stdout": "-- Logs begin at Mon 2018-03-12 16:29:00 UTC, end at Mon 2018-03-12 16:36:41 UTC. --\nMar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Failed to load environment files: No such file or directory\nMar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Failed to run 'start-pre' task: No such file or directory\nMar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: Failed to start origin-node.service.\nMar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Unit entered failed state.\nMar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Failed with result 'resources'.",
    "stdout_lines": [
        "-- Logs begin at Mon 2018-03-12 16:29:00 UTC, end at Mon 2018-03-12 16:36:41 UTC. --",
        "Mar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Failed to load environment files: No such file or directory",
        "Mar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Failed to run 'start-pre' task: No such file or directory",
        "Mar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: Failed to start origin-node.service.",
        "Mar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Unit entered failed state.",
        "Mar 12 16:36:41 ip-172-18-12-1.ec2.internal systemd[1]: origin-node.service: Failed with result 'resources'."
    ]

Fedora Atomic Host, single master, v3.9.0 image tag.
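
The unit error points at a missing EnvironmentFile rather than the binary itself; a hedged triage on the host (paths are the usual origin-node defaults, which is an assumption for this image) would be:

# Which environment files does the unit declare, and do they exist?
systemctl cat origin-node.service | grep -i EnvironmentFile
ls -l /etc/sysconfig/origin-node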

@michaelgugino
Contributor

Task: openshift_control_plane : create service account kubeconfig with csr rights

fails with

The full traceback is:
  File "/tmp/ansible_9dfpdoj9/ansible_modlib.zip/ansible/module_utils/basic.py", line 2736, in run_command
    cmd = subprocess.Popen(args, **kwargs)
  File "/usr/lib64/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/usr/lib64/python3.6/subprocess.py", line 1344, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)

fatal: [ec2-54-175-14-73.compute-1.amazonaws.com]: FAILED! => {
    "attempts": 24,
    "changed": false,
    "cmd": "oc serviceaccounts create-kubeconfig node-bootstrapper -n openshift-infra",
    "failed": true,
    "invocation": {
        "module_args": {
            "_raw_params": "oc serviceaccounts create-kubeconfig node-bootstrapper -n openshift-infra",
            "_uses_shell": false,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "msg": "[Errno 2] No such file or directory: 'oc': 'oc'",
    "rc": 2
}

On Fedora Atomic Host, single master.
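
The module failure is simply oc missing from PATH on Atomic Host; the later RHEL Atomic run shows the client invoked as /usr/local/bin/oc, so a hedged local check would be:

# Is a client binary available anywhere the task could find it?
command -v oc
ls -l /usr/local/bin/oc /usr/bin/oc 2>/dev/null
# Re-run the failing command with an explicit path:
/usr/local/bin/oc serviceaccounts create-kubeconfig node-bootstrapper -n openshift-infra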

@michaelgugino
Contributor

openshift_bootstrap_autoapprover : Create auto-approver on cluster

fails for the same reason: the oc command is not found.

@michaelgugino
Contributor

/hold

We need to get the Fedora Atomic bot back online before we continue down this road; otherwise we're going to accumulate a lot of drift.

@michaelgugino michaelgugino added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 12, 2018
@vrutkovs
Member

On RHEL Atomic I get:

fatal: [vrutkovs_tmp.T9hg2LFNjB-master-1]: FAILED! => {
    "attempts": 24, 
    "changed": true, 
    "cmd": [
        "/usr/local/bin/oc", 
        "serviceaccounts", 
        "create-kubeconfig", 
        "node-bootstrapper", 
        "-n", 
        "openshift-infra"
    ], 
    "delta": "0:00:00.181017", 
    "end": "2018-03-12 18:57:06.691336", 
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/local/bin/oc serviceaccounts create-kubeconfig node-bootstrapper -n openshift-infra", 
            "_uses_shell": false, 
            "chdir": null, 
            "creates": null, 
            "executable": null, 
            "removes": null, 
            "stdin": null, 
            "warn": true
        }
    }, 
    "msg": "non-zero return code", 
    "rc": 1, 
    "start": "2018-03-12 18:57:06.510319", 
    "stderr": "Unable to connect to the server: x509: certificate signed by unknown authority", 
    "stderr_lines": [
        "Unable to connect to the server: x509: certificate signed by unknown authority"
    ], 
    "stdout": "", 
    "stdout_lines": []
}

It's a similar story with an rpm-based install of Origin 3.9. Maybe Origin 3.10 is required to make it pass?
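
One hedged way to narrow down the x509 failure is to compare the CA the client trusts with what the server actually presents (the admin kubeconfig path is the installer default, and <master-host> is a placeholder):

# Does the admin kubeconfig work where the generated one does not?
oc --config=/etc/origin/master/admin.kubeconfig get nodes
# Which issuer is the API server presenting on 8443?
openssl s_client -connect <master-host>:8443 </dev/null 2>/dev/null | openssl x509 -noout -issuer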

@michaelgugino
Contributor

Seeing the following on Fedora Atomic Host; the origin-node service is not starting:

Mar 12 19:45:58 ip-172-18-10-39.ec2.internal origin-node[26381]: I0312 19:45:58.092754   26462 start_node.go:309] Reading node configuration from /etc/origin/node/node-config.yaml
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal origin-node[26381]: Invalid NodeConfig /etc/origin/node/node-config.yaml
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal origin-node[26381]:   servingInfo.certFile: Invalid value: "/etc/origin/node/server.crt": could not read file: stat /etc/origin/node/server.crt: no such file or directory
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal origin-node[26381]:   servingInfo.keyFile: Invalid value: "/etc/origin/node/server.key": could not read file: stat /etc/origin/node/server.key: no such file or directory
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal origin-node[26381]:   servingInfo.clientCA: Invalid value: "/etc/origin/node/ca.crt": could not read file: stat /etc/origin/node/ca.crt: no such file or directory
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal origin-node[26381]:   masterKubeConfig: Invalid value: "/etc/origin/node/system:node:ip-172-18-10-39.ec2.internal.kubeconfig": could not read file: stat /etc/origin/node/system:node:ip-172-18-10-39.ec2.internal.kubeconfig: no such file or directory
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal systemd[1]: docker-43c6b0e23eaf9e63e9deee0e7fa2a465274d8347b72759225986891fd605d109.scope: Consumed 319ms CPU time
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal oci-systemd-hook[26501]: systemdhook <debug>: 43c6b0e23eaf: Skipping as container command is /usr/local/bin/openshift-node, not init or systemd
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal oci-umount[26502]: umounthook <debug>: 43c6b0e23eaf: only runs in prestart stage, ignoring
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal docker-containerd-current[1028]: time="2018-03-12T19:45:58.179543299Z" level=error msg="containerd: deleting container" error="exit status 1: \"container 43c6b0e23eaf9e63e9deee0e7fa2a465274d8347b72759225986891fd605d109 does not exist\\none or more of the container deletions failed\\n\""
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal dockerd-current[2672]: time="2018-03-12T19:45:58.220459881Z" level=warning msg="43c6b0e23eaf9e63e9deee0e7fa2a465274d8347b72759225986891fd605d109 cleanup: failed to unmount secrets: invalid argument"
Mar 12 19:45:58 ip-172-18-10-39.ec2.internal systemd[1]: origin-node.service: Main process exited, code=exited, status=255/n/a

This may be due to trying to use 3.9.
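
In the bootstrap flow those files are produced through CSR approval rather than copied onto the host, so their absence at first start suggests bootstrapping never ran for this node. Hedged checks:

# Are any bootstrap artifacts present, and are CSRs waiting for approval?
ls -l /etc/origin/node/
oc get csr    # Pending entries would mean the auto-approver isn't running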

@michaelgugino
Contributor

No dice with openshift_image_tag=v3.10.0

[root@ip-172-18-13-148 ~]# oc get nodes
NAME                            STATUS     ROLES     AGE       VERSION
ip-172-18-13-148.ec2.internal   NotReady   <none>    6m        v1.9.1+a0ce1bc657
[root@ip-172-18-13-148 ~]# journalctl -f
-- Logs begin at Mon 2018-03-12 19:14:12 UTC. --
Mar 12 20:11:39 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:11:39.473186   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:11:39 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:11:39.473330   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 20:11:44 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:11:44.474651   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:11:44 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:11:44.474796   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 20:11:49 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:11:49.476303   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:11:49 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:11:49.476460   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 20:11:54 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:11:54.477969   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:11:54 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:11:54.478119   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 20:11:59 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:11:59.479803   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:11:59 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:11:59.479998   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 20:12:04 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:12:04.481382   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:12:04 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:12:04.481530   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 12 20:12:09 ip-172-18-13-148.ec2.internal origin-node[10260]: W0312 20:12:09.482953   10290 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 20:12:09 ip-172-18-13-148.ec2.internal origin-node[10260]: E0312 20:12:09.483111   10290 kubelet.go:2106] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
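
The NotReady node and the CNI warnings are the same symptom: nothing has written a network config into /etc/cni/net.d yet. With openshift-sdn that is done by the SDN pods, so hedged checks (the namespace assumes the 3.10 daemonset layout):

ls /etc/cni/net.d                       # empty until the SDN pod writes a config
oc get pods -n openshift-sdn -o wide    # is the SDN daemonset running on this node?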

@michaelgugino
Contributor

Master services don't come back after reboot.

@vrutkovs
Member

bootstrap-autoapprover-0 is stuck in Pending:

message: '0/1 nodes are available: 1 CheckServiceAffinity, 1 MatchNodeSelector,
      1 NodeNotReady, 1 NodeOutOfDisk.'
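
To see which of those predicates is actually rejecting the pod, the scheduler events and node labels are the place to look (the namespace is an assumption based on where the role creates the workload):

oc describe pod bootstrap-autoapprover-0 -n openshift-infra | sed -n '/^Events/,$p'
oc get nodes --show-labels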

@smarterclayton
Contributor Author

smarterclayton commented Mar 12, 2018 via email

@smarterclayton
Contributor Author

@jlebon can you speak to why the f27 bot is dead?

@smarterclayton
Contributor Author

This PR is missing the logic we implemented for osm_default_node_selector. osm_default_node_selector should match what we implemented in 3.9.

Yeah.

missing 'oc'

was there a system container hack to extract this onto disk somewhere?

@smarterclayton
Contributor Author

Oh. Join wasn't starting the node. Sigh.

@smarterclayton smarterclayton force-pushed the remove_non_static branch 2 times, most recently from 7e74967 to 1c55072 Compare March 29, 2018 19:51
@smarterclayton
Contributor Author

Atomic f27 is now failing because the hostname value reported by Ansible in openshift.node.nodename is not the same as what the kubelet picks (which is cat /proc/sys/kernel/hostname, aka hostname). Ansible is setting it from hostname -f, which is not correct. For bootstrapping mode I'm going to have to change the fact, but that will not be backwards compatible for old users.
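
The mismatch is easy to see on any affected host: the kubelet registers the kernel hostname, while the fact was recording the FQDN.

cat /proc/sys/kernel/hostname   # what the kubelet uses as the Node name
hostname                        # same value
hostname -f                     # what openshift.node.nodename was set from; may append a domain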

@smarterclayton smarterclayton force-pushed the remove_non_static branch 3 times, most recently from 0883e40 to 77758ed Compare March 30, 2018 02:01
@smarterclayton
Contributor Author

F27 atomic passed!!!!!!!!!!1!!!!!

@smarterclayton
Contributor Author

The last set of changes makes Ansible calculate for bootstrapped nodes the same nodename the kubelet uses during bootstrapping (hostname, not hostname -f). That got f27 to pass, and the crio fix is in the origin merge queue. This is ready for final review.

@smarterclayton
Contributor Author

/skip

@smarterclayton
Contributor Author

/test logging

@smarterclayton smarterclayton removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 30, 2018
@smarterclayton
Contributor Author

If you'd prefer, review the most recent changes to openshift facts here instead of waiting on the still-outstanding PRs that may simplify things.

@smarterclayton
Contributor Author

/test crio

@jlebon
Member

jlebon commented Mar 30, 2018

F27 atomic passed!!!!!!!!!!1!!!!!

\o/ Awesome!

@smarterclayton
Contributor Author

Initial upgrade implementation in #7723

@smarterclayton
Contributor Author

/skip logging

@smarterclayton smarterclayton force-pushed the remove_non_static branch 2 times, most recently from cd7dd23 to a97c508 Compare April 2, 2018 05:24
@openshift-ci-robot

openshift-ci-robot commented Apr 2, 2018

@smarterclayton: The following tests failed, say /retest to rerun them all:

Test name | Commit | Details | Rerun command
ci/openshift-jenkins/containerized | e51c757 | link | /test containerized
ci/openshift-jenkins/extended_conformance_install_crio | 1c55072 | link | /test crio

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Change the defaults for node bootstrapping to true, all nodes will bootstrap unless opted out. Remove containerized node artifacts

Remove the openshift_master role - it is dead.
@smarterclayton
Contributor Author

openshift/origin#19190 is what is blocking atomic

@michaelgugino
Contributor

@smarterclayton
Contributor Author

bot, retest this please

@smarterclayton
Contributor Author

This job is now green. Going to stick the label on to unblock the queue based on prior approvals.

@smarterclayton smarterclayton added the lgtm Indicates that a PR is ready to be merged. label Apr 3, 2018
@openshift-merge-robot openshift-merge-robot merged commit 8e2eda9 into openshift:master Apr 3, 2018
Labels
lgtm Indicates that a PR is ready to be merged. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.