
Kubernetes issues: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory #13865


Closed
ganesh3 opened this issue Apr 23, 2017 · 30 comments
Labels: component/auth, kind/bug, priority/P2



ganesh3 commented Apr 23, 2017

I am following https://github.com/redhat-iot/summit2017/ to set up the Red Hat IoT solution for fleet management, and I am getting the errors below, as the pods have not started and are in ERROR state. I have installed oc on Ubuntu 16.10.

Current Result
NAME                      READY     STATUS    RESTARTS   AGE
dashboard-1-build         0/1       Error     0          19m
datastore-1-deploy        0/1       Error     0          4m
datastore-proxy-1-build   0/1       Error     0          19m
elasticsearch-1-deploy    0/1       Error     0          19m
kapua-api-1-deploy        0/1       Error     0          19m
kapua-broker-1-deploy     0/1       Error     0          19m
kapua-console-1-deploy    0/1       Error     0          19m
simulator-1-deploy        0/1       Error     0          19m
sql-1-deploy              0/1       Error     0          19m
The command below lists the errors:
oc status -v
In project Red Hat IoT Demo (redhat-iot) on server https://192.168.225.169:8443

http://dashboard-redhat-iot.192.168.225.169.xip.io to pod port 8080-tcp (svc/dashboard)
dc/dashboard deploys istag/dashboard:latest <-
bc/dashboard source builds https://github.com/redhat-iot/summit2017#master on openshift/nodejs:4
build #1 failed 19 minutes ago
deployment #1 waiting on image or update

svc/datastore-hotrod - 172.30.4.196:11333
dc/datastore deploys openshift/jboss-datagrid65-openshift:1.2
deployment #1 failed 5 minutes ago: image change

http://datastore-proxy-redhat-iot.192.168.225.169.xip.io to pod port 8080-tcp (svc/datastore-proxy)
dc/datastore-proxy deploys istag/datastore-proxy:latest <-
bc/datastore-proxy source builds https://github.com/redhat-iot/summit2017#master on openshift/wildfly:10.1
build #1 failed 19 minutes ago
deployment #1 waiting on image or update

http://search-redhat-iot.192.168.225.169.xip.io to pod port http (svc/elasticsearch)
dc/elasticsearch deploys docker.io/library/elasticsearch:2.4
deployment #1 failed 20 minutes ago: config change

http://api-redhat-iot.192.168.225.169.xip.io to pod port http (svc/kapua-api)
dc/kapua-api deploys docker.io/redhatiot/kapua-api-jetty:2017-04-08
deployment #1 failed 20 minutes ago: config change

http://broker-redhat-iot.192.168.225.169.xip.io to pod port mqtt-websocket-tcp (svc/kapua-broker)
dc/kapua-broker deploys docker.io/redhatiot/kapua-broker:2017-04-08
deployment #1 failed 20 minutes ago: config change

http://console-redhat-iot.192.168.225.169.xip.io to pod port http (svc/kapua-console)
dc/kapua-console deploys docker.io/redhatiot/kapua-console-jetty:2017-04-08
deployment #1 failed 20 minutes ago: config change

svc/sql - 172.30.56.47 ports 3306, 8181
dc/sql deploys docker.io/redhatiot/kapua-sql:2017-04-08
deployment #1 failed 20 minutes ago: config change

dc/simulator deploys docker.io/redhatiot/kura-simulator:2017-04-08
deployment #1 failed 20 minutes ago: config change

Detailed errors for each pod:

build/dashboard-1 has failed.
try: Inspect the build failure with 'oc logs -f bc/dashboard'
build/datastore-proxy-1 has failed.
try: Inspect the build failure with 'oc logs -f bc/datastore-proxy'
route/api is routing traffic to svc/kapua-api, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/broker is routing traffic to svc/kapua-broker, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/console is routing traffic to svc/kapua-console, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/dashboard is routing traffic to svc/dashboard, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/datastore-proxy is routing traffic to svc/datastore-proxy, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
route/search is routing traffic to svc/elasticsearch, but either the administrator has not installed a router or the router is not selecting this route.
try: oc adm router -h
Warnings:

The image trigger for dc/dashboard will have no effect until istag/dashboard:latest is imported or created by a build.
The image trigger for dc/datastore-proxy will have no effect until istag/datastore-proxy:latest is imported or created by a build.
Info:

pod/dashboard-1-build has no liveness probe to verify pods are still running.
try: oc set probe pod/dashboard-1-build --liveness ...
pod/datastore-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/datastore-1-deploy --liveness ...
pod/datastore-proxy-1-build has no liveness probe to verify pods are still running.
try: oc set probe pod/datastore-proxy-1-build --liveness ...
pod/elasticsearch-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/elasticsearch-1-deploy --liveness ...
pod/kapua-api-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/kapua-api-1-deploy --liveness ...
pod/kapua-broker-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/kapua-broker-1-deploy --liveness ...
pod/kapua-console-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/kapua-console-1-deploy --liveness ...
pod/simulator-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/simulator-1-deploy --liveness ...
pod/sql-1-deploy has no liveness probe to verify pods are still running.
try: oc set probe pod/sql-1-deploy --liveness ...
dc/datastore has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
try: oc set probe dc/datastore --readiness ...
dc/datastore has no liveness probe to verify pods are still running.
try: oc set probe dc/datastore --liveness ...
dc/elasticsearch has no liveness probe to verify pods are still running.
try: oc set probe dc/elasticsearch --liveness ...
dc/kapua-api has no liveness probe to verify pods are still running.
try: oc set probe dc/kapua-api --liveness ...
dc/kapua-broker has no liveness probe to verify pods are still running.
try: oc set probe dc/kapua-broker --liveness ...
dc/kapua-console has no liveness probe to verify pods are still running.
try: oc set probe dc/kapua-console --liveness ...
dc/simulator has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
try: oc set probe dc/simulator --readiness ...
dc/simulator has no liveness probe to verify pods are still running.
try: oc set probe dc/simulator --liveness ...
dc/sql has no liveness probe to verify pods are still running.
try: oc set probe dc/sql --liveness ...
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
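
As a concrete form of the probe suggestions above, the command looks like this (a sketch; the /healthz path, port 8080, and delay are illustrative assumptions, not values taken from these images):

oc set probe dc/kapua-api --liveness --get-url=http://:8080/healthz --initial-delay-seconds=30
oc set probe dc/datastore --readiness --get-url=http://:8080/healthz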

Pod-specific errors:

oc logs -f bc/dashboard
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
ganesh@ganesh-Lenovo-ideapad-100-14IBD:~/summit2017$ oc logs -f bc/datastore-proxy
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
oc adm router -o yaml
error: router could not be created; service account "router" is not allowed to access the host network on nodes, grant access with oadm policy add-scc-to-user hostnetwork -z router
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: null
    name: router
- apiVersion: v1
  groupNames: null
  kind: ClusterRoleBinding
  metadata:
    creationTimestamp: null
    name: router-router-role
  roleRef:
    kind: ClusterRole
    name: system:router
  subjects:
  - kind: ServiceAccount
    name: router
    namespace: redhat-iot
  userNames:
  - system:serviceaccount:redhat-iot:router
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    labels:
      router: router
    name: router
  spec:
    replicas: 1
    selector:
      router: router
    strategy:
      resources: {}
      rollingParams:
        maxSurge: 0
        maxUnavailable: 25%
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          router: router
      spec:
        containers:
        - env:
          - name: DEFAULT_CERTIFICATE_DIR
            value: /etc/pki/tls/private
          - name: ROUTER_EXTERNAL_HOST_HOSTNAME
          - name: ROUTER_EXTERNAL_HOST_HTTPS_VSERVER
          - name: ROUTER_EXTERNAL_HOST_HTTP_VSERVER
          - name: ROUTER_EXTERNAL_HOST_INSECURE
            value: "false"
          - name: ROUTER_EXTERNAL_HOST_INTERNAL_ADDRESS
          - name: ROUTER_EXTERNAL_HOST_PARTITION_PATH
          - name: ROUTER_EXTERNAL_HOST_PASSWORD
          - name: ROUTER_EXTERNAL_HOST_PRIVKEY
            value: /etc/secret-volume/router.pem
          - name: ROUTER_EXTERNAL_HOST_USERNAME
          - name: ROUTER_EXTERNAL_HOST_VXLAN_GW_CIDR
          - name: ROUTER_SERVICE_HTTPS_PORT
            value: "443"
          - name: ROUTER_SERVICE_HTTP_PORT
            value: "80"
          - name: ROUTER_SERVICE_NAME
            value: router
          - name: ROUTER_SERVICE_NAMESPACE
            value: redhat-iot
          - name: ROUTER_SUBDOMAIN
          - name: STATS_PASSWORD
            value: PjMPJuiol6
          - name: STATS_PORT
            value: "1936"
          - name: STATS_USERNAME
            value: admin
          image: openshift/origin-haproxy-router:v1.5.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              host: localhost
              path: /healthz
              port: 1936
            initialDelaySeconds: 10
          name: router
          ports:
          - containerPort: 80
          - containerPort: 443
          - containerPort: 1936
            name: stats
            protocol: TCP
          readinessProbe:
            httpGet:
              host: localhost
              path: /healthz
              port: 1936
            initialDelaySeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
          volumeMounts:
          - mountPath: /etc/pki/tls/private
            name: server-certificate
            readOnly: true
        hostNetwork: true
        securityContext: {}
        serviceAccount: router
        serviceAccountName: router
        volumes:
        - name: server-certificate
          secret:
            secretName: router-certs
    test: false
    triggers:
    - type: ConfigChange
  status:
    availableReplicas: 0
    latestVersion: 0
    observedGeneration: 0
    replicas: 0
    unavailableReplicas: 0
    updatedReplicas: 0
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: router-certs
    creationTimestamp: null
    labels:
      router: router
    name: router
  spec:
    ports:
    - name: 80-tcp
      port: 80
      targetPort: 80
    - name: 443-tcp
      port: 443
      targetPort: 443
    - name: 1936-tcp
      port: 1936
      protocol: TCP
      targetPort: 1936
    selector:
      router: router
  status:
    loadBalancer: {}
kind: List
metadata: {}
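
For reference, the grant that the error message above asks for would look roughly like this (a sketch, assuming cluster-admin access and the redhat-iot project from the output above):

oc login -u system:admin
oc adm policy add-scc-to-user hostnetwork -z router -n redhat-iot
oc adm router    # retry once the service account may use the host network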
Version
oc version
oc v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

docker version
Client:
 Version:      17.03.1-ce
 API version:  1.24 (downgraded from 1.27)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:17:43 2017
 OS/Arch:      linux/amd64
Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
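
That last line is a separate client/daemon API mismatch, not the token error; the negotiated API versions can be checked with docker's standard template flag:

docker version --format 'client={{.Client.APIVersion}} server={{.Server.APIVersion}}'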
Expected Result

The application should run correctly.

Additional Information
oc adm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/home/ganesh/.kube/config'
[Note] Could not configure a client, so client diagnostics are limited to testing configuration and connection
[Note] Could not configure a client with cluster-admin permissions for the current server, so cluster diagnostics will be skipped

[Note] Running diagnostic: ConfigContexts[redhat-iot/192-168-225-169:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
       The current client config context is 'redhat-iot/192-168-225-169:8443/system:admin':
       The server URL is 'https://192.168.225.169:8443'
       The user authentication is 'system:admin/192-168-225-169:8443'
       The current project is 'redhat-iot'
       (*url.Error) Get https://192.168.225.169:8443/api: dial tcp 192.168.225.169:8443: getsockopt: connection refused
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
       
[Note] Running diagnostic: ConfigContexts[/192-168-225-169:8443/developer]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
       For client config context '/192-168-225-169:8443/developer':
       The server URL is 'https://192.168.225.169:8443'
       The user authentication is 'developer/192-168-225-169:8443'
       The current project is 'default'
       (*url.Error) Get https://192.168.225.169:8443/api: dial tcp 192.168.225.169:8443: getsockopt: connection refused
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
ganesh3 changed the title from "Kubernetes issues" to "Kubernetes issues: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory" on Apr 23, 2017

LTheobald commented Apr 24, 2017

Also seeing the same error when trying to run my code:

oc logs -f bc/sop-starter-pack
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

It's also seen in oc adm diagnostics:

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint
       
ERROR: [DCli2012 from diagnostic DiagnosticPod@openshift/origin/pkg/diagnostics/client/run_diagnostics_pod.go:155]
       See the errors below in the output from the diagnostic pod:
       [Note] Running diagnostic: PodCheckAuth
              Description: Check that service account credentials authenticate as expected
              
       ERROR: [DP1001 from diagnostic PodCheckAuth@openshift/origin/pkg/diagnostics/pod/auth.go:53]
              could not read the service account token: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
              
       [Note] Running diagnostic: PodCheckDns
              Description: Check that DNS within a pod works as expected
              
       [Note] Summary of diagnostics execution (version v1.5.0+031cbe4):
       [Note] Errors seen: 1
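
A quick way to tell whether the token secret was ever created (a sketch; run in the affected project, and <pod> is a placeholder):

oc get sa default -o yaml        # 'secrets:' should list a default-token-* entry
oc get secrets | grep token      # the service-account token secrets themselves
oc rsh <pod> ls /var/run/secrets/kubernetes.io/serviceaccount   # should show ca.crt, namespace, token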

Docker version:

Client:
 Version:      17.04.0-ce
 API version:  1.28
 Go version:   go1.7.5
 Git commit:   4845c56
 Built:        Mon Apr  3 18:14:53 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.04.0-ce
 API version:  1.28 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   4845c56
 Built:        Mon Apr  3 18:14:53 2017
 OS/Arch:      linux/amd64
 Experimental: false

OC version

oc v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://10.121.5.36:8443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4

Would appreciate any workaround.

pweil- added the component/auth, kind/bug, and priority/P2 labels on Apr 24, 2017

pweil- commented Apr 24, 2017

@soltysh @enj this looks like a case of the token controller not creating the secret, which I think you were already looking into.

soltysh (Contributor) commented Apr 25, 2017

@ganesh3 @LTheobald which openshift version are you using? What are the exact steps that led you to this error?
I've just tried following the steps and it worked; the only problem I had was that I needed to re-deploy elasticsearch and datastore with oc deploy dc/elasticsearch --retry. All the rest worked as expected.

soltysh (Contributor) commented Apr 26, 2017

Additionally, it would be nice to know which OpenShift flavor you're using: oc cluster up, minishift, online preview, binaries, or something else?

LTheobald commented:

I'm using the OpenShift version found on the releases page of this repo, v1.5.0 to be exact. I'm calling 'oc cluster up' (although with proxy settings), using the web GUI to build a simple project with some Java & Spring Boot, and getting the error on the very first build.

I'll see if I can recreate it in a way I can distribute as-is.

soltysh (Contributor) commented Apr 26, 2017 via email

LTheobald commented:

@soltysh Latest master is 3.6.0-alpha.1, isn't it? I get a different error when I run master. It complains about my Docker version:

Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: docker: failed to parse docker version \"17.04.0-ce\": illegal zero-prefixed version component \"04\" in \"17.04.0-ce\""

What Docker version are you running? Is the problem that I need to roll back to an old version of Docker? I thought I saw that this Docker version issue was fixed in v1.5, so it's a shame it has potentially regressed in v3.6.

soltysh (Contributor) commented Apr 27, 2017

@LTheobald which docker version are you running, what's your operating system, and where did you get your docker version from?

LTheobald commented:

Linux Mint 18.1 (it's based on Ubuntu Xenial 16.04).
Docker version is the latest, 17.04.0-ce (full details in my original reply). I got it from the official Docker package repo.

ganesh3 (Author) commented Apr 27, 2017

I have provided the OC version in my issue log above.

LTheobald commented:

Docker version might be adding to the issue. I'm attempting to downgrade to an older Docker version to see if that removes the issue. Although the docs state:

The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice.
(https://kubernetes.io/docs/getting-started-guides/scratch/#docker)

Various release notes & other bits on GitHub mention that anything after 1.13 isn't supported.

soltysh (Contributor) commented Apr 27, 2017

Yes, downgrade if you can, see #13281.


dwiden commented Apr 27, 2017

I've run into this bug while trying to run the Administrators: Setting Up a Cluster tutorial.

I created a Fedora 25 x64 node on DigitalOcean, installed Docker, and followed the tutorial. The tutorial app fails to deploy with the following error:

error: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

Version Information

~]# oc version
oc v3.6.0-alpha.1+decce00-130
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server **REDACTED**
openshift v3.6.0-alpha.1+decce00-130
kubernetes v1.5.2+43a9be4

~]# docker --version
Docker version 1.13.1-cs3, build 95c9d22

~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
fd3529b350b5        openshift/origin    "/usr/bin/openshif..."   28 minutes ago      Up 28 minutes                           origin

~]# docker image ls
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
openshift/origin            latest              bff03461648d        2 hours ago         669 MB
openshift/origin-deployer   v3.6.0-alpha.1      367295b9d0a5        2 weeks ago         635 MB
openshift/origin-pod        v3.6.0-alpha.1      853986e79d22        2 weeks ago         1.14 MB

Hope this helps; it looks like it might be #13281.

abdelhegazi commented:

Have any of you got oc working on Ubuntu, even with an older release of oc?

soltysh (Contributor) commented May 2, 2017

Sorry, Fedora 25 here. I'll try to get the latest Ubuntu on a VM and look into the problem.


ibidani commented May 3, 2017

Same issue here:
oc v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.1.50:8443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4


zafinxueqian commented May 8, 2017

Same issue here. When I try to create an integrated registry, I get: error: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory. The same error occurs when creating the router.

oc version
oc v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://10.0.2.15:8443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4

newtonkishore commented:

Same issue here. When I try to run the Kubernetes Go client, I hit the same error:

[root@robot-vm-dev home]# ./client
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

goroutine 1 [running]:
main.main()
/root/go/src/client/client.go:16 +0x2d6

Is it the same issue, or is there any way to avoid it?

Thanks,
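
If that Go client relies on client-go's in-cluster configuration, it can only find the token when the process runs inside a pod, where Kubernetes mounts it automatically; run from a plain shell on a node, the panic above is expected. Whether the mount exists in a given pod can be checked from outside (a sketch; <pod> is a placeholder):

oc exec <pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount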


JohannesBertens commented May 11, 2017

So, I'm also getting this on a fresh CentOS 7 system with 'oc cluster up':

[root@openshift ~]# oc version
oc v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://xxx.xxx.xxx.xxx:8443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
[root@openshift ~]# docker info                                
Containers: 858                                                
 Running: 1                                                    
 Paused: 0                                                     
 Stopped: 857                                                  
Images: 5                                                      
Server Version: 17.05.0-ce                                     
Storage Driver: overlay                                        
 Backing Filesystem: extfs                                     
 Supports d_type: true                                         
Logging Driver: json-file                                      
Cgroup Driver: cgroupfs                                        
Plugins:                                                       
 Volume: local                                                 
 Network: bridge host macvlan null overlay                     
Swarm: inactive                                                
Runtimes: runc                                                 
Default Runtime: runc                                          
Init Binary: docker-init                                       
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145   
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228         
init version: 949e6fa                                          
Security Options:                                              
 seccomp                                                       
  Profile: default                                             
Kernel Version: 3.10.0-327.22.2.el7.x86_64                     
Operating System: CentOS Linux 7 (Core)                        
OSType: linux                                                  
Architecture: x86_64                                           
CPUs: 8                                                        
Total Memory: 31.25GiB                                         
Name: openshift                                                
ID: NXPI:YAZC:MNH6:H3UR:SB3I:OS7D:SDSI:L57M:MXMH:IMJE:CSLB:AGYJ
Docker Root Dir: /var/lib/docker                               
Debug Mode (client): false                                     
Debug Mode (server): false                                     
Registry: https://index.docker.io/v1/                          
Experimental: false                                            
Insecure Registries:                                           
 172.30.0.0/16                                                 
 127.0.0.0/8                                                   
Live Restore Enabled: false                                    
                                                               
WARNING: bridge-nf-call-ip6tables is disabled                  

andrejmaya commented:

It's working with Docker version 1.13.0. To install an older Docker version, see: https://forums.docker.com/t/how-can-i-install-a-specific-version-of-the-docker-engine/1993/5
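
On Ubuntu/Debian that boils down to installing a pinned package version, along these lines (a sketch; the exact package name and version string depend on your repo):

apt-cache madison docker-engine                              # list available versions
sudo apt-get install docker-engine=1.13.0-0~ubuntu-xenial    # install a specific one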


frankruizhi commented May 12, 2017

# If you forget to change this, you may get the error: No API token found for service account "default"
vi /etc/kubernetes/controller-manager
Update the configuration of the controller manager:
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"

Also, you need to configure the admission-control flag on the apiserver, something like --admission-control=ServiceAccount.
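
For a packaged install that keeps its settings in /etc/kubernetes/apiserver, that would mean editing the admission-control line, roughly like this sketch (the plugin list is an assumption and varies by Kubernetes version):

# /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"

followed by restarting the services, e.g. systemctl restart kube-apiserver kube-controller-manager.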

enj (Contributor) commented May 22, 2017

@soltysh so this seems to be consistently caused by using a newer version of docker. WDYT?

enj (Contributor) commented May 23, 2017

Not working for me with Docker 1.13 (archlinux).

open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

@krims0n32 were you able to resolve the issue? Perhaps by downgrading to docker 1.12.x?

JohannesBertens commented:

Following this thread - having the same issue with
oc cluster up --version=latest

soltysh (Contributor) commented Jun 7, 2017

I just ran this using Ubuntu 16.04 and Docker 1.12.6, and everything worked as expected. I'm currently trying with Docker 17.03.1-ce.

soltysh (Contributor) commented Jun 7, 2017

Now I'm pretty confident this is connected with the newer docker versions. Apparently the fixes provided so far are not sufficient yet. I'm closing this in favor of #14279.

toschneck commented:

Downgrading to version 1.13.1 fixed the issue with oc cluster up:

apt-get install docker-engine=1.13.1-0~ubuntu-xenial
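
To keep apt from upgrading it again on the next update, the package can also be held:

sudo apt-mark hold docker-engine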

toschneck commented:

The new release v3.6.0-rc.0 resolved the issue for me. Documentation: https://github.com/openshift/origin/blob/release-3.6/docs/cluster_up_down.md#linux

acossette1979 commented:

same thing :(
error: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
Amelies-MacBook-Pro:cadc myname$ oc version
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

Server https://127.0.0.1:8443
openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7

