
Absolute path in docker-compose's 'build' not supported #9815


Closed
surajssd opened this issue Jul 13, 2016 · 0 comments · Fixed by #9816

Comments

@surajssd
Contributor

When running oc import docker-compose with a docker-compose file that has a 'build' field in a service, and the 'build' value is an absolute path, the path is not handled correctly. Rather than being accepted as-is, the absolute path is appended to the base directory of the docker-compose file.
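
For illustration, here is a minimal Go sketch (hypothetical, not the actual openshift/origin code) of how an unconditional join against the compose file's base directory produces the doubled path seen in the error below; the paths are the ones from the reproduction steps:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	composeBaseDir := "/home/vagrant/originbug"  // directory containing docker-compose.yml
	buildPath := "/home/vagrant/originbug/myphp" // absolute 'build' value of the etherpad service

	// filepath.Join always treats the second argument as a relative component,
	// so an already-absolute build path gets the base directory prepended again.
	fmt.Println(filepath.Join(composeBaseDir, buildPath))
	// Output: /home/vagrant/originbug/home/vagrant/originbug/myphp
}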

Version

$ oc version
oc v1.3.0-alpha.0+aa6e2a6
kubernetes v1.3.0+57fb9ac
features: Basic-Auth

Steps To Reproduce
  1. Create a docker-compose file like the following:
$ cat docker-compose.yml
version: "2"

services:
  mariadb:
    image: centos/mariadb
    ports:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: etherpad
      MYSQL_DATABASE: etherpad
      MYSQL_PASSWORD: etherpad
      MYSQL_USER: etherpad

  etherpad:
    build: "/home/vagrant/originbug/myphp"
    ports:
      - "80:9001"
    depends_on:
      - mariadb
    environment:
      DB_HOST: mariadb
      DB_DBID: etherpad
      DB_PASS: etherpad
      DB_USER: etherpad
  2. Clone the myphp repo:
$ git clone https://github.com/surajssd/myphp
  3. Create a Dockerfile in it:
$ cat myphp/Dockerfile 
FROM somefoo

WORKDIR /app
COPY . /app

CMD "xyz"
  4. Check the current directory and its contents:
$ pwd
/home/vagrant/originbug

$ ll myphp/
total 12
-rw-rw-r--. 1 vagrant vagrant 51 Jul 13 06:07 Dockerfile
-rw-rw-r--. 1 vagrant vagrant 98 Jul 13 06:07 index.php
-rw-rw-r--. 1 vagrant vagrant  8 Jul 13 06:07 README.md
  5. Run the oc command:
$ oc import docker-compose -f ./docker-compose.yml --as-template=foo -o yaml
Current Result
$ oc import docker-compose -f ./docker-compose.yml --as-template=foo -o yaml
ERROR: ssh git clone spec error: unable to parse ssh git clone specification:  /home/vagrant/originbug/home/vagrant/originbug/myphp
unable to parse ssh git clone specification:  /home/vagrant/originbug/home/vagrant/originbug/myphp
Expected Result

Expected it to create an OpenShift template out of the input.

Additional Information

oadm diagnostics

$ oadm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/home/vagrant/openshift.local.config/master/admin.kubeconfig'
Info:  Using context for cluster-admin access: 'default/192-168-121-248:8443/system:admin'

[Note] Running diagnostic: ConfigContexts[default/192-168-121-248:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  The current client config context is 'default/192-168-121-248:8443/system:admin':
       The server URL is 'https://192.168.121.248:8443'
       The user authentication is 'system:admin/192-168-121-248:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [openshift-infra default kube-system openshift]

[Note] Running diagnostic: ConfigContexts[default/localhost:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'default/localhost:8443/system:admin':
       The server URL is 'https://localhost:8443'
       The user authentication is 'system:admin/192-168-121-248:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [default kube-system openshift openshift-infra]

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint

Info:  Output from the diagnostic pod (image openshift/origin-deployer:v1.3.0-alpha.0):
       [Note] Running diagnostic: PodCheckAuth
              Description: Check that service account credentials authenticate as expected

       Info:  Service account token successfully authenticated to master
       Info:  Service account token was authenticated by the integrated registry.
       [Note] Running diagnostic: PodCheckDns
              Description: Check that DNS within a pod works as expected

       [Note] Summary of diagnostics execution (version v1.3.0-alpha.0):
       [Note] Completed with no errors or warnings seen.

[Note] Running diagnostic: ClusterRegistry
       Description: Check that there is a working Docker registry

[Note] Running diagnostic: ClusterRoleBindings
       Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

[Note] Running diagnostic: ClusterRoles
       Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
       Description: Check there is a working router

WARN:  [DClu2001 from diagnostic ClusterRouter@openshift/origin/pkg/diagnostics/cluster/router.go:129]
       There is no "router" DeploymentConfig. The router may have been named
       something different, in which case this warning may be ignored.

       A router is not strictly required; however it is needed for accessing
       pods from external networks and its absence likely indicates an incomplete
       installation of the cluster.

       Use the 'oadm router' command to create a router.

[Note] Running diagnostic: MasterNode
       Description: Check if master is also running node (for Open vSwitch)

Info:  Found a node with same IP as master: localhost.localdomain

[Note] Skipping diagnostic: MetricsApiProxy
       Description: Check the integrated heapster metrics can be reached via the API proxy
       Because: The heapster service does not exist in the openshift-infra project at this time,
       so it is not available for the Horizontal Pod Autoscaler to use as a source of metrics.

[Note] Running diagnostic: NodeDefinitions
       Description: Check node records on master

[Note] Skipping diagnostic: ServiceExternalIPs
       Description: Check for existing services with ExternalIPs that are disallowed by master config
       Because: No master config file was detected

[Note] Summary of diagnostics execution (version v1.3.0-alpha.0+aa6e2a6):
[Note] Warnings seen: 1
surajssd added a commit to surajssd/origin that referenced this issue Jul 13, 2016
When the build path is resolved, i.e. converted from a relative path to
an absolute path, a check has been added: if the path is already absolute,
no further operations are performed and the value of the build path is
kept unaltered.

Fixes openshift#9815
surajssd added a commit to surajssd/origin that referenced this issue Jul 18, 2016
When the build path is resolved, i.e. converted from a relative path to
an absolute path, a check has been added: if the path is already absolute,
no further operations are performed and the value of the build path is
kept unaltered.

Fixes openshift#9815
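
As a rough sketch of the guard the commit message describes (the function name and signature below are made up for illustration and are not the actual code in the PR):

package main

import (
	"fmt"
	"path/filepath"
)

// resolveBuildPath is a hypothetical helper: an already-absolute build path
// is returned unaltered, otherwise it is resolved against the compose file's
// base directory.
func resolveBuildPath(composeBaseDir, buildPath string) string {
	if filepath.IsAbs(buildPath) {
		return buildPath
	}
	return filepath.Join(composeBaseDir, buildPath)
}

func main() {
	fmt.Println(resolveBuildPath("/home/vagrant/originbug", "/home/vagrant/originbug/myphp")) // /home/vagrant/originbug/myphp
	fmt.Println(resolveBuildPath("/home/vagrant/originbug", "myphp"))                         // /home/vagrant/originbug/myphp
}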