This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit 2a093a8

lins05 authored and ash211 committed

Improved the example commands in running-on-k8s document. (#25)

* Improved the example commands in running-on-k8s document.
* Fixed more example commands.
* Fixed typo.

1 parent 4c24d9b commit 2a093a8

File tree

1 file changed: +42 -42 lines changed

docs/running-on-kubernetes.md

Lines changed: 42 additions & 42 deletions
```diff
@@ -31,16 +31,16 @@ For example, if the registry host is `registry-host` and the registry is listeni
 Kubernetes applications can be executed via `spark-submit`. For example, to compute the value of pi, assuming the images
 are set up as described above:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class org.apache.spark.examples.SparkPi
-      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>
-      --kubernetes-namespace default
-      --conf spark.executor.instances=5
-      --conf spark.app.name=spark-pi
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
-      examples/jars/spark_2.11-2.2.0.jar
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class org.apache.spark.examples.SparkPi \
+      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+      --kubernetes-namespace default \
+      --conf spark.executor.instances=5 \
+      --conf spark.app.name=spark-pi \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
+      examples/jars/spark_examples_2.11-2.2.0.jar
 
 <!-- TODO master should default to https if no scheme is specified -->
 The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
```
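The substance of the change above is the trailing backslashes: in POSIX shells a backslash immediately before a newline continues the command onto the next line, so every flag reaches a single `spark-submit` invocation instead of each line being parsed as a separate command. A minimal sketch of the behavior, using plain `echo` so no Spark installation is needed:

```shell
# A backslash at end of line joins the two lines into one command,
# so both words become arguments of a single `echo`.
joined=$(echo first \
  second)

# Without the continuation, "second" would be run as its own (failing)
# command -- exactly what happened with the old, backslash-free examples.
echo "$joined"
```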
```diff
@@ -75,53 +75,53 @@ examples of providing application dependencies.
 
 To submit an application with both the main resource and two other jars living on the submitting user's machine:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.SampleApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --upload-jars /home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.SampleApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --upload-jars /home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       /home/exampleuser/exampleapplication/main.jar
 
```
```diff
 Note that since passing the jars through the `--upload-jars` command line argument is equivalent to setting the
 `spark.kubernetes.driver.uploads.jars` Spark property, the above will behave identically to this command:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.SampleApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --conf spark.kubernetes.driver.uploads.jars=/home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.SampleApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --conf spark.kubernetes.driver.uploads.jars=/home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       /home/exampleuser/exampleapplication/main.jar
```
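The value passed to `--upload-jars` (and to the equivalent property) is a comma-separated list of local paths. A small pre-flight sketch that splits such a list and checks each entry exists before submitting; the helper and the `/tmp` stand-in paths are hypothetical, not part of `spark-submit`:

```shell
# Hypothetical pre-flight check: split the comma-separated jar list and
# count entries that do not exist on the submitting machine.
jars="/tmp/dep1.jar,/tmp/dep2.jar"   # stand-ins for the example's real jars
touch /tmp/dep1.jar /tmp/dep2.jar

missing=0
IFS=','
for jar in $jars; do
  [ -f "$jar" ] || missing=$((missing + 1))
done
unset IFS
echo "missing=$missing"
```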

```diff
 To specify a main application resource that can be downloaded from an HTTP service, and if a plugin for that application
 is located in the jar `/opt/spark-plugins/app-plugin.jar` on the docker image's disk:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.PluggableApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --jars /opt/spark-plugins/app-plugin.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.PluggableApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --jars /opt/spark-plugins/app-plugin.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       http://example.com:8080/applications/sparkpluggable/app.jar
 
```
```diff
 Note that since passing the jars through the `--jars` command line argument is equivalent to setting the `spark.jars`
 Spark property, the above will behave identically to this command:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.PluggableApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.PluggableApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       http://example.com:8080/applications/sparkpluggable/app.jar
 
```
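One detail worth noticing in the equivalent command: the property form uses a URI (`spark.jars=file:///opt/...`) where the `--jars` flag took a bare local path. A quick sketch of that path-to-URI correspondence using plain shell string handling; the variable names are illustrative only:

```shell
# Converting a local absolute path to the file:// URI form used by the
# spark.jars property, and stripping the scheme back off.
path="/opt/spark-plugins/app-plugin.jar"
uri="file://$path"
echo "$uri"                 # file:///opt/spark-plugins/app-plugin.jar
stripped="${uri#file://}"
echo "$stripped"            # /opt/spark-plugins/app-plugin.jar
```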
### Spark Properties

0 commit comments