Kubernetes applications can be executed via `spark-submit`. For example, to compute the value of pi, assuming the images
are set up as described above:

    bin/spark-submit \
      --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
      --kubernetes-namespace default \
      --conf spark.executor.instances=5 \
      --conf spark.app.name=spark-pi \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      examples/jars/spark_examples_2.11-2.2.0.jar

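The `--master` value above is simply the Kubernetes API server address prefixed with `k8s://`. As a minimal sketch of how the pieces compose (the host and port here are illustrative placeholders, not a real endpoint):

```shell
# Illustrative only: substitute your cluster's API server host and port,
# e.g. as reported by `kubectl cluster-info`.
APISERVER_HOST=192.168.99.100
APISERVER_PORT=8443
MASTER="k8s://https://${APISERVER_HOST}:${APISERVER_PORT}"
echo "$MASTER"
```

The composed value is what would be passed to `--master` in the example above.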
<!-- TODO master should default to https if no scheme is specified -->
The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
To submit an application with both the main resource and two other jars living on the submitting user's machine:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.SampleApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --upload-jars /home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      /home/exampleuser/exampleapplication/main.jar

Note that since passing the jars through the `--upload-jars` command line argument is equivalent to setting the
`spark.kubernetes.driver.uploads.jars` Spark property, the above will behave identically to this command:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.SampleApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --conf spark.kubernetes.driver.uploads.jars=/home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      /home/exampleuser/exampleapplication/main.jar

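Both forms above supply the dependency jars as a single comma-separated list. A sketch of assembling that list from individual paths (the paths are the illustrative ones from the example):

```shell
# Join individual dependency paths into the comma-separated form that
# --upload-jars / spark.kubernetes.driver.uploads.jars expects.
DEP1=/home/exampleuser/exampleapplication/dep1.jar
DEP2=/home/exampleuser/exampleapplication/dep2.jar
UPLOAD_JARS="${DEP1},${DEP2}"
echo "$UPLOAD_JARS"
```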
To specify a main application resource that can be downloaded from an HTTP service, and if a plugin for that application
is located in the jar `/opt/spark-plugins/app-plugin.jar` on the docker image's disk:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.PluggableApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --jars /opt/spark-plugins/app-plugin.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      http://example.com:8080/applications/sparkpluggable/app.jar

Note that since passing the jars through the `--jars` command line argument is equivalent to setting the `spark.jars`
Spark property, the above will behave identically to this command:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.PluggableApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      http://example.com:8080/applications/sparkpluggable/app.jar

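The equivalence rests on how jar locations are expressed as URIs: the bare path given to `--jars` names a file on the image's disk, which the `spark.jars` form spells out with an explicit `file://` scheme. A minimal sketch of that mapping (using the illustrative path from the example):

```shell
# The bare --jars path and the explicit file:// URI refer to the same
# jar on the docker image's local disk.
PLUGIN_JAR=/opt/spark-plugins/app-plugin.jar
SPARK_JARS="file://${PLUGIN_JAR}"
echo "$SPARK_JARS"
```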
### Spark Properties