Spark on k8s: A Complete Beginner's Tutorial
**This article walks through setting up a single-node Kubernetes cluster, installing Docker and Spark 2.4.3, and submitting a Pi-calculation job from Spark to the Kubernetes cluster's master node.**
Prerequisites
1. You need your own virtual machine or server. I use a VMware VM running 64-bit CentOS 7; the newer the release, the better.
2. The machine must have Internet access.
3. wget must be installed. If it is not, install it with:
yum -y install wget
Now let's start the configuration.
Configure yum
[root@bogon ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@bogon ~]#yum clean all
[root@bogon ~]#yum makecache
Download two repo files with wget:
[root@bogon ~]#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@bogon ~]#wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker
[root@bogon ~]#sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@bogon ~]#sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@bogon ~]#yum-config-manager --enable docker-ce-edge
[root@bogon ~]#yum-config-manager --enable docker-ce-test
[root@bogon ~]#yum list docker-ce --showduplicates | sort -r
# The previous command lists the available Docker versions; the next one installs Docker. Prefer a relatively recent version.
[root@bogon ~]#yum install docker-ce-18.09.0-3.el7
Next, configure a Docker registry mirror (the file must be /etc/docker/daemon.json):
[root@bogon ~]#mkdir -p /etc/docker
[root@bogon ~]#cd /etc/docker
[root@bogon ~]#vi daemon.json
Enter the following content:
{
  "registry-mirrors": ["https://pee6w651.mirror.aliyuncs.com"]
}
Save and exit, then run these three commands:
[root@bogon ~]#systemctl daemon-reload
[root@bogon ~]#systemctl restart docker
[root@bogon ~]#systemctl enable docker
That completes the Docker installation. Check whether Docker is running properly:
[root@bogon ~]#systemctl status docker
Seeing "active (running)" in the output means Docker is up.
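You can also confirm that the registry mirror configured above was picked up (assuming docker info lists a "Registry Mirrors" section, as it normally does):
docker info | grep -A 1 "Registry Mirrors"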
Install kubectl, kubeadm, and kubelet
[root@bogon ~]#yum install kubelet kubeadm kubectl -y
Enable kubelet at boot:
[root@bogon ~]#systemctl enable kubelet
Next, check your /etc/hosts file and adjust it to your own environment; in my case bogon is the hostname and 192.168.116.159 is the IP address, as in the example below.
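As a sketch, a matching /etc/hosts entry is a single line mapping the IP to the hostname (substitute your own values):
192.168.116.159 bogon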
Then run the following commands:
[root@bogon ~]#systemctl stop firewalld
[root@bogon ~]#setenforce 0
[root@bogon ~]#swapoff -a
[root@bogon ~]#echo "1">/proc/sys/net/bridge/bridge-nf-call-iptables
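Note that setenforce, swapoff, and the echo above only last until the next reboot. If you want the settings to survive a reboot, something along these lines (standard CentOS 7 file locations) should work:
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config   # keep SELinux permissive after reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab                                      # comment out the swap entry
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf && sysctl --system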
Set up a single-node Kubernetes cluster
Initialize the cluster (this takes a couple of minutes):
[root@bogon ~]#kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.1 --apiserver-advertise-address 192.168.116.159 --pod-network-cidr=10.244.0.0/16
Here 192.168.116.159 is your own machine's IP. When the command finishes, kubeadm prints instructions for configuring kubectl and a kubeadm join command. Following the official documentation, I then ran the following commands:
[root@bogon ~]#mkdir -p $HOME/.kube
[root@bogon ~]#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@bogon ~]#chown $(id -u):$(id -g) $HOME/.kube/config
[root@bogon ~]#echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@bogon ~]#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@bogon ~]#systemctl restart kubelet
Now you can check the cluster health:
kubectl get cs
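On a healthy single-node cluster the output looks roughly like this (the MESSAGE text can vary slightly by version):
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}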
Remove the master taint so that pods can be scheduled on this single node:
[root@bogon ~]#kubectl taint node --all node-role.kubernetes.io/master-
Check the status of all the system pods.
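For example, list the pods in the kube-system namespace:
kubectl get pods -n kube-system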
If coredns never becomes ready, running the following commands will fix it:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
Once all the pods are in the Running state, check the node status:
kubectl get nodes
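For this single-node setup the output should look roughly like the following (AGE will of course differ):
NAME    STATUS   ROLES    AGE   VERSION
bogon   Ready    master   10m   v1.15.1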
That completes the single-node Kubernetes deployment.
Install Spark 2.4.3
One thing I forgot to mention earlier: Java must already be installed on your VM or server, so let's install it first. Download a JDK 1.8 tarball.
mkdir /usr/java
cd /usr/java
Put the tarball under /usr/java and extract it:
tar zxvf jdk-8u221-linux-x64.tar.gz
When that's done, edit /etc/profile, append the following lines, then save and exit:
JAVA_HOME=/usr/java/jdk1.8.0_221
JRE_HOME=/usr/java/jdk1.8.0_221/jre
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASSPATH PATH
Apply the environment variables immediately:
source /etc/profile
Check the Java version:
java -version
If it prints the Java version information (for example, java version "1.8.0_221"), the configuration is complete.
Now install Spark. First download spark-2.4.3-bin-hadoop2.7.tgz:
wget https://archive.apache.org/dist/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz
Put it in the /opt/ directory and then extract it:
tar -zxvf spark-2.4.3-bin-hadoop2.7.tgz
cd spark-2.4.3-bin-hadoop2.7
Build the Spark base image with docker build:
docker build -t registry/spark:2.4.3 -f kubernetes/dockerfiles/spark/Dockerfile .
Or use the official method.
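The official method uses the docker-image-tool.sh script shipped with the Spark distribution; for example, to produce the same registry/spark:2.4.3 image:
./bin/docker-image-tool.sh -r registry -t 2.4.3 build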
After the build completes, check the images:
docker images
Now you can run a couple of examples.
cd /opt/spark-2.4.3-bin-hadoop2.7
Run the PySpark shell:
./bin/pyspark
sc.parallelize(range(1000)).count()
The count should come back as 1000. Exit the shell with Ctrl+D or exit() (Ctrl+Z only suspends the process), then run the Scala shell:
./bin/spark-shell
sc.parallelize(1 to 1000).count()
Again the result should be 1000. Good; at this point your Spark installation is working.
Next we'll submit a job to the Kubernetes cluster. First make sure the cluster is up and everything is healthy: all pods in the kube-system namespace should be running normally. You can check with the command below.
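For example:
kubectl get pods --all-namespaces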
Next, get the cluster information to confirm the Kubernetes master address:
kubectl cluster-info
Note the IP address and port from the output, https://192.168.116.159:6443 in my case; you will need it when submitting the job.
Next, create an RBAC service account and role binding for Spark:
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
You should see confirmation that the serviceaccount and clusterrolebinding were created.
Next, prepare the example jar:
cd /opt
mkdir spark
cp -Rf /opt/spark-2.4.3-bin-hadoop2.7/examples /opt/spark/
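You can confirm the exact jar file name that the spark-submit command below refers to:
ls /opt/spark/examples/jars/ | grep spark-examples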
Now you can submit the job. (Note that the local:// scheme in the command below refers to a path inside the Spark container image; the image built earlier already contains the examples jar at /opt/spark/examples/jars/.)
cd /opt/spark-2.4.3-bin-hadoop2.7
bin/spark-submit \
--master k8s://https://192.168.116.159:6443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.container.image=registry/spark:2.4.3 \
local:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar
The output:
[root@bogon spark-2.4.3-bin-hadoop2.7]# bin/spark-submit \
> --master k8s://https://192.168.116.159:6443 \
> --deploy-mode cluster \
> --name spark-pi \
> --class org.apache.spark.examples.SparkPi \
> --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
> --conf spark.executor.instances=2 \
> --conf spark.kubernetes.container.image=registry/spark:2.4.3 \
> local:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar
19/08/02 17:46:11 WARN Utils: Your hostname, bogon resolves to a loopback address: 127.0.0.1; using 192.168.116.159 instead (on interface ens33)
19/08/02 17:46:11 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
log4j:WARN No appenders could be found for logger (io.fabric8.kubernetes.client.Config).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/08/02 17:46:16 INFO LoggingPodStatusWatcherImpl: State changed, new state:
     pod name: spark-pi-1564739172051-driver
     namespace: default
     labels: spark-app-selector -> spark-3281d72f8f2f4556901cb37b74bcc6eb, spark-role -> driver
     pod uid: 2d554d99-fb19-4ab7-a2fa-76305467b180
     creation time: 2019-08-02T09:46:15Z
     service account name: spark
     volumes: spark-local-dir-1, spark-conf-volume, spark-token-wxqsb
     node name: N/A
     start time: N/A
     container images: N/A
     phase: Pending
     status: []
19/08/02 17:46:16 INFO LoggingPodStatusWatcherImpl: State changed, new state:
     pod name: spark-pi-1564739172051-driver
     namespace: default
     labels: spark-app-selector -> spark-3281d72f8f2f4556901cb37b74bcc6eb, spark-role -> driver
     pod uid: 2d554d99-fb19-4ab7-a2fa-76305467b180
     creation time: 2019-08-02T09:46:15Z
     service account name: spark
     volumes: spark-local-dir-1, spark-conf-volume, spark-token-wxqsb
     node name: bogon
     start time: N/A
     container images: N/A
     phase: Pending
     status: []
19/08/02 17:46:16 INFO LoggingPodStatusWatcherImpl: State changed, new state:
     pod name: spark-pi-1564739172051-driver
     namespace: default
     labels: spark-app-selector -> spark-3281d72f8f2f4556901cb37b74bcc6eb, spark-role -> driver
     pod uid: 2d554d99-fb19-4ab7-a2fa-76305467b180
     creation time: 2019-08-02T09:46:15Z
     service account name: spark
     volumes: spark-local-dir-1, spark-conf-volume, spark-token-wxqsb
     node name: bogon
     start time: 2019-08-02T09:46:15Z
     container images: registry/spark:2.4.3
     phase: Pending
     status: [ContainerStatus(containerID=null, image=registry/spark:2.4.3, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=null, reason=ContainerCreating, additionalProperties={}), additionalProperties={}), additionalProperties={})]
19/08/02 17:46:17 INFO Client: Waiting for application spark-pi to finish...
19/08/02 17:46:22 INFO LoggingPodStatusWatcherImpl: State changed, new state:
     pod name: spark-pi-1564739172051-driver
     namespace: default
     labels: spark-app-selector -> spark-3281d72f8f2f4556901cb37b74bcc6eb, spark-role -> driver
     pod uid: 2d554d99-fb19-4ab7-a2fa-76305467b180
     creation time: 2019-08-02T09:46:15Z
     service account name: spark
     volumes: spark-local-dir-1, spark-conf-volume, spark-token-wxqsb
     node name: bogon
     start time: 2019-08-02T09:46:15Z
     container images: registry/spark:2.4.3
     phase: Running
     status: [ContainerStatus(containerID=docker://f0dbc99e5df33e40f75603ac99f28b864438c5652c32bb0c3ff8b80bf4a1e8e3, image=registry/spark:2.4.3, imageID=docker://sha256:2a4bbf7bbf0de6c67e791544074bb1f935d3bf86ddca2017507a20b29228d13f, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState(running=ContainerStateRunning(startedAt=2019-08-02T09:46:21Z, additionalProperties={}), terminated=null, waiting=null, additionalProperties={}), additionalProperties={})]
19/08/02 17:47:11 INFO LoggingPodStatusWatcherImpl: State changed, new state:
     pod name: spark-pi-1564739172051-driver
     namespace: default
     labels: spark-app-selector -> spark-3281d72f8f2f4556901cb37b74bcc6eb, spark-role -> driver
     pod uid: 2d554d99-fb19-4ab7-a2fa-76305467b180
     creation time: 2019-08-02T09:46:15Z
     service account name: spark
     volumes: spark-local-dir-1, spark-conf-volume, spark-token-wxqsb
     node name: bogon
     start time: 2019-08-02T09:46:15Z
     container images: registry/spark:2.4.3
     phase: Succeeded
     status: [ContainerStatus(containerID=docker://f0dbc99e5df33e40f75603ac99f28b864438c5652c32bb0c3ff8b80bf4a1e8e3, image=registry/spark:2.4.3, imageID=docker://sha256:2a4bbf7bbf0de6c67e791544074bb1f935d3bf86ddca2017507a20b29228d13f, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://f0dbc99e5df33e40f75603ac99f28b864438c5652c32bb0c3ff8b80bf4a1e8e3, exitCode=0, finishedAt=2019-08-02T09:47:10Z, message=null, reason=Completed, signal=null, startedAt=2019-08-02T09:46:21Z, additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={})]
19/08/02 17:47:11 INFO LoggingPodStatusWatcherImpl: Container final statuses:
     Container name: spark-kubernetes-driver
     Container image: registry/spark:2.4.3
     Container state: Terminated
     Exit code: 0
19/08/02 17:47:11 INFO Client: Application spark-pi finished.
19/08/02 17:47:11 INFO ShutdownHookManager: Shutdown hook called
19/08/02 17:47:11 INFO ShutdownHookManager: Deleting directory /tmp/spark-febfa550-babc-4034-b784-1660bb33dae
We can see the Succeeded phase. Now let's take a look at the result:
kubectl get pods --all-namespaces
You can see a new pod named spark-pi-1564739172051-driver.
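In the listing the driver pod shows up roughly like this (AGE will of course differ):
NAMESPACE   NAME                            READY   STATUS      RESTARTS   AGE
default     spark-pi-1564739172051-driver   0/1     Completed   0          2m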
Run:
kubectl logs spark-pi-1564739172051-driver
You can see the result in the driver log; the computed value of Pi appears near the end:
[root@bogon /]# kubectl logs spark-pi-1564739172051-driver
++ id -u
+ myuid=0
++ id -g
+ mygid=0
+ set +e
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/ash
+ set -e
+ '[' -z root:x:0:0:root:/root:/bin/ash ']'
+ SPARK_K8S_CMD=driver
+ case "$SPARK_K8S_CMD" in
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -n '' ']'
+ PYSPARK_ARGS=
+ '[' -n '' ']'
+ R_ARGS=
+ '[' -n '' ']'
+ '[' '' == 2 ']'
+ '[' '' == 3 ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /sbin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.244.0.16 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.examples.SparkPi spark-internal
19/08/02 09:46:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/08/02 09:46:33 INFO SparkContext: Running Spark version 2.4.3
19/08/02 09:46:33 INFO SparkContext: Submitted application: Spark Pi
19/08/02 09:46:33 INFO SecurityManager: Changing view acls to: root
19/08/02 09:46:33 INFO SecurityManager: Changing modify acls to: root
19/08/02 09:46:33 INFO SecurityManager: Changing view acls groups to:
19/08/02 09:46:33 INFO SecurityManager: Changing modify acls groups to:
19/08/02 09:46:33 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
19/08/02 09:46:34 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
19/08/02 09:46:34 INFO SparkEnv: Registering MapOutputTracker
19/08/02 09:46:34 INFO SparkEnv: Registering BlockManagerMaster
19/08/02 09:46:34 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/08/02 09:46:34 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/08/02 09:46:34 INFO DiskBlockManager: Created local directory at /var/data/spark-f7d9f074-c1e9-45bd-9bfa-35e6156da240/blockmgr-7eabfabf-0e5e-43bd-a74a-9b7a24162904
19/08/02 09:46:34 INFO MemoryStore: MemoryStore started with capacity 413.9 MB
19/08/02 09:46:34 INFO SparkEnv: Registering OutputCommitCoordinator
19/08/02 09:46:35 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/08/02 09:46:35 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-pi-1564739172051-driver-svc.default.svc:4040
19/08/02 09:46:35 INFO SparkContext: Added JAR file:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar at spark://spark-pi-1564739172051-driver-svc.default.svc:7078/jars/spark-examples_2.11-2.4.3.jar with timestamp 1564739195505
19/08/02 09:46:38 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
19/08/02 09:46:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
19/08/02 09:46:38 INFO NettyBlockTransferService: Server created on spark-pi-1564739172051-driver-svc.default.svc:7079
19/08/02 09:46:38 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/08/02 09:46:38 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-pi-1564739172051-driver-svc.default.svc, 7079, None)
19/08/02 09:46:38 INFO BlockManagerMasterEndpoint: Registering block manager spark-pi-1564739172051-driver-svc.default.svc:7079 with 413.9 MB RAM, BlockManagerId(driver, spark-pi-1564739172051-driver-svc.default.svc, 7079, None)
19/08/02 09:46:38 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-pi-1564739172051-driver-svc.default.svc, 7079, None)
19/08/02 09:46:38 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-pi-1564739172051-driver-svc.default.svc, 7079, None)
19/08/02 09:46:45 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.244.0.17:35570) with ID 1
19/08/02 09:46:45 INFO BlockManagerMasterEndpoint: Registering block manager 10.244.0.17:34980 with 413.9 MB RAM, BlockManagerId(1, 10.244.0.17, 34980, None)
19/08/02 09:47:08 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
19/08/02 09:47:08 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
19/08/02 09:47:08 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
19/08/02 09:47:08 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
19/08/02 09:47:08 INFO DAGScheduler: Parents of final stage: List()
19/08/02 09:47:08 INFO DAGScheduler: Missing parents: List()
19/08/02 09:47:08 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
19/08/02 09:47:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1936.0 B, free 413.9 MB)
19/08/02 09:47:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1256.0 B, free 413.9 MB)
19/08/02 09:47:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on spark-pi-1564739172051-driver-svc.default.svc:7079 (size: 1256.0 B, free: 413.9 MB)
19/08/02 09:47:09 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161
19/08/02 09:47:09 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1))
19/08/02 09:47:09 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
19/08/02 09:47:09 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.244.0.17, executor 1, partition 0, PROCESS_LOCAL, 7885 bytes)
19/08/02 09:47:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.244.0.17:34980 (size: 1256.0 B, free: 413.9 MB)
19/08/02 09:47:10 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 10.244.0.17, executor 1, partition 1, PROCESS_LOCAL, 7885 bytes)
19/08/02 09:47:10 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 997 ms on 10.244.0.17 (executor 1) (1/2)
19/08/02 09:47:10 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 47 ms on 10.244.0.17 (executor 1) (2/2)
19/08/02 09:47:10 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/08/02 09:47:10 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 1.302 s
19/08/02 09:47:10 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 1.394196 s
Pi is roughly 3.1404557022785116
19/08/02 09:47:10 INFO SparkUI: Stopped Spark web UI at http://spark-pi-1564739172051-driver-svc.default.svc:4040
19/08/02 09:47:10 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
19/08/02 09:47:10 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
19/08/02 09:47:10 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
19/08/02 09:47:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/08/02 09:47:10 INFO MemoryStore: MemoryStore cleared
19/08/02 09:47:10 INFO BlockManager: BlockManager stopped
19/08/02 09:47:10 INFO BlockManagerMaster: BlockManagerMaster stopped
19/08/02 09:47:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/08/02 09:47:10 INFO SparkContext: Successfully stopped SparkContext
19/08/02 09:47:10 INFO ShutdownHookManager: Shutdown hook called
19/08/02 09:47:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-c2fa234c-36d3-44e5-a708-eeeeb8cf5b67
19/08/02 09:47:10 INFO ShutdownHookManager: Deleting directory /var/data/spark-f7d9f074-c1e9-45bd-9bfa-35e6156da240/spark-f58ab9b9-19a5-4388-89db-725d0b2ca482
That's it, we're done!