docker-compose up: "ERROR: Encountered errors while bringing up the project." error and fix
Preface
This article describes the error I ran into with the docker-blog-example project when running docker-compose up as instructed in its README.md.
Cause
After some digging, it turned out that the cause was simply that the previous docker-compose session had never been shut down, so the root of the problem was not in the repo itself.
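If you want to verify this yourself, the containers and network left over from the previous run can be listed with the following commands (a rough check only; run them from the project directory, and the exact names will depend on your setup):
docker-compose ps
docker network ls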
Error message
Creating network "docker-blog-example_default" with the default driver
Starting docker-blog-example_spark-master_1_1fd5b135c19a ... error
ERROR: for docker-blog-example_spark-master_1_1fd5b135c19a Cannot start service spark-master: network 9d738b7c5d36e6c6c36c34171e48fd6c280b0b756ace1dd3b341a9be16972466 not found
ERROR: for spark-master Cannot start service spark-master: network 9d738b7c5d36e6c6c36c34171e48fd6c280b0b756ace1dd3b341a9be16972466 not found
ERROR: Encountered errors while bringing up the project.
Solution
The fix turned out to be surprisingly simple. In the docker/compose issue "I am having this 'Encountered errors while bringing up the project.' error", shin- suggested the following solution:
docker-compose down
That is, shut down the previous docker-compose session first.
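As I understand it, docker-compose down stops the project's containers and removes the default network that the previous up had created, which is why the stale network ID from the error message disappears. A rough way to confirm that nothing is left over (the name filter below is just the project name taken from the error messages):
docker network ls --filter name=docker-blog-example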
Then run
docker-compose up
again. The original error was gone, but the following error appeared instead:
Creating network "docker-blog-example_default" with the default driver
Creating docker-blog-example_spark-master_1_8c5cd0aeceed ... done
Creating docker-blog-example_spark-worker-1_1_8b8a3516abd6 ... error
ERROR: for docker-blog-example_spark-worker-1_1_8b8a3516abd6 Cannot start service spark-worker-1: driver failed programming external connectivity on endpoint docker-blog-example_spark-worker-1_1_57c519acf8ef (ea6215a4dbd59e11e2a78127da6f849ecb5490a22e52b4cd44ebb301aecb1c88): Bind for 0.0.0.0:8881 failed: port is already allocated
ERROR: for spark-worker-1 Cannot start service spark-worker-1: driver failed programming external connectivity on endpoint docker-blog-example_spark-worker-1_1_57c519acf8ef (ea6215a4dbd59e11e2a78127da6f849ecb5490a22e52b4cd44ebb301aecb1c88): Bind for 0.0.0.0:8881 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
The clue in this error message is much more obvious: it says that port 8881 is already allocated.
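Before changing anything, it may be worth checking what is actually holding the port. Either of the following should work as a rough check (lsof may need sudo, and the docker ps filter only finds containers, not other host processes):
lsof -i :8881
docker ps --filter publish=8881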
The fix is simple: in docker-compose.yml, change every place that uses port 8881 to another port that is not in use.
For example:
- Change the 8881 under services->spark-worker-1->environment->SPARK_WORKER_PORT to 8889:
services:
...
spark-worker-1:
...
environment:
...
SPARK_WORKER_PORT: 8889 #8881
- Change the 8881 under services->spark-worker-1->expose to 8889:
services:
...
spark-worker-1:
...
expose:
- 7012
- 7013
- 7014
- 7015
- 7016
- 8889 #8881
- Change the 8881 under services->spark-worker-1->ports to 8889:
services:
...
spark-worker-1:
...
ports:
- 8081:8081
- 8889:8889 #8881:8881
- 7012:7012
- 7013:7013
- 7014:7014
- 7015:7015
- 7016:7016
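Before bringing the project up again, the edited file can be sanity-checked with docker-compose config, which parses docker-compose.yml and prints the resolved configuration (or an error if the YAML is broken):
docker-compose config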
Then run docker-compose up again,
and everything comes up successfully!
Creating network "docker-blog-example_default" with the default driver
Creating docker-blog-example_spark-master_1_a6bc79cb4a66 ... done
Creating docker-blog-example_spark-worker-1_1_4641b50edb56 ... done
Creating docker-blog-example_jupyter-debian_1_513e482df427 ... done
Attaching to docker-blog-example_spark-master_1_2662f09fb875, docker-blog-example_spark-worker-1_1_46297c556d03, docker-blog-example_jupyter-debian_1_6589946a5603
spark-master_1_2662f09fb875 | Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
spark-master_1_2662f09fb875 | 18/12/01 01:51:01 INFO Master: Started daemon with process name: 1@spark-master
spark-master_1_2662f09fb875 | 18/12/01 01:51:01 INFO SignalUtils: Registered signal handler for TERM
spark-master_1_2662f09fb875 | 18/12/01 01:51:01 INFO SignalUtils: Registered signal handler for HUP
spark-master_1_2662f09fb875 | 18/12/01 01:51:01 INFO SignalUtils: Registered signal handler for INT
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO SecurityManager: Changing view acls to: root
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO SecurityManager: Changing modify acls to: root
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO SecurityManager: Changing view acls groups to:
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO SecurityManager: Changing modify acls groups to:
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
spark-worker-1_1_46297c556d03 | Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO Worker: Started daemon with process name: 1@spark-worker-1
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SignalUtils: Registered signal handler for TERM
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SignalUtils: Registered signal handler for HUP
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SignalUtils: Registered signal handler for INT
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO Master: Starting Spark master at spark://spark-master:7077
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO Master: Running Spark version 2.2.0
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SecurityManager: Changing view acls to: root
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SecurityManager: Changing modify acls to: root
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SecurityManager: Changing view acls groups to:
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SecurityManager: Changing modify acls groups to:
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO Utils: Successfully started service 'MasterUI' on port 8080.
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO MasterWebUI: Bound MasterWebUI to 0.0.0.0, and started at http://192.168.2.160:8080
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO Utils: Successfully started service on port 6066.
spark-master_1_2662f09fb875 | 18/12/01 01:51:02 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
jupyter-debian_1_6589946a5603 | [I 01:51:03.055 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
jupyter-debian_1_6589946a5603 | [W 01:51:03.070 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
jupyter-debian_1_6589946a5603 | [I 01:51:03.075 NotebookApp] Serving notebooks from local directory: /notebooks
jupyter-debian_1_6589946a5603 | [I 01:51:03.075 NotebookApp] 0 active kernels
jupyter-debian_1_6589946a5603 | [I 01:51:03.075 NotebookApp] The Jupyter Notebook is running at: http://[all ip addresses on your system]:8888/?token=6eb24b6ac3f56f05f6586a1d5b5240495edf264a189f0bd4
jupyter-debian_1_6589946a5603 | [I 01:51:03.075 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
jupyter-debian_1_6589946a5603 | [C 01:51:03.076 NotebookApp]
jupyter-debian_1_6589946a5603 |
jupyter-debian_1_6589946a5603 | Copy/paste this URL into your browser when you connect for the first time,
jupyter-debian_1_6589946a5603 | to login with a token:
jupyter-debian_1_6589946a5603 | http://localhost:8888/?token=6eb24b6ac3f56f05f6586a1d5b5240495edf264a189f0bd4
spark-master_1_2662f09fb875 | 18/12/01 01:51:03 INFO Master: I have been elected leader! New state: ALIVE
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Utils: Successfully started service 'sparkWorker' on port 8889.
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Worker: Starting Spark worker 172.18.0.3:8889 with 2 cores, 2.0 GB RAM
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Worker: Running Spark version 2.2.0
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Worker: Spark home: /usr/local/spark-2.2.0-bin-hadoop2.7
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO WorkerWebUI: Bound WorkerWebUI to 0.0.0.0, and started at http://192.168.2.160:8081
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Worker: Connecting to master spark-master:7077...
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO TransportClientFactory: Successfully created connection to spark-master/172.18.0.2:7077 after 21 ms (0 ms spent in bootstraps)
spark-master_1_2662f09fb875 | 18/12/01 01:51:03 INFO Master: Registering worker 172.18.0.3:8889 with 2 cores, 2.0 GB RAM
spark-worker-1_1_46297c556d03 | 18/12/01 01:51:03 INFO Worker: Successfully registered with master spark://spark-master:7077
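Finally, to avoid running into the original "network not found" error again, remember to shut the project down cleanly when you are done:
docker-compose down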