Docker Desktop

An installation guide tailored for users in mainland China: https://github.com/AliyunContainerService/k8s-for-docker-desktop

Docker Engine daemon settings (the registry mirrors below speed up image pulls):
{
  "experimental": false,
  "debug": true,
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ]
}

Note: follow the steps one at a time and the installation will go smoothly.
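
After restarting Docker, you can check that the mirror settings took effect (an optional check):

docker info   # the configured mirrors should be listed under "Registry Mirrors"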

https://docs.docker.com/get-started/

Building the App’s Container Image

Using a Dockerfile to build the image

In order to build the application, we need to use a Dockerfile. A Dockerfile is simply a text-based script of instructions that is used to create a container image. If you’ve created Dockerfiles before, you might see a few flaws in the Dockerfile below. But, don’t worry! We’ll go over them.


  1. Create a file named Dockerfile in the same folder as the file package.json with the following contents.

    # Base image
    FROM node:12-alpine
    # Set the working directory inside the container
    WORKDIR /app
    # Copy the host's current directory into the container's working directory
    COPY . .
    # Install production dependencies
    RUN yarn install --production
    # Default command: start the node app
    CMD ["node", "src/index.js"]
    

    Please check that the file Dockerfile has no file extension like .txt. Some editors may append this file extension automatically and this would result in an error in the next step.

  2. If you haven’t already done so, open a terminal and go to the app directory with the Dockerfile. Now build the container image using the docker build command.

    docker build -t getting-started .
    ---
    docker build : the command that builds an image
    -t : tags the image; here the name is getting-started
    . : tells Docker to look for the Dockerfile in the current directory
    

    This command used the Dockerfile to build a new container image. You might have noticed that a lot of “layers” were downloaded. This is because we instructed the builder that we wanted to start from the node:12-alpine image. But, since we didn’t have that on our machine, that image needed to be downloaded.

    After the image was downloaded, we copied in our application and used yarn to install our application’s dependencies. The CMD directive specifies the default command to run when starting a container from this image.

    Finally, the -t flag tags our image. Think of this simply as a human-readable name for the final image. Since we named the image getting-started, we can refer to that image when we run a container.

    The . at the end of the docker build command tells that Docker should look for the Dockerfile in the current directory.
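
    To quickly confirm the image exists locally, you can list it (an optional check, not a step from the original tutorial):

    docker image ls getting-started   # should show the freshly built image and its size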

Starting an App Container

docker run -dp 3000:3000 getting-started

docker run : the command that starts a container from an image
-dp : shorthand for -d -p; -d runs the container in the background (detached), -p maps ports (host port:container port)
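
A quick, optional way to verify the container is up and serving (assuming curl is available on your host):

docker ps                        # the getting-started container should appear in the list
curl -I http://localhost:3000    # the todo app should respond on the mapped port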

Stopping and removing containers

docker ps  # list running containers
docker stop <the-container-id>  # stop a container
docker rm <the-container-id>  # remove a stopped container

docker rm -f <the-container-id> # stop and remove a container in one step


Pushing Our Image

  1. Create a Docker repository on Docker Hub --> https://hub.docker.com/


    If the push fails, log in and re-tag the image with your username as described below:

Log in to Docker Hub from the command line:

docker login -u YOUR_USER_NAME  # log in; enter your password when prompted

Tag the image with your username:

docker tag getting-started YOUR_USER_NAME/getting-started

Push the local image to your Docker Hub repository:

docker push YOUR_USER_NAME/getting-started

Docker and CI

In the demo above we learned how to push our image to Docker Hub and then, in Play with Docker (PWD), pull that image from Docker Hub and run it.

This is essentially the familiar CI pipeline flow: the pipeline builds the image and pushes it to a Docker registry, and the production environment then pulls the just-pushed latest image and runs it.
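
A rough sketch of that flow in plain shell (the image name is a placeholder; in practice the first two commands run inside a CI job and the last two on the production host):

# CI pipeline: build and push the image
docker build -t YOUR_USER_NAME/getting-started:latest .
docker push YOUR_USER_NAME/getting-started:latest

# Production host: pull the freshly pushed image and run it
docker pull YOUR_USER_NAME/getting-started:latest
docker run -dp 3000:3000 YOUR_USER_NAME/getting-started:latest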

Persisting our DB

The Container’s Filesystem

When a container runs, it uses the various layers from an image for its filesystem. Each container also gets its own “scratch space” to create/update/remove files. Any changes won’t be seen in another container, even if they are using the same image.


Run a command inside a running container (here /data.txt is a file created in an earlier tutorial step):

docker exec <container-id> cat /data.txt  
---
-it : runs the command interactively with a TTY attached; the container's shell is connected to your local terminal, so whatever you type locally is passed into the container.
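
For example, to open an interactive shell inside the running container (this is where -it is needed; sh is used because the alpine-based image has no bash):

docker exec -it <container-id> sh   # type exit to leave the container's shell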

Container Volumes

With the previous experiment, we saw that each container starts from the image definition each time it starts. While containers can create, update, and delete files, those changes are lost when the container is removed and all changes are isolated to that container. With volumes, we can change all of this.


Volumes provide the ability to connect specific filesystem paths of the container back to the host machine. If a directory in the container is mounted, changes in that directory are also seen on the host machine. If we mount that same directory across container restarts, we’d see the same files.

There are two main types of volumes. We will eventually use both, but we will start with named volumes.

(Diagram: volumes on the Docker host)


  • named volume

As mentioned, we are going to use a named volume. Think of a named volume as simply a bucket of data. Docker maintains the physical location on the disk and you only need to remember the name of the volume. Every time you use the volume, Docker will make sure the correct data is provided.


  1. Create a volume by using the docker volume create command.

    docker volume create todo-db
    
  2. Stop the todo app container once again in the Dashboard (or with docker rm -f <id>), as it is still running without using the persistent volume.

  3. Start the todo app container, but add the -v flag to specify a volume mount. We will use the named volume and mount it to /etc/todos, which will capture all files created at the path.

    docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started
    
  4. Once the container starts up, open the app and add a few items to your todo list.

    (Screenshot: items added to the todo list)

  5. Remove the container for the todo app. Use the Dashboard or docker ps to get the ID and then docker rm -f <id> to remove it.

  6. Start a new container using the same command from above.

  7. Open the app. You should see your items still in your list!

  8. Go ahead and remove the container when you’re done checking out your list.

Hooray! You’ve now learned how to persist data!

Diving into our Volume

A lot of people frequently ask “Where is Docker actually storing my data when I use a named volume?” If you want to know, you can use the docker volume inspect command.


$ docker volume inspect todo-db                                                                         
[
    {
        "CreatedAt": "2020-09-22T02:15:53Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]

The Mountpoint field is where the data actually lives on disk.

Note: Accessing volume data directly on Docker Desktop

If you are experimenting with Docker Desktop, as I am, you won't find this directory anywhere on your host machine.
That's because Docker Desktop runs Docker inside a small virtual machine (VM); the docker commands actually execute inside that VM, so to look at the real Mountpoint directory you would have to get into that VM.
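
If you just want to peek at the volume's contents without entering that VM, one option is to mount the volume into a throwaway container (a small optional sketch, not part of the original tutorial):

docker run --rm -v todo-db:/data alpine ls -l /data   # list the files stored in the todo-db volume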

Recap

But we should also recognize that rebuilding the image after every code change takes a lot of time. There has to be a better way.

As hinted earlier, that better way is bind mounts!

Using Bind Mounts

mount: attaching a filesystem or directory so that it becomes accessible at a given path.

In the previous chapter, we talked about and used a named volume to persist the data in our database. Named volumes are great if we simply want to store data, as we don’t have to worry about where the data is stored.


With bind mounts, we control the exact mountpoint on the host. We can use this to persist data, but is often used to provide additional data into containers. When working on an application, we can use a bind mount to mount our source code into the container to let it see code changes, respond, and let us see the changes right away.


  • Quick Volume Type Comparisons

    (Table: quick comparison of named volumes and bind mounts)

    Starting a Dev-Mode Container

    To run our container to support a development workflow, we will do the following:

    • Mount our source code into the container
    • Install all dependencies, including the “dev” dependencies
    • Start nodemon to watch for filesystem changes


    For Node-based applications, nodemon is a great tool to watch for file changes and then restart the application. There are equivalent tools in most other languages and frameworks.


    Let's get to it!

    1. Make sure you don’t have any previous getting-started containers running.

    2. Run the following command. We’ll explain what’s going on afterwards:

      docker run -dp 3000:3000 \
          -w /app -v "$(pwd):/app" \
          node:12-alpine \
          sh -c "yarn install && yarn run dev"
      
      • -dp 3000:3000 - same as before. Run in detached (background) mode and create a port mapping

      • -w /app - sets the “working directory” or the current directory that the command will run from

      • -v "$(pwd):/app" - bind mount the current directory from the host in the container into the /app directory


      • node:12-alpine - the image to use. Note that this is the base image for our app from the Dockerfile

      • sh -c "yarn install && yarn run dev" - the command. We’re starting a shell using sh (alpine doesn’t have bash) and running yarn install to install all dependencies and then running yarn run dev. If we look in the package.json, we’ll see that the dev script is starting nodemon.

    3. You can watch the logs using docker logs -f <container-id>. You’ll know you’re ready to go when you see this…

      docker logs -f <container-id>
      $ nodemon src/index.js
      [nodemon] 1.19.2
      [nodemon] to restart at any time, enter `rs`
      [nodemon] watching dir(s): *.*
      [nodemon] starting `node src/index.js`
      Using sqlite database at /etc/todos/todo.db
      Listening on port 3000
      

      When you’re done watching the logs, exit out by hitting Ctrl+C.

    4. Now, let’s make a change to the app. In the src/static/js/app.js file, let’s change the “Add Item” button to simply say “Add”. This change will be on line 109.

      -                         {submitting ? 'Adding...' : 'Add Item'}
      +                         {submitting ? 'Adding...' : 'Add'}
      
    5. Simply refresh the page (or open it) and you should see the change reflected in the browser almost immediately. It might take a few seconds for the Node server to restart, so if you get an error, just try refreshing after a few seconds.

      Screenshot of updated label for Add button

    6. Feel free to make any other changes you’d like to make. When you’re done, stop the container and build your new image using docker build -t getting-started ..

    Using bind mounts is very common for local development setups. The advantage is that the dev machine doesn’t need to have all of the build tools and environments installed. With a single docker run command, the dev environment is pulled and ready to go. We’ll talk about Docker Compose in a future step, as this will help simplify our commands (we’re already getting a lot of flags).


    Recap

    At this point, we can persist our database and respond rapidly to the needs and demands of our investors and founders. Hooray! But, guess what? We received great news!

    Your project has been selected for future development!

    In order to prepare for production, we need to migrate our database from working in SQLite to something that can scale a little better. For simplicity, we’ll keep with a relational database and switch our application to use MySQL. But, how should we run MySQL? How do we allow the containers to talk to each other? We’ll talk about that next!

    If our application suddenly takes off and we need to switch the database from SQLite to MySQL, how do we do that?

    Multi-Container Apps

    http://localhost/tutorial/multi-container-apps/

    Todo App connected to MySQL container

    Container Networking

    Remember that containers, by default, run in isolation and don’t know anything about other processes or containers on the same machine. So, how do we allow one container to talk to another? The answer is networking. Now, you don’t have to be a network engineer (hooray!). Simply remember this rule…

    If two containers are on the same network, they can talk to each other. If they aren’t, they can’t.


    Starting MySQL

    There are two ways to put a container on a network: 1) Assign it at start or 2) connect an existing container. For now, we will create the network first and attach the MySQL container at startup.

    1. Create the network.

      docker network create todo-app
      
    2. Start a MySQL container and attach it to the network. We’re also going to define a few environment variables that the database will use to initialize the database (see the “Environment Variables” section in the MySQL Docker Hub listing).

      docker run -d \
          --network todo-app --network-alias mysql \
          -v todo-mysql-data:/var/lib/mysql \
          -e MYSQL_ROOT_PASSWORD=secret \
          -e MYSQL_DATABASE=todos \
          mysql:5.7
          
       ---
       Pull (if needed) and run a MySQL container:
       --network todo-app --network-alias mysql   # attach to the todo-app network and give the container the network alias "mysql"
       -v todo-mysql-data:/var/lib/mysql          # mount the named volume todo-mysql-data at MySQL's data directory /var/lib/mysql

      You’ll also see we specified the --network-alias flag. We’ll come back to that in just a moment.

      Pro-tip

      You’ll notice we’re using a volume named todo-mysql-data here and mounting it at /var/lib/mysql, which is where MySQL stores its data. However, we never ran a docker volume create command. Docker recognizes we want to use a named volume and creates one automatically for us.

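      To see the volume Docker created automatically, you can list the volumes (optional check):

      docker volume ls   # todo-mysql-data should appear in the list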

    3. To confirm we have the database up and running, connect to the database and verify it connects.

      docker exec -it <mysql-container-id> mysql -p
      

      When the password prompt comes up, type in secret. In the MySQL shell, list the databases and verify you see the todos database.

      mysql> SHOW DATABASES;
      

      You should see output that looks like this:

      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | mysql              |
      | performance_schema |
      | sys                |
      | todos              |
      +--------------------+
      5 rows in set (0.00 sec)
      

      Hooray! We have our todos database and it’s ready for us to use!

    Running our App with MySQL

    The todo app supports the setting of a few environment variables to specify MySQL connection settings. They are:


    • MYSQL_HOST - the hostname for the running MySQL server

      The hostname of the MySQL server; since both containers are on the todo-app network, we can simply use the network alias we gave the MySQL container: mysql

    • MYSQL_USER - the username to use for the connection

    • MYSQL_PASSWORD - the password to use for the connection

    • MYSQL_DB - the database to use once connected

docker run -dp 3000:3000 \
  -w /app \
  -v "$(pwd):/app" \
  --network todo-app \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  node:12-alpine \
  sh -c "npm install && npm run dev"
  
  
  ---
  
-w : sets the working directory inside the container
-v : bind mounts the current host directory into the container (it is a mount, not a copy)
--network : attaches the container to the specified network
-e : sets an environment variable
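
To confirm the app really connected to MySQL, follow its logs; if everything is wired up correctly you should see output like the lines below (the same lines appear later in the Compose logs):

docker logs -f <container-id>
# Connected to mysql db at host mysql
# Listening on port 3000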

Recap – Networking

At this point, we have an application that now stores its data in an external database running in a separate container. We learned a little bit about container networking and saw how service discovery can be performed using DNS.


But, there’s a good chance you are starting to feel a little overwhelmed with everything you need to do to start up this application. We have to create a network, start containers, specify all of the environment variables, expose ports, and more! That’s a lot to remember and it’s certainly making things harder to pass along to someone else.


In the next section, we’ll talk about Docker Compose. With Docker Compose, we can share our application stacks in a much easier way and let others spin them up with a single (and simple) command!


Using Docker Compose

Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we can create a YAML file to define the services and with a single command, can spin everything up or tear it all down.


The big advantage of using Compose is you can define your application stack in a file, keep it at the root of your project repo (it’s now version controlled), and easily enable someone else to contribute to your project. Someone would only need to clone your repo and start the compose app. In fact, you might see quite a few projects on GitHub/GitLab doing exactly this now.


So, how do we get started?

Installing Docker Compose

If you installed Docker Desktop/Toolbox for either Windows or Mac, you already have Docker Compose! Play-with-Docker instances already have Docker Compose installed as well. If you are on a Linux machine, you will need to install Docker Compose using the instructions here.


After installation, you should be able to run the following and see version information.

docker-compose version

Creating our Compose File

  1. At the root of the app project, create a file named docker-compose.yml.


  2. In the compose file, we’ll start off by defining the schema version. In most cases, it’s best to use the latest supported version. You can look at the Compose file reference for the current schema versions and the compatibility matrix.


    version: "3.7"
    
  3. Next, we’ll define the list of services (or containers) we want to run as part of our application.


    version: "3.7"
    
    services:
    

And now, we’ll start migrating a service at a time into the compose file.

version: "3.7"

services:
  todo-app: # service name; Compose automatically turns it into a network alias
    image: node:12-alpine # base image
    command: sh -c "yarn install && yarn run dev" # shell command to run inside the container
    ports:        # port mapping
      - 8080:3000  # host:container
    working_dir: /app # working directory inside the container
    volumes:      # volume mappings
      - ./:/app   # with Docker Compose we can use relative paths
    environment: # container environment variables
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:  # service name; Compose automatically turns it into a network alias
    image: mysql:5.7 # base image
    volumes:  # named volume for MySQL's data (with Docker Desktop this lives inside its VM, not directly on the host)
      - todo-mysql-data:/var/lib/mysql
    environment: # environment variables
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

# In a Compose file, named volumes must be declared explicitly; Compose will not create them automatically
volumes:
  todo-mysql-data:
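
Before starting anything, you can ask Compose to validate the file and print the resolved configuration (an optional sanity check):

docker-compose config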

Running our Application Stack

Now that we have our docker-compose.yml file, we can start it up!

  1. Make sure no other copies of the app/db are running first (docker ps and docker rm -f <ids>).

  2. Start up the application stack using the docker-compose up command. We’ll add the -d flag to run everything in the background.

    docker-compose up -d 
    

    When we run this, we should see output like this:

    Creating network "app_default" with the default driver     # the network is created
    Creating volume "app_todo-mysql-data" with default driver  # the volume is created
    Creating app_mysql_1    ... done   # containers are named <project>_<service>_<replica-number>
    Creating app_todo-app_1 ... done
    

    You’ll notice that the volume was created as well as a network! By default, Docker Compose automatically creates a network specifically for the application stack (which is why we didn’t define one in the compose file).

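    To see what Compose created, you can list networks and volumes (optional check; the app_ prefix assumes the project directory is named app, as in the output above):

    docker network ls   # should include app_default
    docker volume ls    # should include app_todo-mysql-data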

  3. Let’s look at the logs using the docker-compose logs -f command. You’ll see the logs from each of the services interleaved into a single stream. This is incredibly useful when you want to watch for timing-related issues. The -f flag “follows” the log, so will give you live output as it’s generated.

    If you don’t already, you’ll see output that looks like this…

    mysql_1  | 2019-10-03T03:07:16.083639Z 0 [Note] mysqld: ready for connections.
    mysql_1  | Version: '5.7.27'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
    app_1    | Connected to mysql db at host mysql
    app_1    | Listening on port 3000
    

    The service name is displayed at the beginning of the line (often colored) to help distinguish messages. If you want to view the logs for a specific service, you can add the service name to the end of the logs command (for example, docker-compose logs -f app).

    Pro tip - Waiting for the DB before starting the app

    When the app is starting up, it actually sits and waits for MySQL to be up and ready before trying to connect to it. Docker doesn’t have any built-in support to wait for another container to be fully up, running, and ready before starting another container. For Node-based projects, you can use the wait-port dependency. Similar projects exist for other languages/frameworks.

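    A minimal way to express "wait for the DB first" in plain shell, assuming nc is available in the app image (alpine's busybox typically provides it), would be something like:

    # keep probing the mysql host until port 3306 accepts connections, then start the app
    sh -c "until nc -z mysql 3306; do sleep 1; done && yarn install && yarn run dev"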

  4. At this point, you should be able to open your app and see it running. And hey! We’re down to a single command!

Seeing our App Stack in Docker Dashboard

If we look at the Docker Dashboard, we’ll see that there is a group named app. This is the “project name” from Docker Compose and used to group the containers together. By default, the project name is simply the name of the directory that the docker-compose.yml was located in.

If you twirl down the app, you will see the two containers we defined in the compose file. The names are also a little more descriptive, as they follow the pattern of <project-name>_<service-name>_<replica-number>. So, it’s very easy to quickly see what container is our app and which container is the mysql database.



Tearing it All Down

When you’re ready to tear it all down, simply run docker-compose down or hit the trash can on the Docker Dashboard for the entire app. The containers will stop and the network will be removed.

➜ guang@guang  ~/app  docker-compose down
Stopping app_todo-app_1 ... done
Stopping app_mysql_1    ... done
Removing app_todo-app_1 ... done
Removing app_mysql_1    ... done
Removing network app_default


Removing Volumes

By default, named volumes in your compose file are NOT removed when running docker-compose down. If you want to remove the volumes, you will need to add the --volumes flag.

The Docker Dashboard does not remove volumes when you delete the app stack.
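
In command form (the flag belongs to docker-compose down):

docker-compose down --volumes   # stop containers, remove the network AND the named volumes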

Once torn down, you can switch to another project, run docker-compose up and be ready to contribute to that project! It really doesn’t get much simpler than that!

Recap

In this section, we learned about Docker Compose and how it helps us dramatically simplify the defining and sharing of multi-service applications. We created a Compose file by translating the commands we were using into the appropriate compose format.


At this point, we’re starting to wrap up the tutorial. However, there are a few best practices about image building we want to cover, as there is a big issue with the Dockerfile we’ve been using. So, let’s take a look!


Image Building Best Practices

Image Layering

Did you know that you can look at what makes up an image? Using the docker image history command, you can see the command that was used to create each layer within an image.

  1. Use the docker image history command to see the layers in the getting-started image you created earlier in the tutorial.

    docker image history getting-started
    

    You should get output that looks something like this (dates/IDs may be different).

    IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
    a78a40cbf866        18 seconds ago      /bin/sh -c #(nop)  CMD ["node" "src/index.j…    0B                  
    f1d1808565d6        19 seconds ago      /bin/sh -c yarn install --production            85.4MB              
    a2c054d14948        36 seconds ago      /bin/sh -c #(nop) COPY dir:5dc710ad87c789593…   198kB               
    9577ae713121        37 seconds ago      /bin/sh -c #(nop) WORKDIR /app                  0B                  
    b95baba1cfdb        13 days ago         /bin/sh -c #(nop)  CMD ["node"]                 0B                  
    <missing>           13 days ago         /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B                  
    <missing>           13 days ago         /bin/sh -c #(nop) COPY file:238737301d473041…   116B                
    <missing>           13 days ago         /bin/sh -c apk add --no-cache --virtual .bui…   5.35MB              
    <missing>           13 days ago         /bin/sh -c #(nop)  ENV YARN_VERSION=1.21.1      0B                  
    <missing>           13 days ago         /bin/sh -c addgroup -g 1000 node     && addu…   74.3MB              
    <missing>           13 days ago         /bin/sh -c #(nop)  ENV NODE_VERSION=12.14.1     0B                  
    <missing>           13 days ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
    <missing>           13 days ago         /bin/sh -c #(nop) ADD file:e69d441d729412d24…   5.59MB   
    

    Each of the lines represents a layer in the image. The display here shows the base at the bottom with the newest layer at the top. Using this, you can also quickly see the size of each layer, helping diagnose large images.


  2. You’ll notice that several of the lines are truncated. If you add the --no-trunc flag, you’ll get the full output (yes… funny how you use a truncated flag to get untruncated output, huh?)

    docker image history --no-trunc getting-started
    

Layer Caching — speeding up image builds

Now that you’ve seen the layering in action, there’s an important lesson to learn to help decrease build times for your container images.

Once a layer changes, all downstream layers have to be recreated as well


Let’s look at the Dockerfile we were using one more time…

FROM node:12-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

Going back to the image history output, we see that each command in the Dockerfile becomes a new layer in the image. You might remember that when we made a change to the image, the yarn dependencies had to be reinstalled. Is there a way to fix this? It doesn’t make much sense to ship around the same dependencies every time we build, right?

Looking back at our Node project: every time we change the source code, yarn install runs again and re-downloads every dependency, even though we didn't add any new ones. Every build ships the same dependencies. How do we solve this?

To fix this, we need to restructure our Dockerfile to help support the caching of the dependencies. For Node-based applications, those dependencies are defined in the package.json file. So, what if we copied only that file in first, install the dependencies, and then copy in everything else? Then, we only recreate the yarn dependencies if there was a change to the package.json. Make sense?

We need to restructure the Dockerfile so that the dependency-installation layer can be cached. Since Node installs dependencies based on package.json, we can copy package.json into the container first and install the dependencies, and only then copy in the source code and build. That way, the dependency layer is reused as long as package.json has not changed.

  1. Update the Dockerfile to copy in the package.json first, install dependencies, and then copy everything else in.

    FROM node:12-alpine
    WORKDIR /app
    COPY package.json yarn.lock ./
    RUN yarn install --production
    COPY . .
    CMD ["node", "src/index.js"]
    
  2. Create a file named .dockerignore in the same folder as the Dockerfile with the following contents.

    node_modules
    

    .dockerignore files are an easy way to selectively copy only image relevant files. You can read more about this here. In this case, the node_modules folder should be omitted in the second COPY step because otherwise, it would possibly overwrite files which were created by the command in the RUN step. For further details on why this is recommended for Node.js applications and other best practices, have a look at their guide on Dockerizing a Node.js web app.


  3. Build a new image using docker build.

    docker build -t getting-started .
    

    You should see output like this…

    Sending build context to Docker daemon  219.1kB
    Step 1/6 : FROM node:12-alpine
    ---> b0dc3a5e5e9e
    Step 2/6 : WORKDIR /app
    ---> Using cache
    ---> 9577ae713121
    Step 3/6 : COPY package.json yarn.lock ./
    ---> bd5306f49fc8
    Step 4/6 : RUN yarn install --production
    ---> Running in d53a06c9e4c2
    yarn install v1.17.3
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    info fsevents@1.2.9: The platform "linux" is incompatible with this module.
    info "fsevents@1.2.9" is an optional dependency and failed compatibility check. Excluding it from installation.
    [3/4] Linking dependencies...
    [4/4] Building fresh packages...
    Done in 10.89s.
    Removing intermediate container d53a06c9e4c2
    ---> 4e68fbc2d704
    Step 5/6 : COPY . .
    ---> a239a11f68d8
    Step 6/6 : CMD ["node", "src/index.js"]
    ---> Running in 49999f68df8f
    Removing intermediate container 49999f68df8f
    ---> e709c03bc597
    Successfully built e709c03bc597
    Successfully tagged getting-started:latest
    

    You’ll see that all layers were rebuilt. Perfectly fine since we changed the Dockerfile quite a bit.

  4. Now, make a change to the src/static/index.html file (like change the <title> to say “The Awesome Todo App”).

  5. Build the Docker image now using docker build -t getting-started . again. This time, your output should look a little different.

    Sending build context to Docker daemon  219.1kB
    Step 1/6 : FROM node:12-alpine
    ---> b0dc3a5e5e9e
    Step 2/6 : WORKDIR /app
    ---> Using cache
    ---> 9577ae713121
    Step 3/6 : COPY package.json yarn.lock ./
    ---> Using cache
    ---> bd5306f49fc8
    Step 4/6 : RUN yarn install --production
    ---> Using cache
    ---> 4e68fbc2d704
    Step 5/6 : COPY . .
    ---> cccde25a3d9a
    Step 6/6 : CMD ["node", "src/index.js"]
    ---> Running in 2be75662c150
    Removing intermediate container 2be75662c150
    ---> 458e5c6f080c
    Successfully built 458e5c6f080c
    Successfully tagged getting-started:latest
    

    First off, you should notice that the build was MUCH faster! And, you’ll see that steps 1-4 all have Using cache. So, hooray! We’re using the build cache. Pushing and pulling this image and updates to it will be much faster as well. Hooray!

Multi-Stage Builds — reducing image size

While we’re not going to dive into it too much in this tutorial, multi-stage builds are an incredibly powerful tool to help use multiple stages to create an image. There are several advantages for them:

  • Separate build-time dependencies from runtime dependencies
  • Reduce overall image size by shipping only what your app needs to run

Maven/Tomcat Example

When building Java-based applications, a JDK is needed to compile the source code to Java bytecode. However, that JDK isn’t needed in production. Also, you might be using tools like Maven or Gradle to help build the app. Those also aren’t needed in our final image. Multi-stage builds help.

# First stage, named "build": compile the app with Maven
FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package

# Second stage: the runtime image
FROM tomcat
# Copy the artifact (/app/target/file.war) from the "build" stage into Tomcat's webapps directory
COPY --from=build /app/target/file.war /usr/local/tomcat/webapps

In this example, we use one stage (called build) to perform the actual Java build using Maven. In the second stage (starting at FROM tomcat), we copy in files from the build stage. The final image is only the last stage being created (which can be overridden using the --target flag).
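
As a side note, the --target flag mentioned above lets you build only up to a named stage, which can be handy for inspecting the build stage by itself (the tag name here is just an illustration):

docker build --target build -t app-build-stage .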

React Example

When building React applications, we need a Node environment to compile the JS code (typically JSX), SASS stylesheets, and more into static HTML, JS, and CSS. If we aren’t doing server-side rendering, we don’t even need a Node environment for our production build. Why not ship the static resources in a static nginx container?

FROM node:12 AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

Here, we are using a node:12 image to perform the build (maximizing layer caching) and then copying the output into an nginx container. Cool, huh?

Recap

By understanding a little bit about how images are structured, we can build images faster and ship fewer changes. Multi-stage builds also help us reduce overall image size and increase final container security by separating build-time dependencies from runtime dependencies.

What Next?

We've now covered the basics of Docker, and we won't go any deeper here. Let's look at a few other topics that come up when working with containers:

Container Orchestration

Running containers in production is tough. You don’t want to log into a machine and simply run a docker run or docker-compose up. Why not? Well, what happens if the containers die? How do you scale across several machines? Container orchestration solves this problem. Tools like Kubernetes, Swarm, Nomad, and ECS all help solve this problem, all in slightly different ways.

The general idea is that you have “managers” who receive expected state. This state might be “I want to run two instances of my web app and expose port 80.” The managers then look at all of the machines in the cluster and delegate work to “worker” nodes. The managers watch for changes (such as a container quitting) and then work to make actual state reflect the expected state.
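
As a tiny illustration of that "expected state" idea using Docker's built-in Swarm mode (only a sketch; Kubernetes and the other tools express the same thing in their own ways, and the image name is a placeholder):

docker swarm init                                                        # make this machine a one-node swarm manager
docker service create --name web --replicas 2 -p 80:3000 YOUR_USER_NAME/getting-started
# the manager now keeps two replicas running and replaces them if they die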

Cloud Native Computing Foundation Projects

The CNCF is a vendor-neutral home for various open-source projects, including Kubernetes, Prometheus, Envoy, Linkerd, NATS, and more! You can view the graduated and incubated projects here and the entire CNCF Landscape here. There are a LOT of projects to help solve problems around monitoring, logging, security, image registries, messaging, and more!

So, if you’re new to the container landscape and cloud-native application development, welcome! Please connect to the community, ask questions, and keep learning! We’re excited to have you!
