Background

I need to pick Kubernetes back up for study, and it moves fast: the last version I used was v1.13, and it is already at v1.17 now.
In the past I looked up a tutorial every time I installed it, and things often did not match my setup, which was a pain, so this time I am simply recording the whole installation process myself.

Tips

If you cannot pull the official images, see step 4 of the post Ubuntu16.04下部署安装k8s for how to pull the images manually and re-tag them; I have tried that approach and it works.
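
The idea in that post is to pull each image from a mirror registry you can actually reach, then re-tag it to the k8s.gcr.io name that kubeadm expects. A minimal sketch of the pattern; the mirror prefix and version tag here are placeholders, take the real values from the linked post or from kubeadm config images list once kubeadm is installed:

MIRROR=registry.aliyuncs.com/google_containers   # assumed mirror prefix, adjust to one you can reach
docker pull ${MIRROR}/kube-apiserver:v1.17.0
docker tag ${MIRROR}/kube-apiserver:v1.17.0 k8s.gcr.io/kube-apiserver:v1.17.0
docker rmi ${MIRROR}/kube-apiserver:v1.17.0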

The proxy configured in the steps below is used only for pulling images.

Steps

Set up the Polipo terminal proxy

Install Polipo:
sudo apt-get install polipo


Edit the configuration file:
sudo gedit /etc/polipo/config

Replace the entire contents of the file with the following and save it (socksParentProxy assumes you already have a SOCKS5 proxy listening on 127.0.0.1:1080; adjust it if yours is different):

logSyslog = false
logFile = "/var/log/polipo/polipo.log"
socksParentProxy = "127.0.0.1:1080"
socksProxyType = socks5
chunkHighMark = 50331648
objectHighMark = 16384
serverMaxSlots = 64
serverSlots = 16
serverSlots1 = 32
proxyAddress = "0.0.0.0"
proxyPort = 8123

Restart Polipo:
sudo /etc/init.d/polipo restart

Verify that it is working:
export http_proxy="http://127.0.0.1:8123/"
curl www.google.com

If everything is working, curl will return the fetched page content.

Install Docker and set up the Docker proxy

  1. Install Docker

    sudo apt-get update
    sudo apt-get install -y docker.io
    
  2. Set the cgroup driver in the Docker daemon config to systemd

    sudo su
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
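
    Once Docker has been restarted (step 3 below restarts it anyway), you can check that the new cgroup driver took effect:

    docker info | grep -i cgroup

    It should report Cgroup Driver: systemd rather than the default cgroupfs.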
    
  3. Configure a proxy for the Docker daemon
    Create a systemd drop-in directory for the docker service:

    mkdir -p /etc/systemd/system/docker.service.d
    

    Create the HTTP proxy file /etc/systemd/system/docker.service.d/http-proxy.conf with the following content:

    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:8123/"
    

    Likewise, create the HTTPS proxy file /etc/systemd/system/docker.service.d/https-proxy.conf:

    [Service]
    Environment="HTTPS_PROXY=http://127.0.0.1:8123/"
    

    Reload systemd to pick up the changes:

    sudo systemctl daemon-reload
    

    Restart the Docker service:

    sudo systemctl restart docker
    

    Verify that the configuration has taken effect:

    systemctl show --property=Environment docker
    

    If you see output like the following, it worked:

    Environment=HTTP_PROXY=http://127.0.0.1:8123/ HTTPS_PROXY=http://127.0.0.1:8123/
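
    As a final end-to-end check that Docker can reach blocked registries through the proxy, try pulling a small image (the tag here is only an example):

    docker pull k8s.gcr.io/pause:3.1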
    

Install the latest version of K8s

Run sudo su to become root; all of the commands below are executed as root.

  1. Turn off swap!!!

    swapoff -a
    

    This only disables swap temporarily. If kubeadm init still fails on the swap check, follow my post ubuntu 16.04 swapoff -a无效导致kubectl启动失败, or disable swap permanently as shown in the pitfalls section below.

  2. Install kubeadm, kubelet, and kubectl

    This step relies on the Polipo terminal proxy configured above.

    First set the terminal proxy:

    export http_proxy="http://127.0.0.1:8123/"
    export https_proxy="http://127.0.0.1:8123/"
    

    Then run:

    apt-get update && sudo apt-get install -y apt-transport-https curl
    
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    
    cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    
    apt-get update
    apt-get install -y kubelet kubeadm kubectl
    apt-mark hold kubelet kubeadm kubectl
    

    You can pre-pull the required Docker images at this point:

    kubeadm config images pull
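
    If you only want to see which images kubeadm is going to use (for example, to pull them manually through a mirror as mentioned in the tips above), you can list them first:

    kubeadm config images list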
    
  3. Set up the master node
    Unset the terminal proxy from step 2 here, otherwise init will fail at the end:

    unset http_proxy https_proxy
    

    Since Flannel will be used for the pod network later, --pod-network-cidr has to be set at init time:

    kubeadm init --pod-network-cidr=10.244.0.0/16
    

    If it succeeds, you will see output similar to this:

    [init] Using Kubernetes version: vX.Y.Z
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 31.501735 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation
    [mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: <token>
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      /docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    
  4. Configure kubectl access
    Exit root and run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
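
    A quick way to confirm kubectl can reach the cluster (the node will show NotReady until the pod network from step 5 is installed):

    kubectl get nodes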
    
  5. Set up the Flannel network
    Since an update may invalidate the command below, it is best to check the official Installing a pod network add-on page for the latest version of this command:

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
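
    After applying the manifest you can watch the system pods come up; once the flannel and CoreDNS pods are Running, the node should become Ready:

    kubectl get pods --all-namespaces
    kubectl get nodes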
    
  6. Set up a single-node cluster (skip this step if you plan to build a multi-node cluster)
    By default the master does not schedule workloads, so run the following to remove its taint:

    kubectl taint nodes --all node-role.kubernetes.io/master-
    

    Output similar to node/cp untainted means it worked.
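
    To make sure pods really get scheduled on the master, you can run a small test deployment (the name and image are arbitrary examples):

    kubectl create deployment nginx --image=nginx
    kubectl get pods -o wide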

Pitfalls

  1. Temporarily disabling swap did not stick, so I switched to disabling it permanently, as sketched below.
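
    A minimal sketch of the permanent fix: comment out the swap entry in /etc/fstab so it does not come back after a reboot (back up the file first; the sed pattern assumes a standard fstab swap line):

    sudo cp /etc/fstab /etc/fstab.bak
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
    sudo swapoff -a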

  2. Docker and the kubelet used different cgroup drivers. I had already switched Docker's driver to systemd in the steps above, but init still complained that the kubelet's cgroupfs driver did not match Docker's systemd, so I had to change the kubelet's cgroup driver manually as well.

    Edit the kubelet drop-in:
    sudo gedit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    The file looks like this after the change; the key part is adding the --cgroup-driver=systemd flag to Environment="KUBELET_KUBECONFIG_ARGS=...":
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/sysconfig/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
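
    After editing the drop-in, reload systemd and restart the kubelet so the new flag takes effect:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet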
    

References
Official docs: Installing kubeadm
Official docs: Creating a single control-plane cluster with kubeadm
Ubuntu16.04下部署安装k8s
K8s 安裝筆記 - kubeadm 手動 (ubuntu16.04)
Ubuntu实现终端代理
Docker中的Cgroup Driver:Cgroupfs 与 Systemd
