Building a Calico-based Kubernetes Cluster from Scratch
Before You Start
I previously set up Kubernetes on CoreOS, but since well over 90% of companies in China do not run CoreOS, this article deploys Kubernetes from scratch on Ubuntu 16.04 64-bit with the latest Kubernetes release to date. It has been a while since I last worked with k8s, so this is also a chance to refresh. This chapter focuses on the setup itself; if you follow along, you may need to get around the firewall to pull images. The next chapter will cover the pitfalls hit during deployment and how to fix them. If you run into problems while deploying, feel free to leave a comment or ping me on WeChat: shenshouer
Requirements:
- kubernetes-v1.4.4.tar.gz from the official release downloads
- vagrant 1.8.6
- virtualbox 5.1.8 r111374 (Qt5.5.1)
Preparing the VMs
You can use Vagrant, another hypervisor, or bare-metal hosts; pick whatever fits your environment.
- ubuntu 16.04 64bit
- Vagrantfile contents:
$num_instances = 3
$vm_gui = false
$vm_memory = 1024
$vm_cpus = 1
$instance_name_prefix = "ubuntu"
$vb_cpuexecutioncap = 100

def vm_gui
  $vb_gui.nil? ? $vm_gui : $vb_gui
end

def vm_memory
  $vb_memory.nil? ? $vm_memory : $vb_memory
end

def vm_cpus
  $vb_cpus.nil? ? $vm_cpus : $vb_cpus
end

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  (1..$num_instances).each do |i|
    config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
      config.vm.hostname = vm_name

      config.vm.provider :virtualbox do |vb|
        vb.gui = vm_gui
        vb.memory = vm_memory
        vb.cpus = vm_cpus
        vb.customize ["modifyvm", :id, "--cpuexecutioncap", "#{$vb_cpuexecutioncap}"]
      end

      ip = "172.18.8.#{i+100}"
      config.vm.network :private_network, ip: ip
      config.vm.synced_folder "./data", "/vagrant_data"
    end

    # config.vm.provision "shell", inline: "echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main'>/etc/apt/sources.list.d/kubernetes.list"
    # config.vm.provision "shell", inline: "cat /etc/apt/sources.list.d/kubernetes.list"
    # config.vm.provision "shell", inline: <<-SHELL
    #   export ALL_PROXY=socks5://172.18.8.1:1086
    #   curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    #   apt-get update
    #   apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
    # SHELL
  end
end
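With this Vagrantfile in place, bringing the machines up is the usual Vagrant workflow. A minimal sketch (note that the synced_folder line above requires a local ./data directory):

mkdir -p data          # required by the synced_folder setting above
vagrant up             # boots ubuntu-01 .. ubuntu-03 at 172.18.8.101-103
vagrant ssh ubuntu-01  # the first VM (172.18.8.101) serves as the master in this walkthrough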
Starting the Installation
Setting up the master
Configuring TLS
The master needs the root CA certificate, ca.pem; the API server certificate, apiserver.pem; and its private key, apiserver-key.pem.
- 1. Create an openssl.cnf file with the following contents:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = ${MASTER_IPV4}
Replace ${MASTER_IPV4} with the master's IP; this is the IP used to reach the Kubernetes API. In this test environment the master IP is 172.18.8.101 (see the sketch after this list).
- 2. Generate the required TLS assets:
# Generate the root CA.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
# Generate the API server keypair.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
- 3. Among the generated files, find ca.pem, apiserver.pem, and apiserver-key.pem and copy them to the master host with scp.
- 4. Move the certificates into the /etc/kubernetes/ssl directory and make them readable only by root:
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
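Before running step 2 above, the ${MASTER_IPV4} placeholder has to be substituted, and afterwards it is worth confirming the SANs actually landed in the certificate. A minimal sketch, assuming the 172.18.8.101 master IP from this walkthrough:

# Substitute the master IP into openssl.cnf (adjust to your environment).
MASTER_IPV4=172.18.8.101
sed -i "s/\${MASTER_IPV4}/${MASTER_IPV4}/" openssl.cnf

# After step 2, verify the subjectAltName entries in apiserver.pem:
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"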
Installing Calico's etcd on the master
Calico needs its own etcd cluster to store its state. In this example we run a single-node cluster on the master only.
Note: a distributed cluster is recommended in production; for simplicity this example deploys a single etcd node.
- 1. Download the manifest template:
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
- 2. Replace every <MASTER_IPV4> in calico-etcd.manifest with the master IP address (a sed sketch follows this list).
- 3. Place the file in the /etc/kubernetes/manifests directory on the master. It will have no effect until the kubelet is running, but Calico tolerates etcd being absent in the meantime.
sudo mv -f calico-etcd.manifest /etc/kubernetes/manifests
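A minimal way to do the substitution in step 2, assuming the master IP used throughout this walkthrough:

# Replace the placeholder with this environment's master IP (adjust to yours).
sed -i "s/<MASTER_IPV4>/172.18.8.101/g" calico-etcd.manifest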
Installing Calico on the master
We also need Calico on the master; this allows the master to route packets to pods on the other nodes.
- 1. Install the calicoctl tool:
wget https://github.com/projectcalico/calico-containers/releases/download/v0.22.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
- 2. Pre-pull the calico/node image so the service starts immediately:
sudo docker pull calico/node:v0.22.0
- 3. Download the network-environment template:
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
- 4. Edit network-environment to match this node's settings: replace <KUBERNETES_MASTER> with the master IP. This IP is the source address used to communicate with the worker nodes (see the sketch after this list).
- 5. Move network-environment into /etc:
sudo mv -f network-environment /etc
- 6. Install, enable, and start the calico-node service:
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.3-docs/samples/kubernetes/common/calico-node.service
Its contents:
[Unit]
Description=Calico per-node agent
Documentation=https://github.com/projectcalico/calico-docker
Requires=docker.service
After=docker.service
[Service]
User=root
EnvironmentFile=/etc/network-environment
PermissionsStartOnly=true
ExecStart=/usr/bin/calicoctl node --ip=${DEFAULT_IPV4} --detach=false
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable it at boot and start it:
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
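A sketch of the step 4 substitution plus a quick health check after step 6, again assuming the 172.18.8.101 master IP from this walkthrough:

# Step 4: point the template at the master (run before moving the file in step 5).
sed -i "s/<KUBERNETES_MASTER>/172.18.8.101/" network-environment

# After step 6: confirm the service came up and inspect recent logs.
sudo systemctl is-active calico-node.service
sudo journalctl -u calico-node.service -n 20 --no-pager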
Installing Kubernetes on the master
We will use the kubelet to start the master components.
- 1. Download and install the kubelet and kubectl binaries. If downloading through the firewall is difficult, both files can also be found inside the extracted kubernetes-v1.4.4.tar.gz package.
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/linux/amd64/kubectl
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet /usr/bin/kubectl
- 2. Install the kubelet systemd unit file and start the kubelet. Fetch kubelet.service:
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.3-docs/samples/kubernetes/master/kubelet.service
After editing, its contents are:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
Requires=docker.service
After=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
--register-node=false \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--pod-infra-container-image=shenshouer/pause-amd64:3.0 \
--cluster-dns=10.100.0.10 \
--cluster-domain=cluster.local \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
I changed the default gcr.io/google_containers/pause-amd64:3.0 to shenshouer/pause-amd64:3.0, which is pulled directly from hub.docker.com.
sudo systemctl enable /etc/systemd/kubelet.service
sudo systemctl start kubelet.service
- 3. Download and install the master manifest, which automatically starts the services the Kubernetes master needs:
sudo mkdir -p /etc/kubernetes/manifests
sudo wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest
The edited kubernetes-master.manifest:
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller
spec:
  hostNetwork: true
  volumes:
    - name: "etc-kubernetes"
      hostPath:
        path: "/etc/kubernetes"
    - name: ssl-certs-kubernetes
      hostPath:
        path: /etc/kubernetes/ssl
    - name: "ssl-certs-host"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "var-run-kubernetes"
      hostPath:
        path: "/var/run/kubernetes"
    - name: "etcd-datadir"
      hostPath:
        path: "/var/lib/etcd"
    - name: "usr"
      hostPath:
        path: "/usr"
    - name: "lib64"
      hostPath:
        path: "/lib64"
  containers:
    - name: etcd
      image: shenshouer/etcd:2.2.1
      command:
        - "/usr/local/bin/etcd"
        - "--data-dir=/var/lib/etcd"
        - "--advertise-client-urls=http://127.0.0.1:2379"
        - "--listen-client-urls=http://127.0.0.1:2379"
        - "--listen-peer-urls=http://127.0.0.1:2380"
        - "--name=etcd"
      volumeMounts:
        - mountPath: /var/lib/etcd
          name: "etcd-datadir"
    - name: kube-apiserver
      image: gcr.io/google_containers/kube-apiserver:ac3112fc470bc0f78a8f74feef1baa1f
      command:
        - /usr/local/bin/kube-apiserver
        - --v=4
        - --allow-privileged=true
        - --insecure-bind-address=0.0.0.0
        - --secure-port=443
        - --etcd-servers=http://127.0.0.1:2379
        - --service-cluster-ip-range=10.100.0.0/24
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
        - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
        - --client-ca-file=/etc/kubernetes/ssl/ca.pem
        - --logtostderr=true
      ports:
        - containerPort: 443
          hostPort: 443
          name: https
        - containerPort: 8080
          hostPort: 8080
          name: local
      volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
        - mountPath: /etc/kubernetes
          name: "etc-kubernetes"
        - mountPath: /var/run/kubernetes
          name: "var-run-kubernetes"
    - name: kube-controller-manager
      image: gcr.io/google_containers/kube-controller-manager:8c606bf21e2ea4ccdb21d3696fc45c52
      command:
        - /usr/local/bin/kube-controller-manager
        - --v=4
        - --address=127.0.0.1
        - --master=http://127.0.0.1:8080
        - --cluster-name=kubernetes
        - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --root-ca-file=/etc/kubernetes/ssl/ca.pem
        - --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
        - --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
        - --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap
      livenessProbe:
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10252
        initialDelaySeconds: 15
        timeoutSeconds: 1
      volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
    - name: kube-scheduler
      image: gcr.io/google_containers/kube-scheduler:07d41e4e30d78421ec9c9f2f2a2ce5d6
      command:
        - /usr/local/bin/kube-scheduler
        - --v=4
        - --address=127.0.0.1
        - --leader-elect
        - --master=http://127.0.0.1:8080
      livenessProbe:
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10251
        initialDelaySeconds: 15
        timeoutSeconds: 1
    - name: kube-proxy
      image: gcr.io/google_containers/kube-proxy:d09ef01c7206d2f1dcde5b28763ed7fe
      command:
        - /usr/local/bin/kube-proxy
        - --v=4
        - --master=http://127.0.0.1:8080
        - --proxy-mode=iptables
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
This starts the following images:
gcr.io/google_containers/kube-apiserver:ac3112fc470bc0f78a8f74feef1baa1f
gcr.io/google_containers/kube-controller-manager:8c606bf21e2ea4ccdb21d3696fc45c52
gcr.io/google_containers/kube-scheduler:07d41e4e30d78421ec9c9f2f2a2ce5d6
gcr.io/google_containers/kube-proxy:d09ef01c7206d2f1dcde5b28763ed7fe
These come from the tar files obtained by extracting kubernetes/server/kubernetes-server-linux-amd64.tar.gz inside the kubernetes-v1.4.4.tar.gz package; they are imported into Docker with docker load -i kube-apiserver.tar (and likewise for the other tars).
- 4. Check the processes with docker ps. After a short while you should see the etcd, apiserver, controller-manager, scheduler, and kube-proxy containers running (see the checks after this list).
Note: the containers take a while to start, and within China you will need to get around the GFW to pull them.
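If pulling from gcr.io is a problem, the same images can be loaded from the release tarball instead. A sketch; the tar paths below assume the standard layout of kubernetes-server-linux-amd64.tar.gz:

# Load the images extracted from the release tarball (optional, instead of pulling):
for t in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  sudo docker load -i kubernetes/server/bin/${t}.tar
done

# Once the containers are up, the insecure local port should answer:
curl http://127.0.0.1:8080/healthz
kubectl -s http://127.0.0.1:8080 get componentstatuses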
Setting up the nodes
The following steps must be performed on every node.
Configuring TLS
Each worker node needs three files: ca.pem, worker.pem, and worker-key.pem. We already generated ca.pem and ca-key.pem above; now a public/private keypair must be generated for every worker node.
- 1. Create worker-openssl.cnf with the following contents:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
- 2. Generate the TLS assets for the worker. This depends on the worker's IP address; ca.pem and ca-key.pem were generated in the previous section (a per-worker loop is sketched after this list).
# Export this worker's IP address.
export WORKER_IP=<WORKER_IPV4>
# Generate keys.
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
- 3. Copy the three files (ca.pem, worker.pem, worker-key.pem) to the worker node, e.g. with scp.
- 4. Place the files in the /etc/kubernetes/ssl directory and set appropriate permissions:
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem
# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
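Since step 2 has to be repeated for each worker, a small loop can generate all the keypairs in one pass. A sketch; the IPs 172.18.8.102 and 172.18.8.103 are assumptions taken from the Vagrantfile above:

for WORKER_IP in 172.18.8.102 172.18.8.103; do
  export WORKER_IP   # worker-openssl.cnf reads this via $ENV::WORKER_IP
  openssl genrsa -out worker-${WORKER_IP}-key.pem 2048
  openssl req -new -key worker-${WORKER_IP}-key.pem -out worker-${WORKER_IP}.csr \
    -subj "/CN=worker-key" -config worker-openssl.cnf
  openssl x509 -req -in worker-${WORKER_IP}.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out worker-${WORKER_IP}.pem -days 365 \
    -extensions v3_req -extfile worker-openssl.cnf
done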
Configuring the kubelet on the nodes
- 1. With the certificates in place, create a kubeconfig file for authentication at /etc/kubernetes/worker-kubeconfig.yaml, replacing <KUBERNETES_MASTER> with the master IP (a quick check follows the file):
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://<KUBERNETES_MASTER>:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
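To confirm the certificates and master address work before the kubelet relies on them, a quick check from the node. A sketch; 172.18.8.101 is the master IP assumed throughout:

curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/worker.pem \
     --key /etc/kubernetes/ssl/worker-key.pem \
     https://172.18.8.101:443/version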
Installing Calico on the nodes
It is important to install Calico before Kubernetes. We will again use the calico-node.service systemd unit:
- 1. Install calicoctl:
wget https://github.com/projectcalico/calico-containers/releases/download/v0.22.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
- 2. Pull the calico/node image:
docker pull calico/node:v0.22.0
- 3. Download the network-environment template:
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.3-docs/samples/kubernetes/node/network-environment-template
- 4. Edit network-environment to match this node:
- Replace <DEFAULT_IPV4> with the node's IP.
- Replace <KUBERNETES_MASTER> with the master's IP or hostname.
- 5. Move network-environment to /etc:
sudo mv -f network-environment /etc
- 6. Install the calico-node service:
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.3-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
- 7. Install the Calico CNI plugins:
sudo mkdir -p /opt/cni/bin/
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam
sudo chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
- 8. Create the CNI network configuration file, which tells Kubernetes to create a network named calico-k8s-network and to use the Calico plugins for it. Create /etc/cni/net.d/10-calico.conf with the contents below, replacing <KUBERNETES_MASTER> with the master IP (a substitution sketch follows the file); this is required on every node:
# Make the directory structure.
mkdir -p /etc/cni/net.d
# Make the network configuration file
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "<KUBERNETES_MASTER>:6666",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    }
}
EOF
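# A sketch: substitute the master IP (172.18.8.101 in this walkthrough) and
# check that the result is still valid JSON.
sed -i "s/<KUBERNETES_MASTER>/172.18.8.101/" /etc/cni/net.d/10-calico.conf
python3 -m json.tool /etc/cni/net.d/10-calico.conf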
At this point only a single network has been created, and the kubelet will use it by default.
- 9. Verify that Calico started correctly:
calicoctl status
This should show that Felix (Calico's per-node agent) is running and list the BGP status for the master and every configured node, with the "Info" column reading "Established":
$ calicoctl status
calico-node container is running. Status: Up 15 hours
Running felix version 1.3.0rc5
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| Peer address | Peer type | State | Since | Info |
+---------------+-------------------+-------+----------+-------------+
| 172.18.203.41 | node-to-node mesh | up | 17:32:26 | Established |
| 172.18.203.42 | node-to-node mesh | up | 17:32:25 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
If "Info" shows "Active" or any other value, Calico is having trouble connecting to that node. Check that the IP settings are correct and that Calico is using the right local IP (the one set in the network-environment file); the commands below can help.
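A quick way to compare the configured address against the host's actual interfaces:

grep DEFAULT_IPV4 /etc/network-environment
ip -4 addr show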
Installing Kubernetes on the nodes
- 1. Download the kubelet binary:
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet
- 2. Install the kubelet systemd unit:
# Download the unit file.
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.3-docs/samples/kubernetes/node/kubelet.service
# Enable and start the unit files so that they run on boot
sudo systemctl enable /etc/systemd/kubelet.service
sudo systemctl start kubelet.service
- 3. Download the kube-proxy manifest:
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.3-docs/samples/kubernetes/node/kube-proxy.manifest
- 4. Replace <KUBERNETES_MASTER> with the master's IP address and move the manifest into place (see the sketch after this list):
sudo mkdir -p /etc/kubernetes/manifests/
sudo mv kube-proxy.manifest /etc/kubernetes/manifests/
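A sketch of step 4's substitution, followed by a check from the master that the node registered; the node names are the hostnames from the Vagrantfile above, an assumption:

# On the node, before moving the manifest:
sed -i "s/<KUBERNETES_MASTER>/172.18.8.101/" kube-proxy.manifest

# On the master, after the node's kubelet has been running for a bit:
kubectl -s http://127.0.0.1:8080 get nodes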
Configuring kubectl
Remote access
Installing the DNS add-on
Installing the Kubernetes UI add-on (optional)