Kubernetes Series Part 6: Installing Kubernetes v1.10.0 with IPVS-mode kube-proxy
I. Introduction
kubeadm is the official Kubernetes tool for quickly deploying a cluster; it simplifies deployment by running the Kubernetes control-plane components as containers.
Starting with Kubernetes 1.8, kube-proxy supports an IPVS mode in addition to the existing iptables and userspace modes. In IPVS NAT mode, requests to a Kubernetes Service are forwarded from the service's virtual IP (cluster IP) to a pod IP.
This article enables IPVS-mode kube-proxy on top of Kubernetes v1.10.0.
Reposted from https://blog.csdn.net/cloudvtech
II. Preparation
1. Install the CentOS operating system
cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
2. Apply the required base configuration
2.1 Stop the firewall and disable SELinux enforcement: systemctl stop firewalld; systemctl disable firewalld; setenforce 0
2.2 Edit /etc/selinux/config and set SELINUX=disabled
2.3 Configure kernel parameters in /etc/sysctl.d/k8s.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
sysctl -p /etc/sysctl.d/k8s.conf
Load the IPVS kernel modules:
modprobe ip_vs
modprobe ip_vs_rr
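Modules loaded with modprobe do not survive a reboot. One way to make the list persistent is a systemd modules-load file; the sketch below writes to /tmp purely for illustration (the real path would be /etc/modules-load.d/ipvs.conf), and ip_vs_wrr and ip_vs_sh are optional extra schedulers, not required by the rr scheduler used later in this article:

```shell
# Sketch: persist the IPVS module list for systemd-modules-load.
# /tmp is used here for illustration; the real path is /etc/modules-load.d/ipvs.conf.
conf=/tmp/ipvs.conf
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh > "$conf"
cat "$conf"
```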
2.4 Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
3. Install the Kubernetes packages
yum install -y kubelet kubeadm kubectl
3.1 Adjust the kubelet startup parameters as needed
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
#Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_PROXY_ARGS=--proxy-mode=ipvs --ipvs-min-sync-period=5s --ipvs-sync-period=5s --ipvs-scheduler=rr"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
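The --ipvs-scheduler=rr flag above selects round-robin scheduling. Conceptually (this is an illustration, not kube-proxy code, and the pod IPs are invented), rr hands successive connections to the backend pod IPs in turn:

```shell
# Illustration of round-robin backend selection; the pod IPs are invented.
backends="10.244.1.2 10.244.2.3"
n=2
for c in 0 1 2 3; do
  i=$((c % n + 1))                      # 1-based index into the backend list
  ip=$(echo $backends | cut -d' ' -f$i)
  echo "connection $c -> $ip"
done
```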
3.2 Edit the kubeadm configuration file (see section III below)
4. Shut down this VM and make several clones of it to serve as minion (worker) nodes
III. Configure and install the Kubernetes master and minion nodes
1. Start one VM as the master node
2. Edit the kubeadm configuration file kubeadm.config:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.166.101
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: "v1.10.0"
kubeProxy:
  config:
    mode: "ipvs"
3. Initialize the master node with kubeadm
kubeadm init --config ./kubeadm.config
[init] Using Kubernetes version: v1.10.0
......
4. Run the commands printed at the end of the init output to configure kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. Install the flannel CNI plugin
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
This runs a flannel DaemonSet pod on every node.
6. Run the join command on each worker node:
kubeadm join 192.168.166.101:6443 --token dtvlg9.798fbe57awyzjzfx --discovery-token-ca-cert-hash sha256:a3406c544acb65954339ab9d6b9801673cb646dd17fa162fbf8aa1cd239b2b99
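The --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's DER-encoded public key. The derivation can be reproduced with openssl; the sketch below generates a throwaway CA so it can run anywhere, whereas on the master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate purely for illustration
# (on the master, use /etc/kubernetes/pki/ca.crt instead).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

# Hash the DER-encoded public key, which is how the discovery hash is formed.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```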
7. Verify the installation
Note: to reset the whole cluster, run the following on the master and on every node, then repeat the initialization steps:
kubeadm reset
rm -rf /etc/kubernetes/pki/ca.crt
rm -rf /etc/kubernetes/kubelet.conf
rm -rf /etc/kubernetes/manifests
To switch the kube-proxy mode, you must modify kubeadm.config, then kubeadm reset and re-init the entire cluster.
IV. Deploy an nginx Deployment
1. Edit the deployment file nginx.yml:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
2. Deploy it
kubectl apply -f nginx.yml
deployment.apps "nginx-deployment" created
3. Expose the nginx Service
kubectl expose deployment/nginx-deployment
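kubectl expose builds a ClusterIP Service from the deployment's selector and container port. The equivalent manifest would look roughly like this (a sketch of what gets generated, not its exact output):

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment   # expose names the Service after the deployment
spec:
  selector:
    app: nginx             # taken from the deployment's selector
  ports:
  - port: 80
    targetPort: 80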
4. Inspect IPVS, which NATs traffic to the backends
The cluster service VIPs are bound to the kube-ipvs0 virtual interface on every node.
NIC on node1:
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether a2:e0:de:f0:46:e4 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.99.190.144/32 brd 10.99.190.144 scope global kube-ipvs0
valid_lft forever preferred_lft forever
NIC on node2:
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether 9e:65:33:4f:c0:7a brd ff:ff:ff:ff:ff:ff
inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.99.190.144/32 brd 10.99.190.144 scope global kube-ipvs0
valid_lft forever preferred_lft forever
Note that the kube-ipvs0 interface stays in state DOWN; it exists only to hold the VIPs.
With IPVS enabled, iptables no longer carries the load-balancing rules.
One point worth spelling out: every packet scheduled from a host into a pod via IPVS also passes through a MASQUERADE rule, so it undergoes both DNAT (by IPVS) and SNAT (by the iptables MASQUERADE rule). Because of the SNAT, the pod's reply is guaranteed to return through the IPVS director on the original node. With flannel's help, the Kubernetes IPVS solution therefore satisfies the IPVS NAT-mode requirement that the director and the backends share a LAN; in fact, this setup behaves more like IPVS FULLNAT mode.
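The double translation described above can be traced step by step; all addresses in this sketch are invented for illustration:

```shell
# Trace of the two rewrites a packet undergoes on its way from a client
# process on a node to a pod on another node (addresses are invented).
client="192.168.166.102:40000"   # source socket on the node
vip="10.99.190.144:80"           # service cluster IP (bound on kube-ipvs0)
pod="10.244.1.5:80"              # backend chosen by the ipvs scheduler
snat="10.244.0.0:40000"          # node's flannel address after MASQUERADE

echo "original:          $client -> $vip"
echo "after IPVS DNAT:   $client -> $pod"
echo "after MASQUERADE:  $snat -> $pod"
echo "reply path:        $pod -> $snat (back through the same node's IPVS)"
```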
IPVS statistics after accessing the nginx service via its cluster IP:
V. Configure a NodePort
1. NodePort Service file nodeport.yaml:
kind: Service
apiVersion: v1
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: nginx
2. Apply it
kubectl apply -f nodeport.yaml
3. Check IPVS on k8s-node1
VI. Related code
https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/proxier.go
......