Detailed Steps for Installing a K8s Cluster
These steps were carried out on a local machine, with three CentOS VMs created in VirtualBox.
Linux image installation
- Use Oracle VM VirtualBox (download it from the official VirtualBox site).
- Install the OS image. This guide uses CentOS 7; the ISO I used: https://pan.baidu.com/s/1sqe76HH82BEl_M_ZEXFLlw  extraction code: 97e7
Each VM must have >= 2 CPU cores; otherwise k8s cannot be deployed.
Three test machines: master, node1, and node2.
- Set the master's memory to 2 GB and each node's to 1 GB, give every VM 2 CPU cores, and keep the other settings at their defaults.
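If you prefer the command line to the VirtualBox GUI for these settings, VBoxManage can adjust an existing (powered-off) VM; a minimal sketch, assuming the VM is named master:
VBoxManage modifyvm master --memory 2048 --cpus 2
VBoxManage showvminfo master | grep -E "Memory size|Number of CPUs"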
Configure dual NICs / a static IP
A VM's IP address can change as the network changes, so configure a static IP so that it stays the same across reboots. If you are on a private cloud, or on a physical machine with a fixed IP, skip this step.
- Configure dual network adapters for the VirtualBox VM
- Set a static IP
vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
Set BOOTPROTO to static and add an IPADDR entry:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
#BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=08012b4a-d6b1-41d9-a34d-e0f52a123e7a
DEVICE=enp0s3
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.110
Restart the network service:
systemctl restart network
Verify connectivity; if both pings succeed, the configuration is correct. From the VM:
ping baidu.com
From the host machine:
ping 192.168.56.110
Check the local IP with:
ip addr
- VirtualBox network configuration
The 192.168.56.* subnet used in the static-IP step above is configured in VirtualBox.
In the VirtualBox menu bar (top left): File > Host Network Manager.
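The host-only network can also be created and attached from the command line; a sketch, assuming the new adapter ends up named vboxnet0 and the VM is named master (adjust both names as needed):
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
VBoxManage modifyvm master --nic2 hostonly --hostonlyadapter2 vboxnet0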
- If the VM cannot reach the internet
vim /etc/resolv.conf
Set the nameserver to 114.114.114.114.
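The file only needs a single nameserver line, for example:
nameserver 114.114.114.114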
Configure the Linux yum repo and install Docker
- Download the Aliyun base repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
- Install some utilities (several are needed by Docker)
yum install -y yum-utils device-mapper-persistent-data lvm2 wget
- Add the Docker repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
Or use the command below; of these two repo URLs, one is Aliyun's mirror and the other is the official one.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- List the available Docker versions
yum list docker-ce --showduplicates | sort -r
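To install a specific version from that list, append it to the package name; a sketch (the version string below is only an illustration, substitute one shown by the command above):
yum -y install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7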
- Install Docker
yum -y install docker-ce
- Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
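Optionally confirm that the daemon is up and running:
systemctl status docker
docker version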
Preparation for deploying k8s
- Install tools
yum install -y net-tools vim ntp
- Configure host entries for the master and node machines
vim /etc/hosts
192.168.56.110 master
192.168.56.111 node1
192.168.56.112 node2
- Set the hostnames to master, node1, and node2 respectively; on the master:
hostnamectl set-hostname master
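On node1 and node2, run the matching command instead:
hostnamectl set-hostname node1
hostnamectl set-hostname node2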
- Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux
setenforce 0
To make it permanent:
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
i.e. edit /etc/selinux/config with vim and set SELINUX=disabled.
- Disable swap
swapoff -a
To make it permanent:
vim /etc/fstab
Comment out the line: /dev/mapper/centos-swap swap swap defaults 0 0
Verify with free -m (the Swap line should now show 0).
- Configure the Aliyun yum repo for Kubernetes
[root@master ~]# cat >>/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Configure Docker's daemon.json and change Docker's cgroup driver to systemd (the kubelet and Docker must use the same driver)
vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.cn-hangzhou.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
Reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
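You can verify that the new cgroup driver is in effect; the output should report systemd:
docker info | grep -i "cgroup driver"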
- Sync the system time
ntpdate 0.asia.pool.ntp.org
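Since the ntp package was installed earlier, you can optionally keep the clock synchronized continuously instead of relying on a one-off ntpdate:
systemctl enable ntpd
systemctl start ntpd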
- Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.22.4-0 kubeadm-1.22.4-0 kubectl-1.22.4-0
systemctl enable kubelet
- Pass bridged IPv4 traffic to the iptables chains
modprobe br_netfilter
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
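The k8s.conf settings only load automatically at boot; to apply them right away, run:
sysctl --system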
Deploy the master
- Initialize the control-plane node
kubeadm init --kubernetes-version=1.22.4 \
--apiserver-advertise-address=192.168.56.110 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
If you see output like the following, the initialization succeeded:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.110:6443 --token zlidu2.5f2gro8y9olfwta5 \
--discovery-token-ca-cert-hash sha256:8827883a07b14b4f28a076a287eaf44548b4b2931205c96cd4b2f6eb0894b195
Following the prompt:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Set the KUBECONFIG environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
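A quick check that kubectl can reach the API server:
kubectl cluster-info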
- Install the Flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If that URL is unreachable, use this flannel.yaml, in which the image addresses have already been replaced:
Link: https://pan.baidu.com/s/1IKuJF9dxJ-6XyeEsVZvqSw
Extraction code: apyq
kubectl apply -f flannel.yaml
- Success
[root@master data]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 10m v1.22.4
[root@master data]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7f6cbbb7b8-qn99s 1/1 Running 0 10m
coredns-7f6cbbb7b8-vfkzx 1/1 Running 0 10m
etcd-master 1/1 Running 0 11m
kube-apiserver-master 1/1 Running 0 11m
kube-controller-manager-master 1/1 Running 0 11m
kube-flannel-ds-79lth 1/1 Running 0 117s
kube-proxy-j6j8g 1/1 Running 0 10m
kube-scheduler-master 1/1 Running 0 11m
Deploy the nodes
The node servers must first run all of the preparation steps done on the master (everything before kubeadm init).
- Copy admin.conf from the master node to the node
scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes/
- Set the KUBECONFIG environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
- Join the master
kubeadm join 192.168.56.110:6443 --token zlidu2.5f2gro8y9olfwta5 \
--discovery-token-ca-cert-hash sha256:8827883a07b14b4f28a076a287eaf44548b4b2931205c96cd4b2f6eb0894b195
- Success
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 50m v1.22.4
node1 Ready <none> 77s v1.22.4
- If you did not save the token or it has expired, run this on the master node:
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.56.110:6443 --token lroqco.hofybdx4nqpz192y --discovery-token-ca-cert-hash sha256:8827883a07b14b4f28a076a287eaf44548b4b2931205c96cd4b2f6eb0894b195
- Repeat the same steps on the other nodes.