Deploying a Kubernetes Cluster -- Version 1.23.1
1. Environment Information and Preparation
1.1 Environment Information
hostname | IP | spec | role |
---|---|---|---|
k8s-node1 | 192.168.43.11 | 2C4G | master, registry |
k8s-node2 | 192.168.43.12 | 2C4G | node |
k8s-node3 | 192.168.43.13 | 2C4G | node |
2. Environment Preparation (All Nodes)
2.1 Basic Configuration
echo "关闭防火墙"
systemctl stop firewalld
systemctl disable firewalld
echo "关闭selinux"
sed -i '7s/enforcing/disabled' /etc/selinux/config
echo "关闭swap"
swapoff -a
sed -i '/ swap /s/^/#/' /etc/fstab
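As an optional sanity check (not part of the original steps), the three settings can be verified like this:
systemctl is-active firewalld   # should print "inactive"
getenforce                      # should print "Permissive" now, "Disabled" after a reboot
free -m | grep -i swap          # the Swap line should show all zeros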
2.2 Configure passwordless SSH from the master to the worker nodes
To make it easier to distribute files to the worker nodes, set up key-based login:
ssh-keygen
ssh-copy-id k8s-node2
ssh-copy-id k8s-node3
2.3 Configure the /etc/hosts file
Add hostname-to-IP mappings for all three nodes on every node, as sketched below.
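A minimal sketch of the entries to append on every node, taken from the host table in section 1.1 (adjust if your IPs differ):
cat >> /etc/hosts << EOF
192.168.43.11 k8s-node1
192.168.43.12 k8s-node2
192.168.43.13 k8s-node3
EOF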
2.4 Configure YUM repositories
Configure the docker-ce YUM repo and install Docker on all nodes; install the private image registry (docker-distribution) only on k8s-node1, the other nodes do not need it.
yum remove docker* -y
yum install -y yum-utils   # provides yum-config-manager
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's/download.docker.com/mirrors.aliyun.com\/docker-ce/g' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum -y install docker-ce                     # all nodes
yum -y install docker-distribution           # k8s-node1 only: provides the private registry on port 5000
systemctl enable --now docker-distribution   # k8s-node1 only
mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"registry-mirrors":
["https://hub-mirror.c.163.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries":["192.168.1.100:5000"]
}
EOF
systemctl daemon-reload
systemctl enable docker --now
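Optionally, verify that Docker picked up the settings from daemon.json (the cgroup driver and the insecure registry):
docker info | grep -i 'cgroup driver'          # expect: Cgroup Driver: systemd
docker info | grep -iA1 'insecure registries'  # 192.168.43.11:5000 should be listed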
2.5 Configure the Kubernetes YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.6 Kernel Configuration
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
sysctl --system
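The modprobe above does not persist across reboots; a small modules-load drop-in (standard practice, not in the original steps) keeps br_netfilter loaded:
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF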
3. Install Kubernetes
3.1 Install the packages
yum install -y kubelet kubeadm kubectl
If no version is specified, the latest release is installed by default; here we install the latest (v1.23.1 at the time of writing).
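If you would rather pin the packages to the 1.23.1 release from the title than take whatever is latest, something like the following should work (the versioned package names are assumed to be available in the Aliyun mirror); either way it is worth enabling kubelet so it starts on boot:
yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1
systemctl enable --now kubelet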
Enable tab completion for the kubectl and kubeadm commands (it is not enabled by default):
kubectl completion bash >/etc/bash_completion.d/kubectl
kubeadm completion bash >/etc/bash_completion.d/kubeadm
# log out of the current shell and open a new one for completion to take effect
3.2 Download the required images
List the images that are needed:
kubeadm config images list
[root@k8s-node1 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
Download the images and re-tag them to the k8s.gcr.io names:
for IMAGE in `kubeadm config images list |awk -F / '{print $2}'`;
do
docker pull registry.aliyuncs.com/google_containers/$IMAGE
docker tag registry.aliyuncs.com/google_containers/$IMAGE k8s.gcr.io/$IMAGE
done
Note that one of the images cannot be re-tagged correctly by the loop above and has to be handled manually; run docker images to verify the results.
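The image in question is coredns, whose repository has an extra path level (k8s.gcr.io/coredns/coredns). A sketch of the manual fix, assuming the Aliyun mirror publishes it as a flat coredns:v1.8.6 image:
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6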
3.3 Master: Cluster Initialization
kubeadm init \
--apiserver-advertise-address=192.168.43.11 \
--kubernetes-version v1.23.1 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Explanation of the command -----------------------------------------------
$ kubeadm init \                                # initialize the control plane
  --apiserver-advertise-address=xx.xx.xx.xx \   # the apiserver address, i.e. the master address
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \   # use the Aliyun mirror; since the images were already pulled locally this flag can be omitted (specifying it can also make the init fail because of the image naming)
  --kubernetes-version v1.23.1 \                # the Kubernetes version
  --pod-network-cidr=10.244.0.0/16              # the pod network CIDR
The cluster can also be initialized from a configuration file.
# Generate the default init configuration file
kubeadm config print init-defaults >init-defaults.conf
Contents of init-defaults.conf:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# image repository
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
# Kubernetes version
kubernetesVersion: 1.23.0
# pod/service network settings
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Before using this file, adjust at least advertiseAddress, the node name, and kubernetesVersion to match your environment (1.23.1 here). Pull the images using the configuration file:
kubeadm config images pull --config=init-defaults.conf
# Initialize the cluster
kubeadm init --config=init-defaults.conf
After initialization succeeds, copy the admin kubeconfig into your regular user's home directory; the exact commands are included in the success output and can be copied directly:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.8.8:6443 --token qhce4u.9fs411ep2u75bnaz \
	--discovery-token-ca-cert-hash sha256:3e77b04864b58b83c1e40ed7e33af3f092466ce3504059ebc8fb5930f562251f
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Save the join command from the end of the output; it is needed when the worker nodes join the cluster.
kubeadm join 10.0.8.8:6443 --token qhce4u.9fs411ep2u75bnaz \
--discovery-token-ca-cert-hash sha256:3e77b04864b58b83c1e40ed7e33af3f092466ce3504059ebc8fb5930f562251f
Verify the master node:
[root@k8s-node1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane,master 4m53s v1.23.1
[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-64897985d-l67vh 1/1 Running 0 5m8s
coredns-64897985d-rfnpz 1/1 Running 0 5m8s
etcd-k8s-node1 1/1 Running 2 5m21s
kube-apiserver-k8s-node1 1/1 Running 2 5m21s
kube-controller-manager-k8s-node1 1/1 Running 2 5m21s
kube-proxy-h4qjm 1/1 Running 0 5m7s
kube-scheduler-k8s-node1 1/1 Running 2 5m21s
If initialization fails, run kubeadm reset -f to restore the host to a clean state and then initialize again.
Install the network plugin (flannel)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Edit kube-flannel.yml
128       "Network": "172.30.0.0/16",   # this address must match the --pod-network-cidr used during initialization
# Apply the manifest
kubectl apply -f kube-flannel.yml
# Check the pods
[root@k8s-node1 flannel]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-64897985d-l67vh 1/1 Running 0 14m 172.30.0.2 k8s-node1 <none> <none>
coredns-64897985d-rfnpz 1/1 Running 0 14m 172.30.0.3 k8s-node1 <none> <none>
etcd-k8s-node1 1/1 Running 2 14m 10.0.8.8 k8s-node1 <none> <none>
kube-apiserver-k8s-node1 1/1 Running 2 14m 10.0.8.8 k8s-node1 <none> <none>
kube-controller-manager-k8s-node1 1/1 Running 2 14m 10.0.8.8 k8s-node1 <none> <none>
kube-flannel-ds-rxhqb 1/1 Running 0 20s 10.0.8.8 k8s-node1 <none> <none>
kube-proxy-h4qjm 1/1 Running 0 14m 10.0.8.8 k8s-node1 <none> <none>
kube-scheduler-k8s-node1 1/1 Running 2 14m 10.0.8.8 k8s-node1 <none> <none>
At this point the master node is fully configured.
4. Join the Worker Nodes to the Cluster
The environment configuration is the same as on the master: install Docker and the kubelet/kubeadm/kubectl packages. Then join the cluster with the token printed during the master initialization; by default the token is valid for 24 hours.
kubeadm join 192.168.43.11:6443 --token qhce4u.9fs411ep2u75bnaz \
--discovery-token-ca-cert-hash sha256:3e77b04864b58b83c1e40ed7e33af3f092466ce3504059ebc8fb5930f562251f
If the token has expired, a new one can be created:
kubeadm token create --print-join-command         # create a new join command
kubeadm token create --ttl 0 --print-join-command # create a token that never expires
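If you still have a valid token but lost the CA certificate hash, it can be recomputed on the master with the standard openssl recipe from the kubeadm documentation:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'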
Troubleshooting
1. If the coredns and kube-flannel pods do not come up after a node joins the cluster, the usual cause is that the worker node has not pulled those images. Either re-download and re-tag them on the node the same way as before, or push the images from the master to the private registry and pull them from there on the node (the private registry was installed on the master in section 2.4).
# On the master node
for IMAGE in $(kubeadm config images list | awk -F / '{print $2}')
do
  docker tag k8s.gcr.io/$IMAGE 192.168.43.11:5000/$IMAGE
  docker push 192.168.43.11:5000/$IMAGE
done
# As before, coredns has a nested path (k8s.gcr.io/coredns/coredns) and must be tagged and pushed by hand.
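To confirm the images actually landed in the private registry, its v2 API can be queried (optional check):
curl http://192.168.43.11:5000/v2/_catalog   # returns a JSON list of the pushed repositories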
# On the worker node
for IMAGE in $(kubeadm config images list | awk -F / '{print $2}')
do
  docker pull 192.168.43.11:5000/$IMAGE
  docker tag 192.168.43.11:5000/$IMAGE k8s.gcr.io/$IMAGE   # tag back to the k8s.gcr.io names the pods reference
done
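A quick check that the worker node now holds the images under the names the cluster expects:
docker images | grep k8s.gcr.io   # kube-proxy, pause, coredns, etc. should be listed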
Check the pods again after a short while and they will be Running; failed containers are restarted automatically.
kubectl get pods -n kube-system -o wide
At this point the Kubernetes cluster is up and running.